Jan 13 2016

Developer experience is a primary concern for the Drupal 8 version of our CRM.

We thought we could improve the experience of developers contributing to the project. We had noticed that for Drupal 8 much of the community was moving development hosting to GitHub, including projects such as Drupal Commerce and even core bits of Drupal.

So we did some investigating and decided to join them. We thought it would be helpful to share some of our thoughts and reasons, though we are by no means authorities on this!

Getting Started

Working with GitHub is really nice. Anyone can come along and easily fork the main repository; that is possible on Drupal.org too, but much easier on GitHub.

No Special Users

We have a principle that "no individual is special". On drupal.org, module maintainers get access to more tools than everyone else. On GitHub, everyone is basically the same; in theory, someone's fork may become a bigger deal than the original. Everyone has the same tools, so whatever we do to make our developers' lives easier is shared by everyone else.

We found that developers who were maintainers, with access to drupal.org's git, had a much nicer experience than people who had to just download the source code or set up their own git mirrors.

Pull Requests

Pull requests are really nice. They are essentially a friendlier way of doing patches: you can just click a few buttons and copy and paste the result into the issue queue. With Dreditor a patch is not a big deal either, but GitHub keeps track of incremental changes to a patch much more effectively, especially if multiple people are working on it.

  • Although a pull request does require giving others access to your fork of a project, so we have found that sometimes patches are easier.
  • Although if multiple people are working on a pull request, they can fork the pull request owner's repository and open a pull request against that first!

Drupal.org

We definitely still use Drupal.org as the issue queue and turn off all of GitHub's issue-tracking features. We then reference issue numbers in as many commits as possible, and certainly in all pull requests (we post each pull request in its issue).

Every so often, one of the committers pushes the "main repository" (or any repository) to the git repo on drupal.org.

TravisCI

We also use Travis CI to handle tests, and we will follow up with a more detailed post about how we handle testing.

Dec 09 2015
DrupalCamp is a distribution for spinning up camp sites. Recently, as always :), I was part of the site-building team at a local camp in India, and after working with the team I found that all the camp sites have a similar content architecture (content types, listing pages, etc.); only their themes and designs differ.
While sprinting at DrupalCamp Pune, we decided to build a site for DrupalCamp Delhi, and we realised we needed a common code/feature base for camp sites. That is where we started building the distribution in D8. When we started this installation profile, Drupal 8 was at beta 16; we are hoping to make the first release candidate soon.
DrupalCamp ships with two contributed modules, fb_likebox and twitter_block, which provide the Facebook and Twitter blocks in the sidebar.
This installation profile provides the following configuration:
  1. Content types:
    1. Basic Page: for creating basic pages (About Us, etc.) on the site.
    2. Session: used for creating/submitting sessions.
    3. Sponsor: used to keep information about the sponsors.
  2. Listings (views):
    1. Accepted Sessions
    2. Proposed Sessions
    3. Sponsors
    4. Students
  3. Social sharing buttons:
    1. Facebook
    2. Twitter
  4. Blocks:
    1. Facebook
    2. Twitter

To build a camp site, we just use the drupalcamp profile and theme it.
Challenges that came up while building this profile:
  1. Contributed modules: stable releases of the contributed modules were not out, so we ported the fb_likebox and twitter_block modules to Drupal 8 and created the stable release of fb_likebox. Special thanks to the maintainer baekelandt for the quick response.
  2. There was no tool available to provide boilerplate code, so we started by forking the Standard profile and used config-export to export the configuration. Now, with Drupal Console 0.9.9, the generate:profile command is available to generate the boilerplate code. We used Phing to automate the testing, as Sam described in his blog.

Thanks to the entire Drupalcamp Distribution Team!
What's Next?
Please join hands to release the stable version of twitter_block, so that we can include it in the drupalcamp distribution.

Mar 07 2015

Previously we read how to become a webmaster on drupal.org. Now I have become a git administrator on drupal.org, so I am writing a blog post so that others can benefit and learn how to become a git administrator too.

In simple words: "Start Contributing". Git administrator privileges are granted to users with a proven record of contributions in the Project Applications issue queue. A solid history of consistent contributions on drupal.org is a must to be considered for an elevated role.

How to start contributing & where you can contribute:

  1. Join the code review group.
  2. Read "How to review full project applications".
  3. Some helpful tools for reviewing:
  4. Learning sources:
  5. If you find any problem while contributing, just comment on this post; if you need an immediate answer, try to find one of the git administrators in the #drupal-codereview IRC channel on Freenode.

Benefits of becoming a git administrator:

  1. You will see a new challenging case in every new project application.
  2. Your knowledge of Drupal APIs will sharpen.
  3. Many more...

I would encourage you to learn more about that process and join the group of reviewers.
Next article: A guide to reviewing project applications
Feb 23 2015

Security of a Drupal website is an important concern for site owners and site developers. This blog post contains my presentation from Drupal Camp Mumbai, intended for Drupalers who want to avoid security loopholes while writing code or architecting solutions. We delved into common security issues that ail custom code, with both vulnerable and secure code snippets. It is mostly based on my encounters and experience from doing 50+ project application reviews, and it is also a good guideline for new contributors.

[embedded content]

Hack Proof Your Drupal Site from Naveen Valecha

Next article: A guide to reviewing project applications.


Jan 28 2015

Since I recently became a webmaster on drupal.org, a few people have asked me how to become a webmaster and where to start, so I am writing a blog post that others can benefit from.
In simple words: "Start Contributing". Webmaster privileges aren't granted lightly, for obvious reasons. A solid history of consistent contributions on drupal.org is a must to be considered for an elevated role.

How to start contributing & where you can contribute:

  1. Help out in the content queue, especially reviewing service listing requests; see the marketing guidelines for reviewing service listing requests.
  2. Review training listing requests; see the marketing guidelines for reviewing training listing requests.
  3. Help out in the webmasters queue, e.g. with textual improvement requests.
  4. Depending on your skills, you can also contribute to Drupal.org customizations.
  5. If you find any problem while contributing to the above sections, just comment on this post; if you need an immediate answer, try to find one of the webmasters in the #drupalorg IRC channel on Freenode.
If you need guidance, join the #drupalorg IRC channel and find dddave, lizzjoy, tvn or myself. A general overview of the various ways to contribute to Drupal.org can be found in the documentation.
If you have good coding skills, you are more than welcome to help review project applications. I encourage you to learn more about that process and join the group of reviewers.
Next article: We will look at how to become a git administrator on drupal.org.

Jan 23 2015

I was looking for a good English-speaking training course website for my cousin, to help him become a fluent English speaker.


On opening the homepage, I found a "Free trial" section.

I did what everyone does: clicked on the free trial link and created my account. I was happy to get the five free audio English lessons.
I clicked on one of my lesson links, landed on the page, and found the download link for the audio file.



I copied and pasted the URL into the browser and downloaded the file: https://www.dailystep.com/en/download/file/fid/13531. Then I changed the fid in the URL and tried to download some random file by file ID. To my shock, it worked! I looked around to see which Drupal module provides this route; it turned out to be the Download File module, and after checking I assumed it was a permissions problem.
I found a similar permission problem on drupal.org as well: https://www.drupal.org/node/2394993
Trust is a major element of the security of any web application. We should be careful about which permissions we assign to which user roles.
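
In Drupal 7 terms, the fix is to enforce an access check before the file is served; for files delivered through the private file system, that check belongs in hook_file_download(). Here is a sketch of what the missing check could look like - the permission string and the ownership helper are hypothetical placeholders, not APIs of the Download File module:

<?php
/**
 * Implements hook_file_download().
 *
 * A sketch (Drupal 7): only serve a lesson file to a user entitled to it.
 */
function mymodule_file_download($uri) {
  $files = file_load_multiple(array(), array('uri' => $uri));
  if (!$file = reset($files)) {
    // Not a managed file we know about; stay neutral so other modules
    // can decide.
    return NULL;
  }
  if (!user_access('download own lessons') || !mymodule_user_owns_lesson($file)) {
    // Deny access; the menu handler turns -1 into a 403.
    return -1;
  }
  // Allow the download by returning the headers used to serve the file.
  return file_get_content_headers($file);
}
?>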
Note: I published this post after suggesting the fix to the site owner, Jane.
Nov 10 2014

I recently joined QED42, Pune. The Drupal 8 module port code sprint was scheduled to take place at QED42 on 8 November 2014 (https://groups.drupal.org/node/448748), but it was later postponed and I was unaware of that.
So I decided to port a module anyway, picking a random project from the code reviews I had done for project applications.
I picked Anonymous Suggestion Box, ported its code to Drupal 8, and created the meta issue in the module's issue queue: https://www.drupal.org/node/2371803

After that I still had time left for writing code, so I picked another module, jQuery Carousel, and ported half of its code to Drupal 8: https://www.drupal.org/node/2371855
If anyone wants to join hands in porting this module, tweet at me on Twitter. Let's finish it before this weekend, and we will take another one for next weekend.

I am waiting for the postponed sprint, to meet #pune Drupalers there.

Apr 28 2014

Sometimes you want to license files without people needing to purchase them. Even using coupon codes to make products free still requires them to be purchased through the Commerce Checkout system.

This is fine for physical products where you still want email and address details of potential future clients.

However, when it comes to files, users need an account to access their files, so chances are you already have all their details. And there is no shipping required, so why make them go through the checkout process just to get a license for a free file? (Seriously, if you have reasons, comment!)

Here is a snippet of how to generate a file license for a user:
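
Something along these lines does it with the Drupal 7 Entity API (a sketch: the 'file' bundle, the product property, and the COMMERCE_LICENSE_ACTIVE status constant are assumptions to verify against your commerce_license / commerce_file setup):

<?php
// Sketch: grant a user a license for a file product without a checkout.
$license = entity_create('commerce_license', array(
  'type' => 'file',                      // assumed license bundle
  'uid' => $account->uid,                // the user receiving the license
  'product_id' => $product->product_id,  // the licensed file product
  'status' => COMMERCE_LICENSE_ACTIVE,   // verify against your setup
));
entity_save('commerce_license', $license);
?>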

Unrelated

Grammar Lesson:

Today I learnt the difference between 'license' and 'licence'. Unless you are American (in which case, just ignore the existence of 'licence'), read this.

Apr 15 2014

Super Site Deployment with ctools exportable revert snippets

Sometimes when you are deploying new code to a production site you want to update views, panels, etc. with new code exports, but for one reason or another the defaults are overridden by the database.

With the following scripts you can stop worrying about that and just have an update hook take care of reverting (or deleting) the overriding database entries.
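
The snippets boil down to an update hook like this (a sketch: 'views_view' and the machine name are placeholders, and the same ctools pattern works for any exportable table):

<?php
/**
 * Revert an overridden exportable by deleting its database copy, so the
 * version in code wins again.
 */
function mymodule_update_7001() {
  ctools_include('export');
  // 'views_view' is the exportable's table; the name is a placeholder.
  $view = ctools_export_crud_load('views_view', 'frontpage_listing');
  // Only delete when the object lives in both code and the database,
  // i.e. the database copy is overriding the default in code.
  if ($view && ($view->export_type & EXPORT_IN_CODE) && ($view->export_type & EXPORT_IN_DATABASE)) {
    ctools_export_crud_delete('views_view', $view);
  }
}
?>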

Improvements appreciated and feel free to comment!

Oct 29 2013
Here are some performance tips for your Drupal site.

  1. APC: APC (Alternative PHP Cache) is a PHP opcode cache. It is a very quick win when working with PHP and can offer a great performance boost for Drupal. It is very much a "set it and forget it" application which can just be installed, enabled, and left to do its thing.
  2. Memcache: Drupal's support for memcache is really good and easy to implement. There is a memcache Drupal module (https://drupal.org/project/memcache) to integrate memcache with the site; see the settings.php sketch after this list.
  3. Varnish: When you have a lot of anonymous users, a reverse proxy cache can save you a lot of server load. Varnish is one of the more popular solutions in the Drupal world. It sits in front of your web server application (for example Apache, Nginx or lighttpd) and can run on the same server or a remote server. Use the Varnish module to integrate Varnish with your site: https://drupal.org/project/varnish
  4. Boost: Boost provides static page caching for Drupal, enabling a very significant performance and scalability boost for sites that receive mostly anonymous traffic. For shared hosting this is your best option for improving performance; on dedicated servers, you may want to consider Varnish instead. A cached page loads quickly because it comes straight from disk, with no PHP or MySQL processing needed. See the Boost module here: https://drupal.org/project/boost
  5. CDN: A CDN is used to distribute static assets such as images, documents, CSS and JavaScript across many locations. The goal of a CDN is to serve content to end users with high availability and high performance. There is a CDN Drupal module (https://drupal.org/project/cdn) for using a content delivery network.
  6. Disable the Database Logging module: This module logs actions performed on the site to the database. Use the Syslog module instead, which is also in Drupal core; with syslog you can write the more technical log entries to the server's standard log on the file system and save the database queries.
  7. Enable page & block caching: Enable Drupal caching (Administer > Configuration > Performance). When enabled, Drupal renders the page and associated blocks once, then saves that result in the database. This can drastically reduce the number of database calls made on a page, since the results are pre-rendered. Drupal's caching engine is most effective for anonymous visitors: if your site is mostly "read only" and doesn't have visitors logging in, caching can dramatically improve site load speed.
  8. Increase the cache lifetime: An option for some sites may be to increase the cache lifetime, which determines how long Drupal holds onto a cached result before it re-generates the page. If you have frequently changing content, you may want to set the cache lifetime to only 5 minutes; if your content doesn't change often, an acceptable value may be several hours. The right cache lifetime depends on your site's usage.
  9. Optimize JavaScript and CSS files: Enable the optimize JavaScript and CSS files options in the performance settings. When enabled, Drupal consolidates all CSS and JS files included on each page into a minimal number of files and compresses the code by removing whitespace. This reduces the overall file size and improves page load speed. If you need more aggressive aggregation of CSS and JS files, use the Advanced CSS/JS Aggregation module: https://drupal.org/project/advagg
  10. Disable unused modules on the site.
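
As an example of item 2 above, wiring Drupal 7 to memcache is mostly a settings.php change. This is a sketch based on the memcache module's documented settings; the path assumes the module is installed in sites/all/modules:

<?php
// Route Drupal's cache bins through memcache.
$conf['cache_backends'][] = 'sites/all/modules/memcache/memcache.inc';
$conf['cache_default_class'] = 'MemCacheDrupal';
// Keep the form cache in the database, as the module's README advises.
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';
?>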

If you have applied all the performance tips above and are still having performance problems, ask for suggestions in the High Performance Drupal group: https://groups.drupal.org/high-performance

There are also some related performance articles; see the links below.

  1. http://www.creativebloq.com/web-design/drupal-performance-tips-9122837
Oct 29 2013
There are many service providers around the world that offer Drupal-related services. In the Drupal marketplace, some providers are featured (those with exceptional community contributions and continued support of the Drupal project) and others are not. See the top 10 featured service providers at https://drupal.org/drupal-services; you can filter them by services, location and sector.

When hiring a Drupal site developer, it's important to understand what you need and where you can find a service provider with that set of skills. You can find the details here: https://drupal.org/node/51169

If you run into problems while hiring a Drupal developer, feel free to contact me using my contact form.

Aug 27 2012

So this was my first DrupalCon, and I really don't know whether I am happy with it or not...

I am not a hardcore coder like some others; my experience in Drupal is a mixture of site building and developing mainly smaller modules. So I just left for Munich very open-minded, as I knew several other cons, e.g. the Linux-Tage in Germany.

It was a big meeting indeed, with lots of people you know only by their nicknames and by what they have done so far for the community. There were many opportunities to get to know each other, and after all, the location was great, as was the food. So a big "thumbs up" for the venue!

I visited the keynote on the first day, and yes, I really WAS disappointed by it. Although the idea of the interview format was cool, there was nothing really compelling about the talk itself.
Nothing new, nothing you could not have read on the internet weeks and months before... If this was supposed to be a kind of "kick-off", it totally failed in my opinion. There was no big bang or anything like that; I have seen better keynotes at other OSS meetups.

I attended some of the talks on the following days, mainly to see that most people face the same challenges in their projects that we do every day. But I really liked seeing how others solved them; I got to know a few modules I didn't know and saw a lot of solutions I had never thought of. Although I attended only sessions labeled "intermediate" or above, it was not possible to go very deep into the problems; they were mostly just scratched at the surface... But, as at almost every convention, you get new ideas and hints about where to find new solutions, so overall I am really satisfied with the sessions.

The biggest problem of this con was that it was something "in between"... Too many business people for developers to get technical details out of it, and too many developers for doing business. In my opinion it's much better for technically interested people to attend one of the various DrupalCamps out there than the Con; we should leave that to business and sales, as they seem to be the target audience. But this problem is not Drupal-specific; it's not really easy to meet everybody's needs.

Aug 27 2012

From the 20th to the 24th of August 2012, DrupalCon Europe came to Germany for the first time. During this summer's hottest week so far, 1800 Drupalistas from all over the world gathered under the motto "Open Up! Connecting systems and people" at the Grand Westin Hotel in Munich, Germany.

The DrupalCon started with a big surprise announced at the opening session: several European Drupal shops (NodeOne, Krimson, Mearra and Wunderkraut) merged to become the new, Captain-Drupal-powered Wunderkraut: after Acquia, the next big elephant among Drupal companies. This will surely have a positive effect on the Drupal community, as some saw Acquia taking disproportionate influence over Drupal development; another big player in the community could diversify and strengthen development directions. For smaller companies and start-ups this will probably not cause disadvantages, since it makes Drupal generally stronger and more interesting to larger, industry-size companies, which would probably not hire a small shop anyway. As Dries said in the opening session: "Elephants want to dance with elephants".


Personally, I liked this DrupalCon a lot! As a developer I was positively surprised to see not so many salespeople but lots of community members to talk to, exchanging knowledge and opinions and living the "momentum" around Drupal. The venue was awesome, and not only the geek-friendly coffee stands all over the place made the attendees feel well. The food was remarkably excellent and, at over €300k, the biggest expense of all; but it really is worth it to keep everybody happy, and good food achieves that more than anything else.


Although DrupalCons have become huge, this one had the spirit of a perfectly sized conference, since there were plenty of rooms with great sessions for everybody: heavy programmers could come together for the core conversations track at the Sheraton hotel, a five-minute walk away, which made it a cozy and productive place. The main sessions were nicely grouped into eight tracks with clear topics; this way, good-quality and interesting sessions were offered for all kinds of interests. It was definitely not a DrupalCon where you could follow everything happening, but rather one where everybody could enjoy exactly what they were looking for. You can find all the session recordings on the schedule or on the DrupalCon channel on blip.tv.
Unfortunately, I was a little bit disappointed by the keynotes: Dries' keynote was a simple interview without any new information, and Anke Domscheit-Berg's keynote was just a general summary of Open Data initiatives that an interested person could quickly piece together from blog posts and general news. No innovative stuff here, paired with some awkward invitations to participate. Yes, it is an important field, and we all need to actively engage with it and know about it, but somebody with more activist insight would have rocked way more.


For me, the motto had its truth in the part about connecting people: as the community lead for DrupalCon São Paulo, I got to know personally and work with the staff and people from the Drupal Association, which outside of the US Drupal community doesn't seem to have arrived yet. Surely, the Drupal Association is not working as openly and community-based as most community members would like, but they are really opening up, and it is the responsibility of the international community to get involved! I would like to point to the awesome session by Donna Benjamin, the only non-North-American board member: "Infiltration Day 853: Drupal Association Board. Confessions of a not-so-secret double agent", in which she made a great call for people to participate and constructively shape the Drupal Association, which is important to all of us. A telling detail: of a quarter million people entitled to vote for members of the board, only 650 actually did. The community needs to accept the Drupal Association not as a decision-making instance but as a representative that fosters the Drupal community; for this reason the whole world and active local groups have to participate in the election and forming process.


This DrupalCon in Munich was an important step for scaling Drupal and its community internationally: the German community, which had been cooking its own soup for a long time, got deeply involved in the international scene; the next two DrupalCons were announced in new parts of the world (Latin America: São Paulo, Brazil, and Sydney, Australia); a prosperous discussion about internationalizing the Drupal Association has begun; and another huge Drupal company has arisen on the European Drupal horizon.

Drupal rocks! Drupal rockt! Drupal rockea!

Nov 28 2011

Last weekend, between November 25th and 27th, the awesome folks from Comm-Press hosted a Views (code) sprint in their new office.

At first I was a little bit sceptical: can you scale a code sprint up to 20 people?

But they proved it's definitely possible!
On Saturday everyone said hello to each other, and Karsten started throwing in ideas about what people could do. It turned out there were enough tasks for everyone, such as:

  • conduct a usability test
  • develop some UI ideas based on the UX test
  • provide regex support for both string and numeric filters
  • port views_cloud to d7
  • write a views integration for taxonomy_entity_index
  • improve the webform views integration
  • issue triage, more about this might be added later
  • review the existing documentation and start to improve it
  • go through the "needs review"/RTBC patches and reduce their numbers

 Review of the Views sprint

Quite a few people started with the issue triage, a.k.a. rocking the issue queue.
For this, Karsten gave a really good explanation of how to act in the issue queue.
After that, around 10 people started to work on it, and the results blew my mind: they worked on 250 issues within two days (c.f. http://drupal.org/project/issues/views?page=4&status=All ), accompanied by around 50 issues in several other issue queues. These impressive figures don't even include the work of the documentation and UI teams, because their work was more preparation for future issues.

The environment was incredible: we were provided with food, things to drink, and even some Nerf guns! Furthermore, special thanks go to @freudenreich_m and @ralfhendel for organizing the famous German hacker elixir "Club-Mate" on Sunday morning.

The total statistics:
* ~300 issues
* 60 commits directly to Views and 40 to other projects/sandboxes etc.
* 5 Nerf guns

Thanks to all the people who took part or were involved via sponsoring etc.
The Drupal community is unique!

There were also some remote part-time sprinters: das-peter, damz, and maybe other people I forgot to mention (I apologize if that is the case).

Aug 23 2011

Views3 UI

Views UI Design

Every site builder new to Drupal ought to learn and get used to Views in order to be able to build really flexible websites without writing code.
The current UI is kind of hard for beginners, although it is much better than the one Views 1 had.

In order to achieve further improvements to the Views UI, Acquia, a Drupal company founded by Drupal creator Dries Buytaert, asked its interaction designer Jeff Noyes to brainstorm and discuss designs with merlinofchaos, the maintainer of Views.

Their goal is to integrate Views 3 into Drupal Gardens, which provides customizable starter Drupal packages that can be set up and configured in a few minutes and are then also hosted by Acquia.
The target group for Drupal Gardens is an audience with presumably little experience with modules as complex as Views; thus, the Views 3 UI should be redesigned to make it more intuitive.

Codesprint

After Jeff's mockups were finalized, Acquia's engineering team knew they would need help from the Views maintainers to implement the design quickly enough, so they decided to host a code sprint.
Last week (14-18 February 2011), a group of developers met in San Jose (CA, US) and was joined virtually by others to further develop the new Views UI.

The working location for these developers from various parts of the globe was an apartment booked by Acquia, which brought people together from all over the world, paying their travel costs and other expenses.

After a relaxing Sunday evening the work started the next morning.
Having determined the current state of the code, everyone got started on a specific part. On some days we had video-meetings with Jeff to discuss issues with the design.

During the day and the evening, and sometimes even at night, the hacking went on at incredible speed.

UI Changes/Designs

The new UI has many changes, here are just some of them:

  • A new wizard to create new views easily and provide some default settings.
    For example, if you create a new node view, the "node/content: published" filter is added by default.
  • A new listing of existing views, integrated into CTools Export UI. This way, every module which uses CTools Export UI (and yours should) has a similar listing interface.
  • A completely reorganized view edit screen.
  • The settings links are ordered/grouped by importance. See the screenshot. [#1]
  • Every setting now appears in a modal by default.
  • The add new field/filter/argument/sort form was redone. It provides some serious awesomeness. [#2]
  • A "search-as-you-type" form.
  • A listing of already-added items is now at the bottom.
  • The filter configuration page was completely redone to make the 80% most frequent use cases as easy as possible.
  • The 80% use case was considered in all areas of the new UI, so important settings moved up and less important ones moved into the "more" fieldset.
  • The same was done for arguments, fields and the like, so, for example, the field configuration form is much smaller when you open it. [#3]
  • To calm everyone down: no Views features were removed. Some bugs even got fixed.
  • Configure your view in the preview: there are contextual links for editing/adding new elements directly in the preview. [#4]
  • View templates: you can provide default views which act as templates. The difference from a default view is that you "clone" the view template every time you use it.
  • Many more things that would exceed the average size of a blog entry ;)

If you want to try out the new UI, you have to check out the current code from drupal.org/github or wait until it's merged into the official version of Views.

Sadly enough, there's still one serious issue with the new interface: Once you get used to it, it's really hard to go back to the old one.

At the end of the week we had nearly finished the work, and the crowd scattered again. Some work also got started on an automatic migration of views for tables renamed between D6 and D7. This upgrade path is the main blocker for a beta.

People involved in making the new UI a reality:

Thanks for the really great week!
Thanks also to erdfisch for supporting me that week and in general, during my involvement in Views.

Oct 21 2010
Sam

Version Control API is central to Drupal's migration from CVS to git. It's also the single thing that's taken up the most time in the work we've done to date, and there's still a fair bit left to do. But we're now at a point where we need to step back and take a high-level look at the direction it'll finally take, so I thought I'd use where we are as an opportunity to explain the goals and architecture of the module, both historically and looking to the future. Apologies in advance for any of the history I get wrong - I'm sure I'll do it, so please feel free to correct me.

In The Beginning

Version Control API was originally written as a 2007 Google Summer of Code project by Jakob Petsovits (aka jpetso). From the outset, VCAPI was intended to replace Project*'s tight coupling with CVS (via the cvslog module) so that Drupal could get off CVS and on to a different version control system. VCAPI tried to build a system & datastructure similar enough to cvslog that moving over wouldn't be too painful, but at the same time was VCS-agnostic. We could decide later which VCS would fill the gap. (Technically, it would even have been possible for different projects to use a different VCS - though we ultimately decided against that because of the added social and technical complexity.)

Given that VCAPI was intended from the beginning to replace cvslog, it's hardly surprising that they both do essentially the same thing: store representations of VCS repository data in Drupal's database, such that that data is readily accessible for direct use by Drupal. They also map Drupal's users to user data in repositories, thereby allowing for the management of repository ACLs directly in Drupal. (cvslog also integrates directly with Project*, while VCAPI opted to separate that into versioncontrol_project). They then provide output that any drupal.org user would be familiar with - the project maintainers block, the commit activity information in users' profiles, the commit stream, etc. Whereas cvslog was only concerned with integrating with CVS, VCAPI attempted to solve these problems (particularly storing repository data) in an abstracted fashion such that the data from any source control system could be adequately represented in a unified set of Drupal database tables. VCAPI would provide the datastructure, helper functions, hooks, etc., and then "backend" modules (such as the git backend) would implement that API in order to provide integration with a particular source control system.

A quick aside - any good engineer will see "storing representations of VCS repository data in Drupal's database" and trip a mental red flag. It's data duplication, which raises potentially knotty synchronization problems. So let me head that one off: extracting the data was especially necessary with CVS, as it was _far_ too slow and unscalable to make system calls directly against the repository in order to fulfill standard browser requests. And while git is MUCH faster than CVS, the data abstraction layer is still necessary. System calls are slow, and there's disk IO to think about; it's worth trying to avoid tripping those during normal web traffic. More importantly, generating an aggregate picture of versioncontrol-related activity within a given Drupal system, particularly one that has a lot of complex vcs/drupal user mapping and/or a lot of repositories, really requires a single, consistent datastore. Stitching together db- and repo-sourced data on the fly gets infeasible very quickly. Finally, putting the data into a database makes it possible for us to punt on caching, since Views/Drupalistas are accustomed to caching database queries/output.

Anyway, with all this in mind, jpetso made a herculean effort in writing the original 1.x branch of VCAPI. He came up with the original abstracted datastructures and general methodologies that allowed us to replicate the functionality of cvslog in an API that could be reimplemented by different VCSes. More about that history can be seen in g.d.o posts. And at its core, the system worked.

Unfortunately, there were also aspects of the system that were awkward and overengineered. Much of the original API was actually just a querybuilder; many of the abstracted concepts had become so abstract as to be unintuitive to new developers (e.g., there were no "branches" or "tags" in VCAPI - just the meta-concept of "labels"). The underlying problem, though, was an architectural predilection towards an 'API' that did backflips to abstract and accommodate all possible backend behaviors, then own all the UIs, rather than providing crucial shared functionality and readily overridable UIs that backends could extend as needed. You can't work with, let alone refactor, VCAPI without running into this last problem. The module was suffering from an identity crisis - is it an API for the backends? Or an API for third-party systems, like say Project*, which want to utilize the repository tracking features of VCAPI? The crisis was also evident in the querybuilder: the same system was used for building aggregate listings as for retrieving individual items, and optimized for neither.

Enter: OO

jpetso needed to start moving on to other things by 2008, and when he offered the project up for maintainership, I volunteered. After porting to Drupal 6, discussions began about how well-suited VCAPI & backends would be to object orientation. In particular, it could help to make the API less overbearing and release more control into the backends. And for GSoC 2009, marvil07 made exactly that his goal: porting VCAPI over to OO.

Note - there was other work going on throughout this time period by a variety of people, GSoC and otherwise. I do NOT mean to slight any of that work - it's just that those changes were less central to the evolution of the API itself, and therefore tangential to the focus here.

Prior to marvil07's work, VCAPI was an exemplary instance of Drupal's love for massive arrays. They were used to capture all the data being stored in the database, to send instructions to the querybuilders, as return values for all the various informational hooks implemented by backends...and just about everything else. marvil07's refactor revealed some of the real 'things' VCAPI deals with, in the form of discrete classes:

  • VersioncontrolRepository - Represents a 'repository' somewhere; at the bare minimum, this includes information like VCS backend, path to the repository root, and any additional information specified by the backend.
  • VersioncontrolItem - Represents a known versioned item - that is, a file or a directory - in a repository.
  • VersioncontrolBranch - Represents a known branch in a repository.
  • VersioncontrolTag - Represents a known tag in a repository.
  • VersioncontrolOperation - Represents, usually, a commit action in a repository. The 'operation' concept is one of the abstractions that can get confusing.

Each of these classes has two responsibilities - CUD (that's CRUD sans-R), and retrieving other related data (e.g., you could call VersioncontrolRepository::getItem() to retrieve a set of VersioncontrolItems, or VersioncontrolRepository::getLabels() to retrieve a set of VersioncontrolBranch or VersioncontrolTag objects). CUD was fairly well implemented on each of these classes by the time marvil07's original GSoC project was over. Related data retrieval was a bit more limited.

This set of classes also replaced awkward alters with inheritance as the new way for backends to interact with VCAPI: VersioncontrolGitRepository extending VersioncontrolRepository, VersioncontrolGitBranch extending VersioncontrolBranch, etc. Interfaces were also introduced to tell VCAPI that a particular backend's objects supported specific types of operations - generating repository URLs, for example. The crucial contribution of marvil07's GSoC project was developing this family of classes, which has remained largely unaltered. Unfortunately there wasn't really time to get to refactoring the logic, so much was simply cut from old 1.x procedural functions and moved into an analogous class method.
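
To make the pattern concrete, a backend subclass looks roughly like this (the class names match VCAPI's as described above; the method and its body are illustrative, not actual versioncontrol_git code):

<?php
// Sketch: a backend specializes VCAPI's generic entity classes through
// plain inheritance rather than alter hooks.
class VersioncontrolGitRepository extends VersioncontrolRepository {
  // Hypothetical override: build a web-viewable URL for a commit using
  // backend-specific repository data.
  public function commitUrl($revision) {
    return $this->data['weburl'] . '/commit/' . $revision;
  }
}
?>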

By the time we had reached the end of GSoC, I'd come to the opinion that marvil07's work was an excellent first step. We still had largely the same 1.x logic, just moved into an object-oriented environment. API<->backend interaction via inheritance had helped the identity crisis, but not resolved it entirely. There was some more flexibility for the backends to control logic that had once been the sole domain of the API, but we were still swimming upstream - too many disparate hooks, too much logic in VCAPI that the backends couldn't touch. A good foundation, but far from finished.

The Great Git Migration

When the big discussion about switching VCSes happened in February 2010, we were still gradually fleshing out the skeleton that had been introduced during GSoC 2009. During the discussion, the question was quite rightly raised whether we should even bother with VCAPI, or if we should just use something else (or start from scratch), especially given the wide agreement on wanting "deep integration". (On using VCAPI at all, this bit of the thread is particularly enlightening.) I ended up arguing that VCAPI, while by no means perfect, had already done a pretty good job of tackling the not-inconsiderable datastructure and CRUD questions. Those problems would have to be solved anyway, so starting from scratch would have been a waste. Folks ultimately found that to be a convincing argument, and that's been one of the major principles guiding the migration work thus far.

Another guiding principle also emerged from the initial discussions - if we're going to build our own system, it must be developer-friendly & maintainable. For years, the cruft and complexity of Project* has limited contributions to a very small circle of overworked developers; allowing the migration work to produce similarly impenetrable code would be horribly shortsighted. Consequently, the architectural decisions we've made have been as much motivated by the long-term benefits of architecting a tight, intuitive system as the short-term benefits of just finishing the damn migration already. Let's run through some of the big architecture shifts made thus far:

  • One of the biggest weaknesses in VCAPI 1.x was the querybuilder. It was an awkward custom job that introduced a few thousand lines of code and was quite difficult to extend. So we replaced the whole thing using the DBTNG backport; see the query sketch after this list.
  • In tandem with the conversion to DBTNG, we did a partial backport of D7's entities. All of the classes from marvil07's original OO refactor (VersioncontrolRepository, VersioncontrolItem, etc.) are now instances of VersioncontrolEntity. Their loading is managed by a family of classes descended from VersioncontrolEntityController; all that can be seen in includes/controllers.inc. This is a great conceptual step forward - it makes a TON of sense to treat most of the objects VCAPI handles as entities.
  • We took another bite out of the identity crisis by definitively separating mass-loading for listings from targeted loading for data manipulation. Mass-listings are Views' responsibility, pure and simple. Only when you're actually _doing_ something with the API will objects get built from the complex Controller loaders.
  • We introduced a VersioncontrolBackend class, replacing the array returned from hook_versioncontrol_backend(). This class will increasingly replace procedural logic as a unified behavior object governing everything that VCAPI expects a backend to implement. To that end, the backend acts as a factory for turning data loaded by the VersioncontrolEntityController family into instantiated VersioncontrolEntity objects.
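
As a rough illustration of what the DBTNG conversion buys, a loader query reads something like this (hypothetical table and column names, not VCAPI's actual schema):

<?php
// Sketch: fetch the latest commits for one repository via DBTNG's
// fluent query builder instead of a hand-rolled querybuilder.
$result = db_select('versioncontrol_operations', 'o')
  ->fields('o', array('vc_op_id', 'committer', 'message'))
  ->condition('o.repo_id', $repo_id)
  ->orderBy('o.author_date', 'DESC')
  ->range(0, 25)
  ->execute();
foreach ($result as $commit) {
  // Each $commit is a stdClass row with the selected fields.
}
?>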

In short, we totally rebuilt VCAPI's plumbing, and with quite an eye towards the future - using DBTNG and Entities will make the D7 port very manageable. And now we're in the final phase of work with VCAPI - fleshing out entity methods, tweaking the datastructure, and dealing with the UI. All the stuff motivating me to write this article, as a way to force myself to think through it all properly.

Looking Forward

First, let's do a quick revisit of VCAPI & backends' purpose. These proceed roughly in order from plumbing -> API -> UI.

  • Maintain a list of repositories known by the system.
  • Maintain a mapping between Drupal users and the users known to the repositories.
  • Maintain ACLs pertaining to those users & repositories, and make the data readily accessible to the hook scripts that actually enforce the ACLs.
  • Track the contents/activity of a repository into an abstracted, cross-vcs format.
  • Link repository activity with users.
  • Provide sane default behaviors that can then be easily adapted to a specific VCS' requirements by the backend module.
  • Provide sane API to third-party (non-backend) client code for using or extending VCAPI's data.
  • Provide overridable & retool-able UIs for administrative functionality.
  • Provide portable, overridable & retool-able UI elements for listing & statistical information, like commit activity streams.

Now, let's run through that list to see how 1.x stacks up:

  • Maintain repository list - check, but CRUD is awkward.
  • User mapping - check, but CRUD is awkward.
  • ACLs - check.
  • Repository content tracking - check, but confusing & awkward through over-abstraction.
  • Repo content<->user link - check.
  • Sane defaults + backend overridability - nope. 1.x worked mostly by overstuffing logic into the API, and allowed backends to interact by flipping toggles. The rest was done with confusing hooks.
  • Third-party utility - nope. Third-party code just has the same set of confusing hooks, and not a lot of helpful API.
  • Admin UI - sorta. Static UI, even hard-coding some assumptions about data sources (e.g., repository "authorization methods"), but with some control afforded to the backends.
  • Portable UI elements - sorta. Blocks were used, but because there was no Views 2 when 1.x was written, there's just those hardcoded blocks. Moving to Views makes creating portable UI elements FAR easier.

Many of the problems in 1.x are helped, or even solved, by the architectural improvements I've been talking about throughout the article. Now let's break out our current work, the 2.x branch, into the same bullets. And forgive me, but I'm going to break narrative here and mention some details that I haven't previously explained. This IS supposed to be a list to help us actually finish up the work, after all :)

  • Maintain repository list - check. VersioncontrolRepository(Controller) has probably gotten more love than any other class. One major addition would be support for incorporating a backend-specific repo interaction classes, along the lines of svnlib or glip. That would make VCAPI into an excellent platform for doing repository interactions that are way outside the original scope; just load up the repository object from VCAPI, then go to town.
  • User mapping - unfinished - VersioncontrolAccount is one of the classes that has barely been touched thus far.
  • ACLs - unchanged since 1.x, and in need of revisiting in light of all the other changes; best addressed at the same time we're revisiting VersioncontrolAccount.
  • Repository content tracking - almost there. We're going to undo a conflation made in 1.x; see these two issues. VersioncontrolOperation will go away in favor of VersioncontrolCommit, and we'll introduce a separate system for tracking activity (i.e., network operations) that is clearly separated from tracking repository contents.
  • Repo content<->user link - check. Despite the need for cleanup on VersioncontrolAccount, I believe this linkage is 100%.
  • Sane defaults + backend overridability - check, thanks to the move to good OO patterns.
  • Third-party utility - getting there. The advent of the OO API makes navigating VCAPI's internal datastructures much easier, but we still need to think about where & how we allow for alteration. Y'know, where we put our alter hooks.
  • Admin UI - not yet. We've backtracked from 1.x a bit, taking out some of the more hardcoded UI elements and are fixing to replace them with more flexible pieces. For the most part, that means building lots of Views, e.g., this issue. As with everything else in VCAPI, some of the difficulty comes in offering a dual-level API - one to the backends, the other to third parties.
  • Portable UI elements - zero. We're not going to provide a single block via hook_block() if we can at all avoid it. Views-driven all the way. Complicated, though, because the 'dual-level API' problems mentioned under Admin UI very much apply.

What's now emerging in 2.x is a layered, intelligible API that is thoroughly backend-manipulable, while still presenting third-party code with a consistent, usable interface. And with a repo interaction wrapper like I described above, VCAPI would be a launching point for the "deep integration" we all want. We're not there yet, but we're getting close. There's a general, central push to get a LOT more test coverage (especially testing sample data & standard use cases), without which we'll just never _really_ be sure how well the monstrosity works. There are still some crufty areas - "source item" tracking, "authorization method" for repository account creation - that we need to decide whether we discard, leave, or improve. And we need to come up with a consistent pattern for implementing dual-level Views: every backend needs to be able to generate a list of repository committers or an activity stream, for example, but each backend may be a bit different. So VCAPI provides a sane default, which can then be optionally replaced by a backend-'decorated' version.

I'm hoping this article helps put the VCAPI & family segment of Drupal's git migration in perspective. With any luck, it also gives enough of a sense of the problems we're grappling with that more folks might want to hop in and help us move everything along. Input on these plans is MORE than welcome.

Oct 11 2010

On websites with a lot of users, and especially a lot of nodes, heavy queries can kill your site's performance.

One central problem is the use of the node access system. It's recommended to avoid it as long as you can, but there may be other ways to get around some problems. Here is one specific example which is quite interesting.

The goal of the view was to show the X latest users together with their profile pictures and some additional information. The site used Content Profile, so a view with the base table "node" was used. Sadly, this query was horribly slow, so views caching was set to a long lifetime (using views caching is an easy, effective improvement and is highly recommended).

But when the cache had to be rebuilt, the query was still slow. So instead of using node as the base table, we chose user as the base table. This improved the query time by a factor of 10, for several reasons:

  • no node access system is used (which wasn't needed on this view)
  • fewer rows have to be joined

In Views 3, a patch recently landed which allows disabling node access on a specific view: http://drupal.org/node/621142

Sep 08 2010

By Ramona Fischer

A successful premiere: with 1,200 attendees from 52 countries, DrupalCon in Copenhagen (24th - 27th Aug) was the largest Drupal conference in Europe so far. In addition to the sessions, networking was one of the most important agenda highlights at the four-day conference. The conference featured its own bar (the "FooBar"), the premiere of the first Drupal rock band ("The Kitten Killers"), and a beer brewed especially for the event ("AwesomeSauce"). These were just some of the touches added by the organisers to foster the community spirit. What expectations did attendees have when arriving at DrupalCon? What challenges did the organisers face? And where is the community heading?

"I want to connect and expand my professional network," Drupal-programmer Julien Dubois (CommerceGuys , Paris) says. Attending DrupalCon for the first time, the 23-year-old has especially been looking forward to meeting people he already knows from the web. "It is amazing to be able to put a face to the names." Seeing familiar and new faces has also been the reason for which Drupal-consultant and author Hagen Graf (France) decided to attend the conference. "I have been involved with the community for a long time and find it most interesting to see who is still here and who is new", says Graf who has written various specialised books about Drupal. "Socialising is key to Drupal conferences". During the day people are discussing technical matters, Graf says, whereas at night at the bar the "real talks" take place. Bringing home contracts is what web producer Robert O'Connor (booksellers.com , London) came for. "I am looking for skilled Themers, Designers and Developers with whom we could work together in a project partnership." O'Connor, who is lead developer and was sent by his manager to DrupalCon for the first time, also attended for strategic reasons. "We are interested in the business models our competitors use and how they earn money with Drupal."

"Evolution is healthy"

The fact that more, and larger, businesses are taking an interest in Drupal was quite noticeable at this year's DrupalCon. Whether freelancer, small or medium-sized company, or major market player like IBM and Accenture: more and more people and companies are using the open-source CMS to pay their bills. Their various interests put the community in motion. "Changes within the community are part of a natural evolution," says "User Number One" Dries Buytaert (Acquia, Belgium). The inventor of Drupal considers growth to be an opportunity: "We need to be open and adapt to changes in the market, otherwise we will stagnate." Drupal trainer Bèr Kessels (Netherlands) knows how much the Drupal community has developed over the past years. The IT expert joined the Drupal community in its early stages. "Our very first 'DrupalCons' took place in small meeting rooms, where internet access broke down on a regular basis and the pizza deliverer came by bicycle." At the second DrupalCon in Amsterdam, various open-source conferences took place at the same time, among others a meeting of O'Reilly, the Drupal expert remembers. "Little by little the O'Reilly guys joined us because Drupal was much more interesting." This familial "everybody-knows-each-other" feeling is vanishing the more Drupal spreads, Kessels regrets. "On the other hand, however, the event becomes much more professional in terms of technique and content."

A matter of location

A symbol of the changes in the community is the dimension and infrastructure of DrupalCon. "In the beginning it was enough to have a room for 30 to 40 people, which was often available for free at some university," says the event manager of the Drupal Association, Cary Gordon (Cherry Hill Company, California). With turnouts of currently 1,200, or 3,000 in San Francisco (USA, 2010), finding a scalable location is one of the biggest challenges the organisers face, he says. This is confirmed by Copenhagen team lead Morten Birch Heide-Jørgensen: "Our goal was to establish convenient surrounding conditions, connecting the community in a centrally located venue," says the web designer, who estimates his work input for the organisation of DrupalCon at 800 to 1,000 working hours. After long discussions, they had to switch to the Bella Center, since a location in the inner city wasn't affordable, he says. A new model and possible solution for the location problem is being introduced by the team of DrupalCon 2011 in Chicago (USA). Organiser Tiffany Farriss revealed at DrupalCon in Copenhagen: "Everything's going to be concentrated in one location: DrupalCon attendees are taking over an entire hotel."

Platform for exchange and inspiration

Drupal pioneer Dries Buytaert knows how important personal meet-ups are for the community: conferences like DrupalCon are vital because people step out of anonymity and get to know the person behind a handle, the Drupal inventor says. "Without conferences we could not work together." This is confirmed by the Drupal Association's infrastructure manager, Gerhard Killesreiter (Freiburg, Germany). Activities in local user groups, DrupalCamps and code sprints are just as important, the Drupal consultant adds. According to him, innovation and development take place on a local level. "The Drupal spark originates locally," Killesreiter says. Web developer Robert O'Connor seems to have already caught the DrupalCon spark: "Once back in London, I will join the local Drupal community," O'Connor said before heading home.

A personal conclusion

My very first DrupalCon; it was an adventurous arrival, and an even more adventurous departure. And in between I had numerous interesting talks with amazing people about the whole world and Drupal. Despite the modest food rations at the 'Con, and the altogether immodest prices of Danish food elsewhere, I somehow managed to scrape by without spending too much money (at least by some definition of too much).

To me, DrupalCon in Copenhagen was an amazing and unique experience, which I won't forget too quickly. This is why I especially would like to thank erdfisch and The Drupal Initiative who made this adventure possible for me. And I would like to thank all those who shared their experience and thoughts with me, giving me the possibility to wrap it all up in words!

And last but not least: some numbers the organizers of DrupalCon CPH have collected:

Altogether 5,000 bottles of the specially brewed beer "AwesomeSauce", two bottles of Jack Daniels and one bottle of Jägermeister were served in the "FooBar", the organisers say. "I have no idea how many people drank in the bar," says CPH team lead Morten Birch Heide-Jørgensen. "But they were happy." And Szeged organiser Gábor Hojtsy comments: "Drupalers have a tendency to party very well."

Aug 29 2010

Many people ask whether Views 3 is ready for production. Here is an answer.

A quote from Earl Miles:

Right now I only recommend Views 3 in production if there's a Drupal expert on staff/call, mostly due to the fact that a lot of modules are not yet Views 3 compatible and that will really screw you.

Jun 11 2010
Sam

It's official - the Drupal Association has selected me to be the 'Git Migration Lead.' I'm tremendously excited, and can't wait to knuckle down and get this migration DONE. I'll be launching full-tilt into the list of issues that stand between us and git goodness, but before I do, I want to take a minute to clarify how I understand and will be approaching this position.

It's not the DA's role to determine the direction of drupal.org, let alone Drupal itself. Rather, the DA exists to support and facilitate efforts that the community has already decided are worth pursuing. At least, that's how I understand it. Consequently, my role as git lead is primarily about ensuring this migration happens to the satisfaction of the community - not merely my own satisfaction. It helps that we've already got a well-established todo list, but that also requires I be open to input throughout the process. And that's the plan. In fact, I can't think of any part of this project that I don't plan on conducting in public, through a combination of the g.d.o group, in the issue queues, on twitter (I've started a new account just for this), over the dev list, and occasionally on this blog. There will be no shortage of means by which you can get information, give feedback, or - please! - help out (if nothing else, my contact form works).

I think publicizing this process is crucial because it's the best way to make sure we have the energy and participation necessary to ensure it actually happens. And at the end of the day, that's the crux of my responsibility. So I'll be doing a mix of cheerleading, organizing volunteer energy, and, when necessary, coding - whatever it takes to ensure that the migration is always moving forward. Which is exactly why the DA created this paid position: historically, a collective desire for big infra changes hasn't been enough. Someone's ass needs to be contractually on the line.

Of course, my position is temporary, and will only last through the initial migration (Phase 2). At that point, we're back to all-volunteer energy for further git improvements. So I have another goal for the migration process: we need to grow the group of people familiar with and responsible for our project infrastructure. My hope is that we can take all the interest and excitement over switching to git and cultivate that wider group. So make no mistake, if I get my hooks into you over the next few months, I won't be letting go when the DA stops signing my checks :) And besides, the reality is that those who participate most during phase 2 will have the most clout during phase 3.

Anyway - we all know how long this move from CVS has been coming. Now that it's here, let's not make our community wait a day longer than it has to :)

Feb 10 2010
Sam
Feb 10

Last week, Matt Farina tossed me a question about the best approach to introspecting code in PHP, particularly in relation to whether or not the situation was a good candidate for using PHP's Reflection API. The original (now outdated) patch he gave me as an example had the following block of code in it:

<?php
$interfaces = class_implements($class);
if (isset($interfaces['JSPreprocessingInterface'])) {
  $instance = new $class;
}
else {
  throw new Exception(t('Class %class does not implement interface %interface', array('%class' => $class, '%interface' => 'JSPreprocessingInterface')));
}
?>

I've used Reflection happily in the past. I've even advocated for it in situations where I later realized it was the totally wrong tool for the job. But more importantly, I'd accepted as 'common knowledge' that Reflection was slow. Dog-slow, even. But Matt's question was specific enough that it got me wondering just how big the gap ACTUALLY was between the code he'd shown me, and the Reflection-based equivalent. The results surprised me. To the point where I ended up writing a PHP microbenching framework, and digging in quite a bit deeper.

My hope is that these findings can help us make more educated judgments about things - like Reflection, or even OO in general - that are sometimes unfairly getting the boot for being performance dogs. But let's start with just the essential question Matt originally posed, and I'll break out the whole framework in a later post.

FYI, my final and definitive round of benchmarks was performed on a P4 3.4GHz with HyperThreading riding the 32-bit RAM cap (~3.4GB), running PHP 5.2.11-pl1-gentoo, with Suhosin and APC. With Linux kernels, I strongly prefer single-core machines for microbenching; I'm told that time calls on 2.6-line kernels get scheduled badly, and introduce a lot of jiggle into the results.

Is Reflection Really That Slow?

NO! In this case, a direct comparison between reflection methods and their procedural counterparts reveals them to be neck and neck. Where Reflection incurs additional cost is in the initial object creation. Here's the exact code that was benchmarked, and the time for each step:

<?php
function _do_proc_interfaces() {
  class_implements('RecursiveDirectoryIterator'); // [email protected]
}

function _do_refl_interfaces() {
  $refl = new ReflectionClass('RecursiveDirectoryIterator'); // [email protected]
  $refl->getInterfaceNames(); // [email protected]
}
?>

The comparison between these two functions isn't 100% exact, as ReflectionClass::getInterfaceNames() generates an indexed array of interfaces, whereas class_implements() generates an associative array where both keys and values are the interface names. That may account for the small disparity.
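
To make that shape difference concrete, here are the two calls side by side (a quick illustration, not part of the benchmark suite; ArrayObject is just an arbitrary built-in class to inspect, and the exact interface list varies by PHP version):

<?php
// class_implements() keys the result by interface name:
var_export(class_implements('ArrayObject'));
// => array('IteratorAggregate' => 'IteratorAggregate', 'ArrayAccess' => 'ArrayAccess', ...)

// ReflectionClass::getInterfaceNames() returns a plain indexed array of names:
$refl = new ReflectionClass('ArrayObject');
var_export($refl->getInterfaceNames());
// => array(0 => 'IteratorAggregate', 1 => 'ArrayAccess', ...)
?>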

While it wasn't part of Matt's original question, curiosity prompted me to test method_exists() against ReflectionClass::hasMethod(), as it's the only other really direct comparison that can be made. The results were very similar:

<?php
function _do_proc_methodexists() {
  method_exists('RecursiveDirectoryIterator', 'next'); // [email protected] iterations
}

function _do_refl_methodexists() {
  $refl = new ReflectionClass('RecursiveDirectoryIterator'); // [email protected] iterations
  $refl->hasMethod('next'); // [email protected] iterations
}
?>

These direct comparisons are interesting, but simply not the best answer to Matt's specific question. Although the procedural logic can be mirrored with Reflection, Reflection provides a single step to achieve the exact same answer that took several steps procedurally:

<?php
// Original procedural approach in patch: [email protected] iterations
function do_procedural_bench($args) {
  $interfaces = class_implements($args['class']);
  if (isset($interfaces['blah blah'])) {
    // do stuff
  }
}

// Approach to patch using Reflection: [email protected] iterations
function do_reflection_bench($args) {
  $refl = new ReflectionClass($args['class']);
  if ($refl->implementsInterface('blah blah')) {
    // do stuff
  }
}
?>

This logic achieves the same goal more directly, and so is more appropriate for comparison. It's also a nice example of how the Reflection system makes up for some of its initial object instantiation costs by providing a more robust set of tools. Now, the above numbers don't exactly sing great praises for Reflection, but given all the finger-wagging I'd heard, I was expecting Reflection to do quite a bit worse. As it is, Reflection is generally on par with its procedural equivalents; the big difference is in object instantiation. It's hard to say much more about these results, though, without a better basis for comparison. So let's do that.

More Useful Results

Benchmarking results are only as good as the context they're situated in. So, when I cast around in search of a baseline for comparison, I was delighted to find a suitable candidate in something we do an awful lot: calling userspace functions! That is:

<?php
// Define an empty function in userspace
function foo() {}
// Call that function
foo();
?>

Because foo() has an empty function body, the time we're concerned with here is _only_ the cost of making the call to the userspace function. Note that adding parameters to foo()'s signature has a negligible effect on call time. So let's recast those earlier results as numbers of userspace function calls:

  1. Checking interfaces
    • class_implements(): 3.6 function calls
    • ReflectionClass::getInterfaceNames(): 3.7 function calls
  2. Checking methods
    • method_exists(): 2.0 function calls
    • ReflectionClass::hasMethod(): 2.7 function calls
  3. Logic from Matt's original patch
    • Approach from original patch: 4.5 function calls
    • Approach using reflection: 8.7 function calls (3.6 if ReflectionClass object instantiation time is ignored)

These numbers should provide a good, practical basis for comparison; let 'em percolate.

Let's sum up: as an introspection tool, Reflection is roughly as fast as its procedural equivalents. The internal implementations appear to be just as efficient; the primary cost has more to do with the overhead of method calls and object creation. Though creating a ReflectionClass object is fairly cheap as object instantiation goes, the cost is still non-negligible.

My interpretation of these results: Given that Reflection offers more tools for robust introspection and is considerably more self-documenting than the procedural/associative-arrays approach (see slide 8 of http://www.slideshare.net/tobias382/new-spl-features-in-php-53), I personally will be defaulting to using Reflection in the future. And, if using the additional introspective capabilities of a system like Reflection early in Drupal's critical path (bootstrap, routing, etc.) means we can make a more modular, selectively-loaded system, then their use is absolutely justified. At the end of the day, Reflection should be an acceptable choice even for the performance-conscious.

...With an important caveat: The thing to avoid is the runaway creation of huge numbers of objects. Many reflection methods (ReflectionClass::getInterfaces(), for example) create a whole mess of new objects. This IS expensive, although my benchmarks indicate each additional object instantiation is roughly 1/3 to 1/2 the cost of instantiating ReflectionClass directly. So be sensible about when those methods are used.
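
If a name or a membership test is all you need, the name-based methods sidestep that object churn entirely. A small sketch, using the same class as the benchmarks above:

<?php
$refl = new ReflectionClass('RecursiveDirectoryIterator');

// Cheap: an array of interface name strings; no extra objects created.
$names = $refl->getInterfaceNames();

// Expensive: an array of ReflectionClass objects, one instantiated per interface.
$objects = $refl->getInterfaces();

// For a simple membership test, nothing beyond $refl is instantiated at all:
if ($refl->implementsInterface('Iterator')) {
  // do stuff
}
?>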

My Little Framework

To do all this benchmarking, I wrote a small framework that does four crucial things (a minimal sketch follows the list):

  1. Allows the function to be benchmarked to be specified externally
  2. Runs two loops for each benchmarking run - an inner loop containing the actual function to be benchmarked, which is iterated a configurable number of times, and an outer loop that creates a sample set (of configurable size) with each entry being the result of the inner loop
  3. Processes results, calculating standard deviation & coefficient of variation; additional mean result values are also calculated by factoring out both a configurable time offset, as well as the time offset incurred by processing overhead for the framework itself (the internal offset is calculated on the fly)
  4. Repeats a benchmarking run if the result set's coefficient of variation > a configurable target value
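
Here's a minimal sketch of that structure, to make it concrete. This is not the actual framework - the name microbench(), the default values, and the omission of the configurable offsets from point 3 are all my own illustration of points 2 and 4:

<?php
// Minimal microbenchmark sketch (illustration only, not the real framework).
// Inner loop: run the function under test $inner times per sample.
// Outer loop: collect $samples such timings into a sample set.
// Retry: re-run the whole set while the coefficient of variation is too high.
function microbench($callback, $inner = 100000, $samples = 30, $target_cv = 0.05) {
  do {
    $set = array();
    for ($s = 0; $s < $samples; $s++) {
      $start = microtime(TRUE);
      for ($i = 0; $i < $inner; $i++) {
        $callback();
      }
      // Mean time per call for this sample, in seconds.
      $set[] = (microtime(TRUE) - $start) / $inner;
    }
    $mean = array_sum($set) / count($set);
    $variance = 0;
    foreach ($set as $t) {
      $variance += ($t - $mean) * ($t - $mean);
    }
    $stddev = sqrt($variance / count($set));
    $cv = $stddev / $mean;
  } while ($cv > $target_cv); // repeat the whole run if results are too noisy
  return array('mean' => $mean, 'stddev' => $stddev, 'cv' => $cv);
}

function foo() {}
$result = microbench('foo');
printf("%.4f microseconds per call (cv %.3f)\n", $result['mean'] * 1e6, $result['cv']);
?>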

Since I had the framework already together, I ran some more tests in addition to the ones above, mostly focusing on object instantiation costs. The results are in this Google Doc. In addition to the results on the Reflection Comparisons tab (which are from the first part of the blog post), there's also data on the costs of most other Reflection types with a wide range of arguments under Reflection Instantiation. On the Object Instantiation tab, there is data on the instantiation time for a small variety of classes; the range of times they require is quite interesting.

Some oddities

Though I put forward static calls as a baseline before, if you look at the framework, you'll notice that it uses a dynamic call. Interestingly, dynamic function calls are almost exactly as fast:

<?php
// Define an empty function in userspace
function foo() {}
// Call our foo() userspace function dynamically
$func = 'foo';
$func();
?>

I glossed over this earlier because, within the confines of the framework, these two have almost exactly the same execution time (variations are totally within error ranges), whether or not an opcode cache is active. This strikes me as strange, as there's no way dynamic function calls can be known at compile-time...not that that's the only relevant consideration. But I don't know the internals of PHP, let alone APC, well enough to grok how that all works. So for these benchmarks, I assumed the two to be interchangeable when gathering results. However, because I don't trust those results to be accurate without confirmation from someone with greater expertise, I'd rather people not make that assumption when writing real code.

Also, there is one case where Reflection differs notably from its procedural counterparts: object instantiation. While the other methods were generally on par, the cost of $refl->newInstance() vs. new $class() consistently differed by approx [email protected], or around 3 function calls (see the results for _do_refl_instanciate() vs. _do_proc_instanciate() under the Reflection Comparisons data). I suspect this is a result of the difference between a method call and a language construct, as the gap is similar to that between a static function call and call_user_func().

Sep 30 2008
Sam
Sep 30

User Panels shoots pretty high: it aims to provide a consistent, easy platform for the handling of user-centric information display. It's not about new storage mechanisms or anything like that - just about marshaling all that user content together in a sane, easy-to-use way. I'm hoping that this blog post can be a semi-official shout-out to the Drupal community - an RFC, I guess.

user_panels comes out of a basic observation about Drupal: virtually every Drupal site has a different conceptualization of who users are, what their role is within the site, how they should be interacting with one another, etc. This observation helps explain why it's been so difficult to settle the question of how 'users' ought to be handled - most folks, myself absolutely included, are coming from the perspective of a particular use case. Even with deliberate effort, it's been pretty tough to get into a genericized head-space with respect to users. There are, I think, a couple reasons for this.

IMPORTANT NOTE: much of this probably sounds like a proposal for core. It's not. While I'd personally like to see a solution along these lines integrated into core, this solution requires Panels, which would mean getting the Panels engine into core - and that's a whole other bag of chips. This idea CAN be implemented entirely in contrib, and that's what my focus is on here.

The 'User-Centric' Challenge

The first problem is tied to data storage: almost all of the profile solutions that have arisen store user data in nodes. Specifically, there's bio and nodeprofile for D5, which have merged (and an enormous kudos to everyone involved in that effort!) into content profile for D6. Now, many debates have been had about the appropriateness of nodes as a data storage mechanism, and let me be clear that, while it's an important debate, it's not the topic at hand. There's also core's profile.module...but that's its own can of worms that needn't be opened right now.

With the node-based solutions, the problem is at render-time: if you're storing a whole bunch of data about users in their corresponding node, then you've got to pick a render-time strategy for teasing out the particular subsets of data you want and arranging them on the page. Which either means doing it in the theme layer, or handing it off to another module first. Pushing the responsibility directly to the theme layer is just wrongheaded, in my opinion - it means that for any site implementing user-centric data pages, there's a task to be done which sits uncomfortably between the typical Drupal dev's and themer's toolsets. Handing it off to another module first is the better option, because that module can make data-organizational decisions, then present a consistent package to the theme layer. As far as I'm aware, Advanced Profile Kit is the only module that's really directly focused on that kind of logic for users. (Note that Michelle and I have been talking about this general idea for a while, and the long-term plan is to deprecate APK in favor of user_panels, which she and I would co-maintain)

But Users != Profiles, which raises the question: where does MySite fit into the above discussion? It doesn't, really. MySite doesn't use nodes as a data storage mechanism, and it's not about building user profiles. It's more analogous to something like an iGoogle homepage - which in turn entails that it provide its own rendering logic for a DND interface. But it's still very much within the 'user-centric' scope. The disjointedness of these connections points to what I believe to be the second major problem with Drupal's user handling: there are different conceptual axes along which any given user-centric page can be organized, and we're not always clear on which one we're talking about. Specifically, I see there being three axes: the user profile (bio/nodeprofile/content_profile, APK), the user homepage (mysite), and the user account page, which the core user module currently provides. Three, because of basic structural differences at the access level:

  • The account page is strictly user-facing; menu callback-level access is private.
  • The homepage is typically user-facing, with the potential for exceptions; menu callback-level access is semi-private.
  • The profile is public-facing, with the potential for restricting access to sub-components; menu callback-level access is public.

Note: There are some common exceptions to these access settings, but I'm not aware of any that can't be handled easily.

I don't know if this division has been explicitly articulated anywhere else, but its basic tenets strike me as being implicit in almost all of the discussions about Drupal's user handling. It's the conceptual underpinning over which many such discussions break down, because folks tend to (quite understandably) build a conceptual model of users based on the use case they've worked/are working from. Breakdown tends to occur over this problem: a piece of content that clearly belongs on the private account section/axis for Site A equally clearly belongs in the public profile section/axis for Site B. I think that core's existing system of providing user categories probably gets the gold star for best recognizing this reality, as it opens up the potential for implementing the user as a dynamic platform, viewable/interactable through many different lenses. Unfortunately, the core system crashes and burns on implementation.

A Platform: User Panels

There are a number of problems that a user platform has to solve if it's going to do better than core, more than just the ones I've described above. But they're a decent starting point, so I'll tackle the two major issues - node data retrieval & display, and handling of different user 'axes' - to begin with. A quick excerpt from earlier in the article:

...if you're storing a whole bunch of data about users in their corresponding node, then you've got to pick a render-time strategy for teasing out the particular subsets of data you want and arranging them on the page...Handing it off to another module first is the better option, because that module can make data-organizational level decisions, then present a consistent package to the theme layer.

That.Is.Panels. Minus the fact that Panels is not even remotely restricted to node data, it's a passable description of what Panels does: it grabs a specific bit of data and arranges it with respect to all the other pieces of data, all the while interacting with and presenting a consistent package to the theme layer. Problem 1, check.

The second issue is a little more complex, as it has to do with the way that Panels' context system works. But it's also the essence of user_panels-as-platform. I don't want to digress into the depths of the Panels engine, though, so I'll start with the final vision. PLEASE note that this description simplifies a number of concepts for clarity & brevity:

  • Modules such as nodeprofile, bio, content_profile, mysite, etc., would provide the content they create and store as pane types to be used by the Panels engine. (Things provided by core can be packaged into the user_panels module itself).
  • Through an administrative GUI, site admins can choose which (if any) of the different axes - private, semi-private, and public - get to use which of the various pane types provided by those modules.
  • Site admins can choose [system] paths at which each of these axes should reside, as well as whether or not to enable the semi-private or public axes at all.
  • Site admins can also set up how each of the displays for the axes should look, and set the override mode for each axis: either 'blueprints' or panels_page-style.

I'm hoping that the only particularly difficult thing to grok in that bullet list is the 'override mode'. The mechanics are pretty abstract and arcane, but in application, it's really pretty simple: imagine that we're overriding node/% with panels, and we're not doing any funky stuff with different displays for different node types. In this case, panels_page does overrides by using a single display for ALL those callbacks. That means there'll be exactly one row in the {panels_display} table, with one configuration, and EVERY single page request of the form node/% will call up the data from that row. Even if you've got 10 million nodes, they're all rendered through that one display.

In Blueprints mode, however, having 10 million nodes would mean that you also have 10 million displays. The difference is significant because it means that for each of those nodes, the node's owner is able to control how his/her node looks without affecting how any other nodes look. All the site admin does is create a 'blueprint' that provides all new nodes with a pre-configured display, that the owner can then change at will. In other words, everyone gets to control the appearance of their own node - or for our case, their profile, or homepage, etc. This is the paradigm under which og_panels operates.

Hopefully I'll have time to write up a little more about this paradigmatic difference, and potentially some efforts towards abstracting the process of writing a blueprints-based system (panels_page-style overrides are fairly straightforward by comparison), but that's all a separate blog post. For our purposes here, the bottom line is: Site admins can decide whether all user_panels are identical (created by the site admin), or if the users should be able to modify them.

Most of what needs to be done to make this a reality isn't actually that hard. We'd need to stitch together an admin interface, and pull in pieces of code that have already been written and tested in og_blueprints and panels_page. Abstracting the blueprints paradigm would be nice, too, but it isn't strictly necessary and can be done later. The only part of this whole idea that I think would be difficult is the very first bullet point in the list - writing the Panels integration for each of those modules. That's the part that'll depend on interest in this idea by the rest of the community.

Aug 29 2008
Nik
Aug 29

Day three already… time flies. Today starts off with a theming talk by my roomie MortenDK – who has apparently recently promoted himself to “King of everything”, which is very humble of him, I’m sure. The Dane managed to cuss his way through ninety minutes of ‘Sex, Drupal & Rock n Roll’ with ease, despite the horrific amounts of heckling from yours truly and Si (LyricNZ).

Next up, I watched William Lawrence with his overview of accessibility in Drupal theming. Some good points here, and answers to questions about screen readers, browser extensions and such. All good!

With it being Friday, we took this opportunity to have a long lunch and went out for Goulash soup, which was grand.

Post luncheon, I attended the Acquia beta test kitchen, where I proceeded to break pretty much everything. I am assured that this is providing them with “useful information”, but I worry that there may have been some fairly colourful language used in reference to certain parts of that session…!

Then followed a quick BoF in the form of a chat about taxonomy and menus, hosted by pwolanin and catch, which proved to be quite useful – lots of stuff about the new hierarchical select module by WimLeers and other titbits like taxonomy_context.module and some potential uses.

The evening takes us to the Taj Mahal indian restaurant, where the service was appalling, but the belly dances were frequent. I hear some people enjoyed that rather more than the meal…

Aug 29 2008
Nik
Aug 29

Thursday, Day two. I didn’t do much today in terms of sessions, but I’d spoken to Kieran Lal of Acquia the day before regarding a voluntary interview… more on that in a tick.

I hear that the testing party went well in the morning, the pancakes weren’t fictitious and lots of people turned up!

Just after lunch, we had the Drupal.org redesign session. Mark Boulton [Design] acknowledges that this particular redesign is potentially the hardest thing he’s ever done, with over 200,000 clients involved in the project – I hope he realises how damn picky we all are!! MortenDK, myself and no doubt several others have put our names down to help out with whatever aspects of the theming that we can.

Next, for me, followed an interview with Leisa, who is the usability expert assigned to interview lots of Drupal.org users prior to the redesign process. I surprised myself with how little I could remember about the various aspects of the front page and most other sections of the site. I genuinely hope that these interviews are going to be useful – I strongly suspect that they will be, as I hear that lots of things were being mentioned repeatedly.

Later that night, we had the welcoming party. You can keep the wine spritzers, thanks - I'll have a beer. A good time was had by all, after we worked out how to actually get beer. Some pretty good photos from the party can be seen on my Flickr account, in the set tagged DrupalConSzeged2008, with the title "DrupalCon Szeged Welcome Party".

Aug 27 2008
Nik
Aug 27

Day one of DrupalCon in Szeged, Hungary. Morten and I made the initial keynote speech by Dries & Co. Well, almost. We were in the door about 4 seconds, there was some clapping, and we left. Couldn’t tell you why we were so late… ;)

Sam Boyer then presented a very decent overview of the Panels module, with some information about the direction of Panels 2 and what’s upcoming in version 3. The key fact that people are most likely hanging on for is the release date for Panels 2 under Drupal 6. Sam seems confident that the answer is something akin to “soon, I promise!” This is a Good Thing. Currently we’re developing a magazine website and using version 6, and really Panels 2 is the last great module that’s not working under that version.

One thing I picked up on was the proposed abstraction of the override functionality (node, taxonomy, etc) into a separate module. This is pretty smart thinking – when I give people a tour of the panels administration interface that powers the Curt Smith website, the override stuff is probably about 40% of the show.

As I write this paragraph, Rasmus Lerdorf is talking about some insane low level PHP optimisation stuff. Some of it I get, some of it is working at a level that is so low that I just can’t afford the brain cells to care about it. He makes a pretty convincing argument for this optimisation though, and points out that you’re basically an idiot if you don’t use opcode caching on your server. Ahem, no comment.

Rasmus also mentions how optimisation can have a positive effect on managing environmental damage – specifically he mentioned the ubiquitous kitten and its untimely demise due to environmental damage. I have taken this concept and run, coming up with the concept of a new unit of measurement for computing – “cycles per kitten”.

Next on the agenda (of most interest to me) is Emma Jane Hogbin’s small business overview, or Wolfgang’s rules module talk. Not sure which yet. More later.

Later: went to Emma Jane’s talk, which was very frank and it was most refreshing to see someone with such vigour and passion talk about her business and practises. Currently listening to ChX talking about menus. I have not a lot to say about that, as I understand very little…

Day one is a great start to an already cool ‘Con, so here’s looking forward to more of the same!

Apr 09 2008
Nik
Apr 09

Just a quick tip for an extra, more accessible theming variable. I have personally found that on nearly all the sites I’ve built, I’ve never had a use for the Mission variable. So it struck me that I could probably use this field to output something else - preferably something relevant to the general workings of the site, though.

So, on this site, I have edited the mission variable and put in the copyright notice that you see at the bottom of the page. I saved the config screen and carried on, thinking that it would just work - I had tested it out on the front page, and the value was appearing where I had placed the $mission variable in the footer area of my page template. No problems, I thought.

Today, I actually noticed that this was not appearing, and I couldn’t work it out for a while. But then I trawled through phptemplate.engine, and in there is some code that sets the $mission variable only on the front page - perhaps that’s why it doesn’t get used so much?

Anyway, I opened up the template.php file for my theme, and placed in it the code below, in the _phptemplate_variables bit under case 'page' - see here. Now I have a usable variable across all pages of my site, with the added advantage that this is accessible from the admin interface at “site information”. I guess that the theme settings API in Drupal 6 may alleviate this problem, but for simple things like updating the year (which is contained in my © statement) in D5, this is a potential time saver (and face-saver) for administrators.

<?php
// populate the $mission variable on every page so we can use it universally
// don't check <front>, it's already handled in phptemplate.engine
if (!$vars['is_front']) {
  $vars['mission'] = filter_xss_admin(theme_get_setting('mission'));
}
?>
Mar 26 2008
Nik
Mar 26

Today I’m just demonstrating a few simple theme adjustments to comments. Comments in Drupal 5 are “not sexy”, out of the box, so in this short post I’m going to illustrate how to:

  • change the text on “submitted” lines in comments (just a little)
  • add nofollow to username links – unless you’re feeling generous
  • remove the “not verified” marker from anonymous users

<?php
// change the "submitted by" text to "posted by"
// note that you can alter the date display too, by changing the way
// format_date() is called - see http://api.drupal.org/api/function/format_date/5
$vars['submitted'] = t('Posted by !a on @b.',
    array(
      '!a' => theme('username', $vars['comment']),
      '@b' => format_date($vars['comment']->timestamp),
    ));
?>

This first snippet belongs in the ‘comment’ section of your _phptemplate_variables function in template.php; I have an example of this here.

Moving on to that “not verified” business, and the question of holding on to your link juice, adding an overridden theme_username() function is the way to go. Here’s some code, which you can also place in your template.php file in your theme. Remember, if you don’t have that file in your theme, you can just create it yourself. I personally recommend Zen as a starting point for theming.

<?php
// we can't use phptemplate_username as this is already declared in that engine
function mytheme_username($object) { // rename according to your theme

  // this basically means "if the user has an account"
  if ($object->uid && $object->name) {
    // Shorten the name when it is too long or it will break many tables.
    if (drupal_strlen($object->name) > 20) { // obviously you could change this value
      $name = drupal_substr($object->name, 0, 15) .'...';
    }
    else {
      $name = $object->name;
    }

    if (user_access('access user profiles')) {
      // we could nofollow the internal links too (authenticated users' pages)
      // but there's not really any point - my site doesn't use membership,
      // so usernames are not highlighted -
      // commented line below would do that, though.
      // $output = l($name, 'user/'. $object->uid,
      //     array('title' => t('View user profile.'), 'rel' => 'nofollow'));
      $output = l($name, 'user/'. $object->uid,
          array('title' => t('View user profile.')));
    }
    else {
      $output = check_plain($name);
    }
  }
  // if we're entering this branch, the user is anonymous (i.e. we want to nofollow them)
  else if ($object->name) {
    if ($object->homepage) {
      // this is where we're nofollow-ing the external links to comment authors' pages
      // we don't really need to use t() here, as rel=nofollow is language independent
      $output = l($object->name, $object->homepage, array('rel' => 'nofollow'));
    }
    else {
      $output = check_plain($object->name);
    }
    // commenting out this line prevents "teh ugly" in the $submitted text
    // $output .= ' ('. t('not verified') .')';
  }
  else {
    $output = variable_get('anonymous', t('Anonymous'));
  }
  return $output;
}
?>

If you’ve got any other simple tips like these, let me know, so I can use & share those too!

Update: This code works fine under Drupal 6 as well – but don’t forget to clear your theme registry when you’ve modified things in your theme!
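
If you'd rather clear it from code than through the admin interface, something like this should do it in Drupal 6 (a one-liner sketch; drupal_rebuild_theme_registry() is, as far as I'm aware, the D6 function for this):

<?php
// Drupal 6: force the theme registry to rebuild so new or changed
// overrides in template.php get picked up.
drupal_rebuild_theme_registry();
?>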

Feb 28 2008
Nik
Feb 28

This snippet of code gives a brief example of how to rewrite components of the $links variable to make them prettier :) Specifically, here I’m overwriting the link generated by the Forward module. You can see the result below: the little envelope icon labelled “Email”. Normally, this would just say “Forward this page”, which is a bit… well, it could be better. Obviously, it’s nice to be able to change these things to taste.

There are two ways to achieve this result: using theme code in template.php, or inside of a helper module. First, I’ll discuss the module approach.

The helper module method is what I had originally used. It’s a little neater, in that you code it once and forget about it, and it doesn’t clutter up template.php’s _phptemplate_variables function, which can easily become bloated with code.

In the module, I’ve added a function to implement Drupal’s hook_link_alter() function. Here’s the code to do it:

<?php
function mymodule_link_alter(&$node, &$links) {
  foreach ($links as $module => $link) { // iterate over the $links array
    // drupal_set_message('<pre>' . print_r($links, TRUE) . '</pre>'); // uncomment to display your $links array

    // check if this element is the forward module's link
    if ($module == 'forward_links') {
      $title = t('Email this page to a friend'); // change the title to suit
      $path = path_to_theme() . '/images/email.png'; // make an image path

      // now update the links array
      // set the title to some html of the image and choice of link text
      $links[$module]['title'] = theme('image', $path, $title, $title) . ' Email';
      // let's set some attributes on the link
      $links[$module]['attributes'] = array(
        'title' => $title,
        'class' => 'forward-page',
        'rel' => 'nofollow',
      );
      // this must be set, so that l() interprets the image tag correctly
      $links[$module]['html'] = TRUE;
    }
  }
}
?>

Ok so really, this ought to be done in the theme layer. Like I said, it’s perhaps not as compact and neat, but here’s the code. It’s mostly the same, but note a couple of additions and changes: firstly, we are not changing $links - this is a pre-rendered string by the time it gets to template.php. We need to get to the original goodies! Hence, we use $vars['node']->links[module-name][field-name].

Secondly, note that altering one of the original link values does not mean that the node’s $links variable is now correct. This is the bit that caught me out! We must regenerate the $links variable using the theme_links() function, as per the last line of code below. This mimics what phptemplate.engine does in core.

<?php
function _phptemplate_variables($hook, $vars = array()) {
  switch ($hook) {
    case 'node':
      foreach ($vars['node']->links as $module => $link) {
        if ($module == 'forward_links') {
          $title = t('Email this page to a friend');
          $path = path_to_theme() . '/images/email.png';
          $vars['node']->links[$module]['title'] =
              theme('image', $path, $title, $title) . ' Email';
          $vars['node']->links[$module]['attributes'] =
              array('title' => $title, 'class' => 'forward-page', 'rel' => 'nofollow');
          $vars['node']->links[$module]['html'] = TRUE;
        }
      }
      // regenerate the pre-rendered $links string from the altered array
      $vars['links'] = theme('links', $vars['node']->links,
                             array('class' => 'links inline'));
      break;
  }
  // hand the modified variables back to the engine
  return $vars;
}
?>

You can achieve this effect for anything that’s in the $links array. On this page (below), you can see the link I’ve described here, another for print-friendly pages and also a themed comment link.

Feb 27 2008
Nik
Feb 27

Currently there are three options that I know of for creating error pages in the Drupal system. I’m going to show here which I think is the best, for reasons of usability, performance and general webmaster sanity. At the foot of this article, there’s some free code too!

The options:

Drupal’s build in error page support

Drupal provides, out of the box, two fields in the Error Reporting configuration screen. These fields can be set to any internal Drupal path. Usually, they will be set to point the user to a page created specifically for the purpose.

The downside to this is that these will now be nodes in the system, and as such they will show up in popular content lists, site searches and the like. This is clearly not desirable.

Update: I have been made aware of an outstanding issue in Drupal core with error pages. This issue means that a user without “access content” permissions cannot access 403 error pages that are created as nodes. This is true in Drupal 5.x and even 6.1, and is another weak point for this mechanism.

Search404 module

Until very recently I was using search404 but I became less than pleased with the results. To start with, I thought I was aiding usability, but as it transpires… not really. The real killer for me is that search404 often gives me empty search result sets, because the path elements just don’t relate specifically enough to the content.

For instance, the node “/blog/my-drupal-article” will almost certainly contain all the words “my drupal article”, but may not contain the word “blog”, except in the path. This means the search doesn’t catch that article, so you get no results. Given that every 404 page the module generates incurs a DB query automatically, this query is effectively just trash, but cannot be disabled.

Customerror module

Customerror module skirts round the issues of having nodes as error pages. The module makes error handling pages available as custom paths inside Drupal. These aren’t nodes, so we have no issues there.

The configuration screen offers up two textarea fields which will contain the page content to be rendered on each of the 403 and 404 page errors. The key to making this more special than just a plain text or html page is the availability of PHP processing for these fields whilst not requiring nodes for the task.

Ok, so what I’m doing here is recommending customerror as the best choice for this task. That said, let’s throw down some code and make this more useful.

To start, visit the standard Drupal error reporting page at “/admin/settings/error-reporting”. Here, set the default error page fields to “customerror/403” and “customerror/404” respectively, if you’re going to override both these pages.

Now, on the Custom Error module’s config page at “/admin/settings/customerror”, enable both checkboxes that say “Allow PHP code to be executed for 40x”. Now let’s look at handling the 404 error. I’ve added the following code for this site, in the “Description for 404” textarea, and a suitably snappy title in the other field: “404 Not Found Error: No content found at the requested URL”.

<p>Sorry, no content was found at the requested path - it's possible that you've requested this page in error.</p>

<p>Use the search form below, or go to the <a href="http://www.kinetasystems.com/">home page.</a></p>

<?php
// check that the search module exists and the user has permission to hit the form
if (module_exists('search') && user_access('search content')) {
  // cool! - customerror doesn't trash the page request and the full path is available
  $path = $_REQUEST['destination'];
  // bin anything that's not alphanumeric and replace with spaces
  $keys = strtolower(preg_replace('/[^a-zA-Z0-9-]+/', ' ', $path));

  // retrieve the search form using the data we've pulled from the request
  // note that we can override the label for the search terms field here too
  print drupal_get_form('search_form', NULL, $keys, 'node', 'Search terms');
}
?>

In the 403 error fields, we adopt a similar technique. I’ve used “403 Forbidden Error: Access to this page is denied” for the title. Here we display different content depending on whether or not the user is logged in. If you’re running a site with lots of members, you can uncomment the user login line towards the bottom and the login form will be rendered on the 403 page!

<?php global $user; ?>
<?php if ($user->uid): ?>
  <p>Sorry <?php print $user->name; ?>, you don't have permission to view the page you've just tried to access.</p>
  <p>If you feel that you have received this message in error, please
    <a href="/contact">contact us</a> with specific details so that we may review your access to this web site.</p>
  <p>Thanks</p>
<?php else: ?>
  <p>This page may be available to clients and registered users only. Please select from one of the other options available to you below.</p>
  <ul>
    <li><a href="http://www.kinetasystems.com/user/login?<?php print drupal_get_destination(); ?>">Login</a> to view this page</li>
    <li>Use the <a href="/search">search</a> facility</li>
    <li>Go to the <a href="http://www.kinetasystems.com/">home page</a></li>
    <li>Go to the <a href="/sitemap">site map</a></li>
  </ul>
<?php //print drupal_get_form('user_login'); ?>
<?php endif; ?>

Now we’ve got friendly, usable error pages that are helpful and don’t scare off visitors!

Updated 24th April 2008

Feb 23 2008
Nik
Feb 23

Quite frequently I’ve been asked about putting images into site “sections”, depending on path or menu trail. Look up, that “Blog” image is what I’m talking about. It’s on all blog related pages. So, here goes – it’s nice to be able to finally offer this information here.

The first main chunk of code attempts to get a menu item and build an image link from that. The second chunk assumes failure of the first and tries again using a partial path method.

If all nodes on your site have menu entries, you can use that piece of code independently. Likewise, if all your nodes can be identified by the first bit of the path, the second chunk will stand alone.

I have got a mixture of the two on this site. A lot of the entries have menu entries, but the blog and portfolio sections do not. Therefore, the image links in those sections are powered by the second chunk.

Note: this code expects to find files of the GIF type in a directory ‘images/sections’ within my own theme directory. It will also only pick up files that have names which are all lower case. In the case of menu entries that contain spaces, those will be replaced with hyphens, so if the menu link is “Site Map”, the image name will have to be “site-map.gif”. Path-based matching is really dependent on how you are using aliases (e.g. your pathauto.module setup) and isn’t really within the scope of this article. You’ll have to figure that out yourself.

Okay; in order to not crowd up _phptemplate_variables(), I add just this one line of code in template.php inside that function (under ‘page’ – see here for details):

<?php
$vars['section_link'] = get_section_link();
?>

Then, elsewhere in that file, this code:

<?php
function get_section_link() {
  // MENU - attempt to make a section link from a menu item, for this page
  // get active menu trail into an array
  $menu_items = _menu_get_active_trail();
  // $menu_items[1] is the top parent of our menu container, e.g. primary links
  // this gets the required menu item into an array
  $link_array = menu_item_link($menu_items[1], FALSE);
  // whip out spaces and make the name lower case
  $section_name = strtolower($link_array['title']);
  $section_name = str_replace(' ', '-', $section_name);

  if ($section_link = render_link($section_name, $link_array['href'])) {
    return $section_link;
  }

  // PATH - if we've not returned, we couldn't make a valid link from menu
  // let's try a path approach instead?
  if (module_exists('path')) { // dependency for drupal_get_path_alias
    $sections = array(); // an empty array to collect stuff in

    // get all the top level links in the primary nav (id of 2) into an array
    $primary_nav = menu_primary_links(1, 2);
    // iterate over the array and pull out the top level paths
    foreach ($primary_nav as $menu) {
      // get the first element of the aliased path for this menu item
      $path_element = explode('/', drupal_get_path_alias($menu['href']));
      // put the first chunk of each path onto an array
      $sections[] = $path_element[0];
    }
    // get the aliased path for the page we're on
    $section = explode('/', drupal_get_path_alias($_GET['q']));
    $section_name = $section[0];
    // if the path matches a nav item, create a section image
    // (here the link target is the top-level section path itself)
    if (in_array($section_name, $sections)) {
      if ($section_link = render_link($section_name, $section_name)) {
        return $section_link;
      }
    }
  }
}

function render_link($section_name, $href) {
  // construct the image's path (mine are GIFs stored in a subdir of my theme)
  $image_path = path_to_theme() . '/images/sections/' . $section_name . '.gif';
  // make some text for the image's alt & title tags (SEO, accessibility)
  $image_alt = $section_name . t(' section');
  $image_title = $section_name . t(' section link');
  // render image html using theme_image (returns NULL if file doesn't exist)
  $section_image = theme('image', $image_path, $image_alt, $image_title);
  // if the image rendered ok, render link using above variables
  return ($section_image) ? l($section_image, $href,
      array('title' => $image_title), NULL, NULL, FALSE, TRUE) : NULL;
}
?>

Then finally in page.tpl.php (and any other page templates) we can use the variable in the “Drupal Way”, and print our variable where we like!

<?php if ($section_link): ?>
  <div id="sectionTitle">
    <?php print $section_link; ?>
  </div>
<?php endif; ?>

Feb 14 2008
Nik
Feb 14

Yesterday afternoon, I popped my head onto the Drupal IRC channels, and suddenly everything went a bit strange…

Somebody shouted, “Drupal 6 is out!” – and then the whole place erupted into hysterical blogging and Digg-ing (over 1000 now) and general chaos. Millions of tiny blue aliens are unleashed – Drupal 6 is ALIVE!

But the real story is obviously the news of the latest release of our favourite content management system-stroke-framework! Or should that be “community site building system” or “social publishing platform” or… well. That’s the great thing about Drupal, it can do pretty much anything! Expect it to be doing the washing-up for you by v7.

So what’s under the hoody? More features! Even better security! Massively improved language support! Performance streamlining! Usability enhancements up the wazoo! Just… loads of stuff. Loads.

Please Digg the story if you can, or for a clearer idea of what’s going on, check the official Drupal.org thread on the release here. As if it wasn’t crystal clear already…

Oh, and here is the official Drupal 6 press release. I quite like this doc – cos I helped write it! I wrote the intro text and title (with some help from Keith and final mods). That’s my fifteen nanoseconds of fame…

Feb 11 2008
Nik
Feb 11

I sometimes run into people on the Drupal IRC channels that have a theming issue they just can’t fathom – dysfunctional CSS. Some poor guy has overridden a core CSS class, but his styles just don’t work. I’ve been there myself before and I know it can be very frustrating.

Rather than showering your CSS with !important tags, here is an alternative – remove the offending style file altogether. You can then copy the style information into your own theme, remove any bits that you don’t want and alter it as you see fit.

Let’s see how we get rid of those sites/default/files and regain control of our CSS. Put some code similar to this in the “page” section of your template.php file’s _phptemplate_variables function (see this example).

<?php
// get all the current css information into an array
$css = drupal_add_css();

// copy stuff you want to keep from these files into your theme's style.css
// or maybe make a separate file for that and @import it into that file

// now we can ditch unwanted core css files from the array and they won't be included
unset($css['all']['module']['modules/user/user.css']);
unset($css['all']['module']['modules/node/node.css']);

// and now, removing the css files of some contributed modules
// I'm putting them into an array to save space and code repetition
$rm[] = drupal_get_path('module', 'content') . '/content.css';
$rm[] = drupal_get_path('module', 'devel') . '/devel.css';
$rm[] = drupal_get_path('module', 'gotcha') . '/gotcha.css';

// now we can remove the contribs from the array
foreach ($rm as $key => $value) {
  unset($css['all']['module'][$value]);
}

// now place the remaining css files back into the template variable for rendering
$vars['styles'] = drupal_get_css($css);
?>

You should now be able to see that the CSS files have disappeared from the head of your document – a pretty drastic step, but it’s pretty much guaranteed…!

Feb 07 2008
Nik
Feb 07

Recently released – a Javascript aggregator module for Drupal 5. This functionality is included in core in Drupal 6, but users of 5.x are left hanging on. Enter the Javascript Aggregator module.

I put this onto my own site straight away to test it out, as I’m using shared hosting at the moment, and I want to reduce page load times as much as I can. It seems to work just fine. The module requires no core patches, and also includes an interface for file exceptions, as TinyMCE (natch) causes the module to fail.

Theming purists should maybe note the issue I filed here, which clears up a problem with having to add unnecessary PHP into the page.tpl.php file. It looks like this will get sorted and patched fairly soon, though [Edit: that’s fixed]. This is another great way to speed up Drupal page load times!


Sep 14 2005
Sep 14

A lot of people have already blogged about the new Planet Drupal. It's about time to get my act together and blog about it, too.

I have been submitting a few patches for Drupal's aggregator module recently and working closely together with Dries (Drupal's Benevolent Dictator for Life) to create the planet. He mercilessly reviewed (and applied!) my patches, commented on my ideas and suggested lots of improvements. Without him the planet wouldn't be here today.

As I already stated in the original announcement, we run the planet using Drupal, of course (dogfood etc.), and not the (otherwise quite nice) PlanetPlanet, which most other planets use.

The now improved aggregator module is capable of creating a Planet site like Planet Drupal in a few minutes, mostly out of the box and requiring no changes to the code. There are a few minor issues which we need to sort out, but the Planet is there now, and you can subscribe to it in any RSS feed reader.
