Sep 05 2018
Sep 05

Soup up your dev environment with FastCGI Process Manager: php-fpm

Say you have already set yourself up with a web development environment following Andy Miller’s macOS dev setup guides, part one and part two. However, one can grow tired of pulling that daily PHP version switch. The question is: is there a way to run multiple instances of PHP side by side? Yes, there is. Let’s find out how.

STANDARD

First, a quick recap of how things are currently wired up and how Apache renders a PHP file using x-httpd-php and mod_php.

  • Apache's core config maps <FilesMatch "\.php$"> to SetHandler application/x-httpd-php. Notice that this handler runs inside Apache and is not standalone.
  • Apache then knows which single PHP Apache module to load, as defined in /usr/local/etc/httpd/httpd.conf. For example, a PHP 5 module definition looks something like: LoadModule php5_module /usr/local/opt/php@5.6/lib/httpd/modules/libphp5.so
  • Moreover, when dealing with a PHP file, Apache uses CGI: launch a process -> initialize the environment -> digest the request -> return the result. In a nutshell, FCGI is the protocol that web servers (like Apache) use to delegate work to other processes.

 

SOUPED-UP

We are no longer going to use mod_php; instead we replace it with standalone php-fpm processes. In other words, php-fpm is a PHP daemon configured to listen for and respond to the FCGI protocol. This means that we need to start/stop PHP independently of Apache. The proxy_fcgi_module is used so that the SetHandler can connect to the php-fpm socket. I'm going to go through this step by step for adding PHP version 5.6; later you can follow the same steps to set up other PHP versions.

Step 1 : Enable the proxy modules

Add the proxy modules: In /usr/local/etc/httpd/httpd.conf, find the commented lines below and uncomment them:


LoadModule proxy_module lib/httpd/modules/mod_proxy.so
LoadModule proxy_fcgi_module lib/httpd/modules/mod_proxy_fcgi.so

 

Step 2 : Configure php-fpm

Copy the listen path: In /usr/local/etc/php/5.6/php-fpm.conf, find the listen path and copy it somewhere safe. We will point our vhost to this listen path in the next step.


listen = /usr/local/var/run/php56-fpm.sock

Set up your log paths: In the same file, /usr/local/etc/php/5.6/php-fpm.conf, look for both error_log and access.log. Uncomment them and set:


error_log = /Users/YOURNAME/sites/log/5.6/php-fpm.log
access.log = /Users/YOURNAME/sites/log/5.6/$pool.access.log
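
Note that php-fpm will not create these directories for you, so make sure they exist before you start it (a quick sketch, assuming the example layout used above; adjust the path to your own):

mkdir -p /Users/YOURNAME/sites/log/5.6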

Set your user/group: Again in the same file, /usr/local/etc/php/5.6/php-fpm.conf, look for user and group. Change them from _www to whatever user and group you have set Apache to run as:


user = YOUR_APACHE_USER
group = YOUR_APACHE_GROUP

 

Step 3 : Re-configure your Vhost

Add a new ServerAlias: In /usr/local/etc/httpd/extra/httpd-vhosts.conf, add a server alias that is version specific. In this case php56 seems reasonably simple and clear. (Note that this depends on your dnsmasq wildcard; in this example my host is *.test.)


ServerAlias *.php56.test

Redefine your SetHandler: As mentioned, we will use the proxy handling. Change application/x-httpd-php to the following (with VirtualDocumentRoot set to /Users/YOURNAME/sites/%1/public_html). This has to match the listen path you gathered in step 2. Note that I'd like to pass the socket to a designated port on the FCGI side, i.e. 8056 for version 5.6.


SetHandler "proxy:unix:/usr/local/var/run/php56-fpm.sock|fcgi://localhost:8056"

Add Directory specifications:

<Directory "/Users/YOURNAME/sites/*/public_html">
   DirectoryIndex index.php index.html
   Options Indexes MultiViews FollowSymLinks
   Require all granted
   AllowOverride All
</Directory>

 

Step 4 : Run php-fpm

Fire up php-fpm: Running php-fpm with the -D flag forces the process to run in the background.


/usr/local/opt/php@5.6/sbin/php-fpm -D

Test your local site. You can verify that the php-fpm process is running in Activity Monitor.
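
If you prefer the command line, a couple of quick sanity checks work too (the hostname below is just an example project folder under your sites directory):

ps aux | grep php-fpm
curl -I http://myproject.php56.test/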

Step 5 : Repeat steps 2 - 4 for other versions of PHP (7.x); in this case, 7.2.

full vhost:

<VirtualHost *:80>
   ServerAlias *.php72.test
   VirtualDocumentRoot /Users/YOURNAME/sites/%1/public_html
   <FilesMatch "\.php$">
     SetHandler "proxy:unix:/usr/local/var/run/php72-fpm.sock|fcgi://localhost:8072"
   </FilesMatch>
   <Directory "/Users/YOURNAME/sites/*/public_html">
       DirectoryIndex index.php index.html
       Options Indexes MultiViews FollowSymLinks
       Require all granted
       AllowOverride All
  </Directory>
</VirtualHost>

Error and access logs: in /usr/local/etc/php/7.2/php-fpm.conf, set:


error_log = /Users/YOURNAME/sites/log/7.2/php-fpm.log
access.log = /Users/YOURNAME/sites/log/7.2/$pool.access.log

Fire up your php-fpm:


/usr/local/opt/php@7.2/sbin/php-fpm -D

Now restart Apache and test away.

The last thing to do is to add the start scripts for your php-fpm processes to your startup login items.
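
If you installed these PHP versions through Homebrew, one convenient option is to let Homebrew register them as login services instead of hand-rolling start scripts (a sketch, assuming the php@5.6 and php@7.2 formulas):

brew services start php@5.6
brew services start php@7.2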

A quick note: there are other ways you could route your PHP version. One approach not mentioned above is to place your web projects in different folders and, instead of setting a ServerAlias, configure the Directory accordingly.

 

Conclusion

Move your dev environment from mod_php to php-fpm and not only can you easily switch between projects without the hassle of reconfiguring Apache, but you also enjoy the higher performance.

As a reminder, this post focuses on a local development environment and assumes that security and heavy request load are not a concern. Although you can find the right balance for your production servers with the right tweaks and with security in mind, I'd leave that discussion for another post. In case memory is a concern, you can always tune php-fpm's memory usage so it won't deprive other processes. Happy deving!

Image credit: Jonathan Elliott : https://www.flickr.com/photos/jon3lliott/
 
Aug 20 2018
Aug 20

Docker is gaining traction in the software development industry at such a phenomenal rate that more and more teams are adopting it into their processes. Keeping everyone on the same page with technology stacks has become increasingly difficult as the technology itself becomes more complex. There are countless technical articles on Medium extolling the virtues of Docker and how it can help mitigate these types of problems. Now that I have been using Docker as a local development environment for over a year, I’d like to talk about the pros and cons of making the switch.

 

Pros

Docker on Mac and Windows is better than ever.

Docker is getting more and more mature by the day, especially as more developers are turning to it for their development and production environments. Docker for Mac and Docker for Windows reduce much of the complexity of leveraging Docker on non-Linux environments through abstraction. It now works as simply as any other application in your environment -- install it in the same fashion as your other tools like your text editor and browser -- and you are off to the races.

Docker gives you consistency across your team.

Another aspect of Docker that I love is the confidence that you have the same setup as your team. This is great for a couple of reasons: there is so much value in being able to run with the assumption that your entire team is using the same setup, and it enables you to write scripts and processes that empower your entire development team to perform common operations with a simple command.

Docker eases the pain of debugging environments.

Our process used to consist of tracking down pages and pages of documentation, only to find that the documentation was out of date and needed to be updated. So where did I end up? Spending days poring over posts on Stack Overflow trying to figure out how to update the services I'd been using on my machine, which were probably different from everyone else's on my team. Each developer then needed to do that exploration independently, because each solution would be slightly different. With Docker, you can easily isolate and eliminate environment issues across your team without needing to know how someone's machine is set up.

Interesting avenues of automation open up with Docker.

As someone who is fascinated with automation, I’ve always tried to find shortcuts to doing the boring, repetitive jobs that take too much time to do manually. Have to update the database on the development server? Ok, great, you just need to look up the password for the development server (which could be any of 4 different places, depending on the project) and once that’s done, you realize you also don’t have the password for the live or staging environment, so you have to wait 3 days for the call from the IT firm that manages that server to give you credentials. With a Docker infrastructure, you can easily transfer the environment (with some small utility changes) to the CI system of choice. Most of the prominent CI solutions available today integrate well with Docker.

Docker speeds up the provisioning process.

You can avoid wild goose chases by providing all you need from within the docker container, potentially using a set of shared ssh keys, or requiring users to use personal keys on an identity service (best case scenario, but impractical for some teams).

Docker for Mac and Windows is more stable than ever before.

Even with its rapid development cycle, the team working on Docker for Mac has done an exceptional job keeping the platform as stable as possible. This is important because the app auto-updates. Despite the fact that I am on the Edge version of the platform (which seems to update weekly), I have only had one issue that forced me to roll back. I ended up losing a day of development time to this bug, but after I found a way to roll back gracefully, a bugfix was released the following day to remedy the issue. Despite this one small glitch, I still stand by Docker, because I had more conflicts when I was relying on Homebrew or compiling binaries for services manually.

The Docker community is huge and resources are plentiful.

There are a huge number of images (the last count on Docker Hub has it at 100,000+) to pull for free from Docker Hub. These images are the blueprints for creating your containers, which will house all of the services for your application. You will find varying levels of documentation across these images, but for the most part they are ready to plug into your application.
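
As a trivial illustration of how low the barrier is, pulling and running an off-the-shelf image takes two commands (nginx here is just an arbitrary public image):

docker pull nginx
docker run --rm -p 8080:80 nginx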

 

Cons

Despite all of these good things, everything is not sunshine and rainbows with Docker. There are many things about Docker that can be tricky to grok, and it helps to have someone on your team familiar with the technology.

Docker has some gaps in documentation.

Docker is moving at a fast pace and it is very hard to keep up with the latest advances and changes. While there is some fantastic documentation, there seem to be some gaps specifically within the docs for the abstraction layers (Docker for Mac, Docker for Windows).

Docker has performance problems on non-native environments.

Despite macOS being built on a Unix layer, Docker still requires an actual Linux kernel in order to perform its operations. What Docker for Mac provides is an abstracted VM containing that kernel, which you never interact with directly. You will interact with the containers within that VM, but they are networked together with your host in a way that means you will rarely need any information about the VM itself.

Using Docker (or any VM-based architecture) locally has one drawback which can be significant depending on your stack and your team's needs. The disk mounting / volume shares Docker for Mac provides are getting better with every release, but I have seen some pretty substantial drop-offs in performance when doing intensive database operations.

Docker for Mac performance: A Test Case

My test case was using Docker for Mac and installing a web application framework (in my case, Drupal). On natively compiled services (i.e. Homebrew), I could install Drupal in 45 seconds. Originally, with out-of-the-box settings, Docker for Mac clocked in at 20 minutes and 32 seconds. I knew this couldn't be reality, so I ended up twisting some knobs and flipping some switches (probably worth a follow-up article at some point) and got it down to 6 minutes, 47 seconds. While this is a significant improvement over the factory settings, it still leaves a lot to be desired in terms of performance.

There can be a significant learning curve to migrating to Docker.

Learning Docker is a significant time sink. There are a lot of concepts that are just different enough from a Virtual Machine infrastructure to cause confusion and make unlearning concepts from other areas a bit more of a challenge, even for experienced developers.

 

Is Docker right for me?

Identify your pain points.

Determining whether Docker is right for you is really up to your team. If you find yourself using a fairly reliable stack and on consistent development platforms, perhaps Docker is an unnecessary abstraction to add to your team’s workflow. However, if your team struggles with developer support on a platform level, Docker just might be the thing that makes the difference.

Docker can involve a large paradigm shift on a team.

Running Docker on a local, sandboxed workstation can be beneficial -- and many development teams stop here. However, should your use case and expertise on the team allow for it, it is also possible to have Docker as a deployment target on your servers. You will find this best suited to situations where you have supporting architecture to aid in your CI/CD processes. As expected, most of these components require a specific level of expertise. If you have an infrastructure based on VMs or even bare metal, this approach will come with a significant effort to build up the stack and train the developers/DevOps on how to use it.

Reliable, but at a cost.

Ultimately, I find Docker to be extremely reliable and use it in my projects that require cross-platform support. In many open source projects, for example, it is great to meet people where they are and let them bring whatever they have access to. Docker can be extremely beneficial in these cases.

Let us know how you are using Docker.

Please reach out and let us know how Docker has been working for your team and leave a short pros/cons list of your own for the edification of others!

 

(Whale image courtesy of Max Pixel)

Feb 26 2016
Feb 26

I remember back at DrupalCon Portland 2013 asking Angie Byron (webchick): why not ship the new Drupal 8 with a blank canvas theme, so front-end developers could do their job without needing to shop for a third-party theme? She said that was a good argument and should be taken into consideration, and it looks like my request was fulfilled. Thank you.

Creating a new theme

Classy and Stable are just pure CSS and HTML code, exactly the blank canvas that you need to create your new theme -- and they are part of core. The main difference between them is "classes". Classy adds class attributes so you can use them to build up your CSS code. Stable, on the other hand, is more purist and won't give you any of this, so you will have to override basically every single template to add HTML attributes.

Do not expect to find clutter like SASS, LESS, Grunt, Bower, npm, gems, Compass, Foundation, Bundler... All of these tools are awesome, but after years of using them, considering the evolution that they have gone through, I'm starting to doubt if they are making our lives easier or more complicated (this goes for you Ruby and your gem versions -- thankfully libSass came to the rescue). It's up to you "the developer" to decide what tools you are going to use and set up your environment.

Enable the debug mode

You will not get very far on your first D8 sites unless you enable debug mode. Floyd explained how to do this very well in a previous post: Ten Pointers for new Drupal 8 developers.

Where do things belong?

This site being my first D8 attempt, I approached the new theme as I would any other Drupal site: Display Suite, Context, preprocess hooks, and you don't need to touch a single PHP template. I'm well known for writing my themes without touching a single template, but in D8 that's no longer the desired path to take. In D8 your HTML code should live in Twig templates, but that leaves you with the dilemma of the node display user interface and modules like Display Suite. I'm OK using these modules to lay out my fields, but if I need to add a class I have to edit the template? Confusing, right?

We recently had the opportunity to have Angie at our office, and I cornered her (sorry, Angie) to ask about these issues. She confirmed that the trend is indeed to put your HTML code into Twig templates, as the community made huge efforts to remove all this code from the D7 functions.

It is clear to me that you should send an object from the controller to the template with all the needed values, and then build your HTML there using the variables you just passed. However, it's really quick to use Display Suite to lay out your node fields -- especially if you take multiple displays into account. It's also intuitive for the user, because they can see the field layout in the admin interface, and you can export it into code, which makes you less dependent on the database.

This line has become blurrier than ever, and perhaps I'll have to wait some time until a trend is established. If you already have any input, please let us know about it.

Checking for emptiness

Be careful when you try to find out whether the output of {{ foo.bar }} is empty. You are dealing with render arrays in your Twig templates and, as explained over here on Drupal.org, it's not easy to know when one is truly empty.
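
One workaround you will see in the wild is to render the field and then check whether anything printable is left. Treat this as a sketch (the field name is made up), not the official answer, since rendering has its own caveats:

{% if content.field_summary|render|striptags|trim %}
  <div class="summary">{{ content.field_summary }}</div>
{% endif %}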

Responsive images

D8 Responsive Images

Another core feature is responsive images. Under "/admin/config/media/responsive-image-style" you can create different responsive image styles, and if you combine them with your theme's <theme_name>.breakpoints.yml file you will have better control over them.

Here is what our ab2016.breakpoints.yml file looks like:

ab2016.mobile:
  label: mobile
  mediaQuery: ''
  weight: 0
  multipliers:
    - 1x
    - 1.5x
    - 2x
ab2016.tablet:
  label: tablet
  mediaQuery: 'all and (min-width: 768px) and (max-width: 992px)'
  weight: 1
  multipliers:
    - 1x
    - 1.5x
    - 2x
ab2016.desktop:
  label: desktop
  mediaQuery: 'all and (min-width: 992px) and (max-width: 1200px)'
  weight: 2
  multipliers:
    - 1x
    - 1.5x
    - 2x
ab2016.desktop-lg:
  label: desktop-lg
  mediaQuery: 'all and (min-width: 1200px)'
  weight: 3
  multipliers:
    - 1x
    - 1.5x
    - 2x
ab2016.1vw:
  label: Viewport Sizing
  mediaQuery: ''
  weight: 4
  multipliers:
    - 1x
    - 1.5x
    - 2x

When creating a new style for a specific breakpoint you have three options:

  • Select multiple image styles and use the sizes attribute: This option will output an img tag with a srcset, something similar to this:

    <img property="schema:thumbnailUrl" 
         srcset="/sites/default/files/styles/max_768x768/public/my-image.png?itok=ibfcP6IN 768w, 
                 /sites/default/files/styles/max_992x992/public/my-image.png?itok=4QuVPlhU 992w, 
                 /sites/default/files/styles/max_768x768_1_5x/public/my-image.png?itok=Mg4Cdx65 1152w, 
                 /sites/default/files/styles/max_1200x1200/public/my-image.png?itok=6m11wbwz 1200w, 
                 ... 
         sizes="100vw" 
         src="https://affinitybridge.com/sites/default/files/styles/max_2000x2000/public/my-image.png?itok=Hhut0ibw" 
         alt="My image" 
         typeof="foaf:Image">
    
  • Select a single image style: This will give you a picture tag with your style in it, something similar to this:

    <picture>
    ...
      <source srcset="/sites/default/files/styles/max_992x992/public/my-image.png?itok=nJ6RWBL6 1x, 
                      /sites/default/files/styles/max_992x992_1_5x/public/my-image.png?itok=wQl9JTHH 1.5x, 
                      /sites/default/files/styles/max_768x768_2x/public/my-image.png?itok=MGLwtJwj 2x" 
              media="all and (min-width: 768px) and (max-width: 992px)" 
              type="image/png">
      <source srcset="/sites/default/files/styles/max_768x768/public/my-image.png?itok=R04MFCuT 1x, 
                      /sites/default/files/styles/max_768x768_1_5x/public/my-image.png?itok=01HL1dPa 1.5x, 
                      /sites/default/files/styles/max_768x768_2x/public/my-image.png?itok=MGLwtJwj 2x" 
              type="image/png">
    ...
      <img property="schema:image" srcset="/sites/default/files/styles/max_992x992/public/my-image.png?itok=nJ6RWBL6" alt="My image" typeof="foaf:Image">
    </picture>
    
    
  • Do not use this breakpoint: Can't get more clear than that title.

Kick old browsers

For real, draw the line on old browsers and develop only for those that support current standards. What's the point of jumping into Drupal 8 if you can't use responsive images, flex layouts, etc.?

Be patient

Everything is still in early stages, and we collaborated to fix the bugs we encountered:

Conclusion

I'm still puzzled by D8. I've been working with Drupal for many years, and this is not at all my first project with Twig (we have worked with Laravel and Symfony, both with Twig templates, before), but somehow I feel like D8 is not any one of them but rather a hybrid of all of them.

Although it's easy to see when a template is rendered, thanks to the theme debugging feature, it's not easy to find out where that template is called from, and once you find the caller you may still have to understand controllers, entities, services, and so on. We have all read about this, but coming from the "procedural programming" that Drupal has been, it's quite a jump, and making the connection is not as simple as it sounds.

So far Drupal 8 looks stronger and more robust to me, and I kind of like its complexity, but this is no longer a tool for beginners. Perhaps that was the community's intention with the development of this version, but I wonder how easy it will be for new programmers to jump on this wagon.

Feb 10 2016
Feb 10

We've been experimenting with Drupal 8 for a while now, but this site is the first project we've put into production built on it.

For folks who've worked with Drupal 6 or 7 previously and are just getting started using 8, here are a few tidbits worth being aware of when getting started.

1. Sooo much contrib is bundled in core now.

Those who’ve been in the Drupal community for quite a while remember the "Small Core" initiative. The idea was to strip Drupal core down to a minimalist framework, then use distributions to provide bundles of modules and configuration that would make Drupal a useful application in a particular scenario. Drupal core itself would not be a usable product.

Small Core lost. Drupal 8 core contains major functionality that was previously provided by contrib modules, including Views, Views Datasource, Services, Entity API, Image Cache, Responsive Images, Breakpoints, Migrate, and WYSIWYG. Drupal 8 is a much more complete product out of the box than any previous version of Drupal.

A few features were removed from Drupal 8, most noticeably the blog module. It is easy to reproduce the blog functionality with the blog contrib module or even just by using the core components to add a custom content type and views to display the content. The latter approach is what we decided to do here.

When Drupal 7 was released, many developers got burned by how many contrib modules weren't ready yet. The inclusion of so many essential modules in core has significantly mitigated that issue, but there are still a number of "essential" contrib modules that aren't stable yet, notably token, pathauto, rules, and webform. There are prerelease versions of most of these modules in varying degrees of stability, and certainly trying them out, and contributing feedback, patches, and bug reports, is the best way to help them get to a stable release. But sitebuilders under tight deadlines should check their requirements carefully and verify that any contrib modules they need are stable enough to use before promising complex functionality in Drupal 8 right now.

2. The authoring environment is massively improved.

The lack of a consistent author environment and authoring tools has, for years, been one of Drupal's biggest weaknesses when compared to other CMSes like WordPress. Developers love Drupal, but authors usually prefer WordPress.

A major effort was put into Drupal 7 to improve the authoring experience (UX), which Drupal 8 builds on. The seven administration theme is responsive now. Dropbuttons have been introduced for more economical use of screen space and to prioritize primary actions.  The CKEditor WYSIWYG editor is tightly integrated with core, providing a huge gain in consistency between one Drupal 8 environment and another.

Quickedit in-page editing is a whiz-bang feature of Drupal 8.  

Using QuickEdit while writing about QuickEdit

Did I mention the improvements to preview? Node previews that actually render as they will appear reduce the guesswork previously required when authoring content containing layout code or different typefaces from the admin interface.

3. Drupal 8 is a Symfony application

Symfony logo

DrupalCon Denver is where I first heard about the plan to build Drupal 8 with Symfony components. I’m not sure whether my recollection is flawed or if I just didn’t understand what I was hearing, but the move to Symfony didn’t sound like too big a deal: just a few minor changes to the plumbing, the inclusion of a Kernel and HTTPRequest and HTTPResponse classes that would improve testability. I recall a neighbour at one of the talks telling me it sounded like ASP to him.

The move to Symfony is a very big deal. Drupal 8 is a near-total rewrite of Drupal, with almost all core functionality now provided through services and dependency injection, and thus overridable. Interfaces and routing, classes and annotations: all idioms common in the Symfony world have been adopted in Drupal 8.

What is familiar to a Drupal developer who hasn't worked with Symfony? Well, the core business objects like nodes, blocks, taxonomy terms, and fields still exist, though they are implemented quite differently. The hook system isn’t too different either, but be warned that what you receive in an array of parameters is quite different. And the Form API, though not totally unchanged, is one place where the Symfony solution was rejected, though perhaps not outright: there is an active discussion on drupal.org right now about whether to integrate the Symfony form component into Drupal 9.

4. API Docs and Change records are your friend

The documentation team has worked incredibly hard to keep the documentation up to date with the changes that have occurred. Api.drupal.org has fantastic information about the classes and concepts you’ll encounter now.

One big issue is that many things have new names now. This is where the change records are essential. Something like “menu_get_object” from Drupal 6 or 7 has been replaced. Searching for an old function in the change records will often bring up the exact code snippet you need to update your code.

5. Config management

Config Management

The configuration management initiative is one of the biggest changes in Drupal 8. All configuration, whether it be a view, a content type, or the site title, can now be managed by importing and exporting yml files, either through a UI or via Drush. The Features module, which was the best method of synchronizing configuration across development and production sites in D6 and D7, is considerably less essential now.

All the nitty gritty details of configuration management are beyond the scope of this introduction, but I will share a few useful tips that we learned in development.

  • Start your local dev site with a config export and settings file from your staging site. You need to make sure the hash salt in your settings.php and the UUIDs in your config files are in sync, otherwise you will not be able to synchronize the configuration across sites.
  • Move the config directory out of sites/default/files. Not absolutely necessary, but it is becoming a best practice to move the config out of the site directory. It is an easy enough change to make and increases site security considerably.
  • Only commit production config to master; merge dev config in from branches. Config management does not include a method of identifying and merging changes between two different config directories, or changes that were made in one branch but are absent in another: it is up to your source control system to handle that. If you start committing development config to your master branch and then expect to export changes from production before importing the dev changes, you will be in trouble! It doesn't work like that. The better method is to commit to your dev branch, then export production config to master and commit. Next merge dev into master, and finally import the merged config into production (see the rough command-line sketch after this list). Confusing? It is a bit, and the diagram on this Pantheon blog post doesn't make it less so. The short version is: just don't commit anything but production config to master and you'll usually be alright.
  • It isn't always obvious what will be considered config and what is content. This has been an ongoing problem for Drupal developers to wrap their heads around. For example, block placement is configuration and thus stored in yml, but block content is not. It mostly makes sense once you get in there, but undeniably there is a bit of a learning curve here.
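
Here is that workflow reduced to a very rough command-line sketch, using the drush cex/cim commands listed below; branch names and the config directory are illustrative, not a recipe to copy verbatim:

# On the production site, export its active config and commit to master
drush cex -y
git checkout master
git add config/ && git commit -m "Production config export"

# Merge the dev branch's config changes, then import the merged result on production
git merge dev
drush cim -y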

As I mentioned, the Features module is still available; it looks like it will come into play for bundles of configuration that need to be pushed live jointly but independent of an entire site's config. Configuration management best practices are still evolving, however, so it remains to be seen exactly how wide Features for D8 adoption is or if other tools emerge.

6. Drush and the Drupal Console

Drush is still around and as essential as ever. A few key new commands in Drush 8:

drush cr - Cache rebuild, the replacement for drush cc all in Drupal 7

drush cim - import config, similar to drush fra in 7

drush cex - export config, similar to drush fu in 7

drush cedit - edit the config yml in your editor. This can be really handy sometimes.

Drupal console output

The Drupal Console is a new option for command line users. It is based on the Symfony Console and an extremely exciting new arrow to have in our quivers.

Drupal Console can do many of the things Drush can, but the focus of this project is a bit different from Drush's. Most significant is the effort in making Drupal Console generate pristine boilerplate code. That may not sound like much, but given the adoption of OO, PSR-4 namespaces, and autoloading, being able to generate a block plugin with all of the correct annotation, namespacing, and file placement in a few keystrokes is a huge timesaver. It also makes becoming productive in Drupal 8 much less daunting: you don't initially need to understand all of the changes that have taken place. Instead, let Drupal Console take care of registering your new plugin so you can focus on just the logic you need to deliver. Later you can come back and learn about what the annotation means, what the parent class does, and so on.
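
For example, scaffolding a module and a block plugin is just a matter of running the generate commands and answering the prompts (your module and plugin names will obviously differ):

drupal generate:module
drupal generate:plugin:block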

7. Managing your build

We usually use Drush Make files to manage our builds, including contrib modules, patches, and libraries.

Traditional Drush makefiles still work in Drupal 8. They have also introduced a makefile in .yml format.

Embracing a trend that has spread across the wider PHP world, Composer can also be used to manage your Drupal install, and it is already being used by Drupal itself to manage dependencies. There is good documentation on how to use Composer to manage your install on Drupal.org, making use of the Drupal Composer/Drupal Project template and Drupal's Packagist host.
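
If you want to give the Composer route a try, the Drupal Composer project's template boils down to something like this (the project folder name and the contrib module are placeholders):

composer create-project drupal-composer/drupal-project:8.x-dev my-site --stability dev --no-interaction
cd my-site
composer require drupal/token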

Whether this approach will take off or whether Drupallers will stick with Drush again remains to be seen.

8. The theme layer is now in twig.

Another element that comes to Drupal from Symfony, Twig has been adopted for templating.

Twig

Twig is easy to use and significantly more secure than PHPTemplate was. It also introduces a new template extension model to Drupal, which I'm quite excited about. I suspect it is going to take a while for existing Drupal themers to embrace this model, but folks who come to Drupal from Symfony or Django will be pleased to have extendable templates available to them. And there are promising signs that some contrib themes are starting to embrace it.

9. Theme debugging tools now in core.

In Drupal 8, you can enable theme debugging in your services.yml file. Once you do, you'll see output like this:

Theme debugging

with all the candidate templates listed. The template that was actually used is indicated with the x rather than the *.  

One thing to watch for is that with theme debugging enabled some JSON calls break.  

You know what is extra cool? Theme debugging was backported to Drupal 7 too. Try drush vset theme_debug true some time and you'll see similar output there.

More advanced debugging still requires the devel module. {{ kint(var) }} is my favourite Twig debugging snippet. The arrays are much more deeply nested than what you are used to from previous Drupals, so be sure to give PHP enough memory! Also, enable auto_reload and disable template caching in your services.yml file before debugging front-end code; otherwise you need to clear the cache on every page load.
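
For reference, the relevant bit of services.yml ends up looking something like this (a sketch of the three settings mentioned above):

parameters:
  twig.config:
    debug: true
    auto_reload: true
    cache: false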

10. A new approach to release management promises more rapid improvements in the future.

Drupal release schedule

No longer will we have to wait five years for any significant changes in core functionality!  As is explained in the release cycle overview on Drupal.org, "Starting with Drupal 8.0.0, Drupal core releases will move to a new release cycle schedule" with the intent being for minor releases to come out twice a year. Drupal 8.0 has been out since October, thus we are only about two months from the release of 8.1.0. A beta release of 8.1 will be available in less than a month!

8.1 is going to add, among other things, improvements to migration support from Drupal 7 and the migration UI. It appears BigPipe will also make it into 8.1. Will there be upgrade issues for site builders or compatibility issues for contrib modules moving from one minor release to the next? Will poorly maintained sites end up stranded on minor releases? Again, time will tell, but it is exciting to think there will be significant improvements to core available to us a couple of times a year.

Aug 19 2013
Aug 19

Will we see you at the Pacific Northwest Drupal Summit 2013 in Vancouver?


PNWDS Vancouver 2013

Hi, we're Affinity Bridge. You might remember us from such PNWDS 2012 sessions as Mapping and GIS on the server side in Drupal, and Going further with D7 Organic Groups. Today we're here to tell you about the 2013 Pacific Northwest Drupal Summit, and why you, fellow Drupal professional, should be in Vancouver for it on October 5th and 6th of this year. As Cypress sponsors of the Summit, and founding sponsors of the PNWDS, we're excited to tell you more about what's in store for the Summit.

Whether you'd like to take in some sessions, attend Birds of a Feather gatherings, attend or join a panel, or socialize after hours, and whether you are an expert Drupalist or new to Drupal, the Summit will have something for you.

Registration is now open, and session proposals from registered participants will be accepted until August 24th, so register now if you want to submit a session proposal.

Here are the sessions we're hoping to present this year:

Once you're registered you can vote on which sessions you'd like to see (wink wink, nudge nudge).

If you're still not convinced, learn more about the summit before you register, and submit your sessions before the 24th. Keep an eye out for us when you arrive; maybe we'll see you there!

Feb 06 2013
Feb 06

Overview

In January, our team worked in partnership with ProMotion Arts to launch a conference website for the Bill & Melinda Gates Foundation. The Gates Foundation is currently very active in examining the U.S. school system looking for effective ways to leverage its strengths. Our team had the opportunity to participate in building online tools to help stakeholders connect at the annual Teaching is Learning 2013 conference (they call it their "convening"). 

As a front-end developer, this project was interesting because we were building a fully responsive website in a short time frame. This meant we needed clear mobile-first designs with clear layout guidelines. 

The design we received met the requirements: it was clean, fit into the 960 grid, provided a style for every display layout (mobile, tablet, and screen), and considerable thought went into how each block would flow in the adaptive theme. This significantly reduced the amount of CSS we needed to write and helped us style "general elements" instead of focusing on "singular custom blocks". Mega props to Brandon, who designed the site!

Rubber Meets the Road

The rest of this blog post is meant for theme designers or anyone interested in learning about techniques for responsive layout. It gets pretty technical.

So how did we create a responsive theme ready for production in a week? Let me tell you. 

To accomplish this we had to work fast and be confident that our tools would work. By letting Omega 3 and Display Suite 2 take care of the responsive layout, I could concentrate on the creation of the custom theme, where I used Compass for my style sheets.

To accommodate the use of two responsive slideshows on the front page (one for images, one for Twitter) I used Flex Slider. This automatically resized our images and block sizes for each responsive layout without us having to resize anything ourselves. We did have to add a small amount of custom code to have the sliders shift out of sync (in sync they looked a little funny).

Responsive Main Navigation

The main navigation menu was a bit of a challenge. For the mobile layout the menu had to display in a clickable dropdown style. To add some spice to the recipe, we wanted to include a breadcrumb in the mobile navigation as well.

In the diagram above, you can see the menu in the various responsive layouts (the top is the normal screen size). By using Menu Block 2 we were able to display the 2nd level of the navigation in a separate region when needed (normal screen size). I decided to keep the full HTML menu structure intact and use it to my advantage in the mobile layout, where some jQuery magic was needed to complete the metamorphosis. The expanded menu is just a simple inline menu. Below, you can see the Compass and jQuery code I created for the mobile menu.

Compass code


nav.compact-menu {
  background: url("../images/arrow-down.png") no-repeat 98% 14px $dirty-white;
  border: 1px solid $light-black;
  
  &.expanded {
    background-image: url("../images/arrow-up.png");
  }
  
  > h2 {
    @include adjust-font-size-to(14px);
    @include trailer(0);
    background: url("../images/tools.png") no-repeat 6px center;
    font-weight: 700;
    margin: 4px 0;
    padding: 0px 4px 12px 38px;
    position: relative;
    text-transform: uppercase;
    
    &:hover {
      cursor: pointer;
    }
    
    .mini-breadcrumb {
      @include adjust-font-size-to(10px);
      bottom: 0px;
      display: block;
      color: $light-black;
      font-style: italic;
      font-weight: normal;
      left: 38px;
      position: absolute;
      text-transform: none;
    }
  }
  
  > ul {
    display: none;
  }
  
  ul {
    list-style: none outside none;
    margin: 0;
    padding: 0;
    
    li {
      list-style: none outside none;
      margin: 0 !important;
      padding: 0;
    }
    
    a {
      border-top: 1px solid $dark-gray;
      display: block;
      padding: 5px 10px;
      
      &:link,
      &:visited {
        color: $light-black;
      }
      
      &:hover {
        background: $secondary-link-color;
        color: $dirty-white;
      }
      
      &.active-trail {
        font-weight: 600;
      }
    }
    
    ul {
      margin: 0 10px;
      
      a {
        padding: 2px 10px;
      }
    }
  }
}

jQuery code


/*
 * Responsive menu
 */
var bmgf_responsive_menu = function() {
  // Main menu
  // TODO: Improve this section
  //       This should have been done on php and not here, but I wasn't able to find a nicer solution
  if ($('#region-menu nav > h2').length == 0) {
    $('#region-menu nav').prepend('<h2 class="element-invisible">Main menu</h2>');
    $('#region-menu nav > ul').attr('id', 'main-menu');
  }
  // End of TODO
  if (Drupal.omega.getCurrentLayout() == 'mobile' || Drupal.omega.getCurrentLayout() == 'narrow') {
    $('#region-menu nav').addClass('compact-menu');
    $('#region-menu nav > h2 .mini-breadcrumb').detach();
    $('#region-menu nav > h2.element-invisible').removeClass('element-invisible');
    $('#region-menu nav > ul').removeClass('inline clearfix').hide();
    // build breadcrumb
    var breadcrumb = '',
        active_trail = $('#region-menu nav > ul').find('li a.active-trail');
    for (var i = 0; i < active_trail.length; i++) {
      breadcrumb += $(active_trail[i]).text();
      if (i != active_trail.length - 1) {
        breadcrumb += ' > ';
      }
    }
    $('#region-menu nav > h2').append('<span class="mini-breadcrumb">' + breadcrumb + '</span>');
  }
  else {
    $('#region-menu nav').removeClass('compact-menu');
    $('#region-menu nav > h2').addClass('element-invisible');
    $('#region-menu nav > ul').addClass('inline clearfix').show();
  }
  
  $('#region-menu nav > h2')
    .unbind('click.bmgf')
    .bind('click.bmgf', function() {
      $(this).siblings('ul').slideToggle();
      $(this).parent().toggleClass('expanded');
    });
}

Remember that on-screen hover events need to be replaced by click actions for touch-screen devices.

Finishing the Fiddly Bits

Some other quick fixes included the callout boxes at the top of the content pages - these are the green boxes that say "Learn everything you need to know about...". Now, I know you shouldn't use tables to control your layout, but this is an occasion where it was the optimum solution for us. By using a background image I was able to control the size of the displayed icon at each responsive layout size.

One of my biggest challenges was to make these pretty boxes below. When you hover over the boxes the background colour changes and the entire box can be clicked. Nice and intuitive functionality, but a small challenge to implement (see code sample below). 

There is no JavaScript here, since HTML5 allows anchor tags to have block and inline elements inside. Something like <a href="#"><h3>I'm an h3 tag</h3></a> is now completely acceptable; however, the WYSIWYG editors haven't caught up just yet. Unfortunately, removing the WYSIWYG editor was not an option for us, so we simply needed to disable it when saving these blocks.

Wrapup

As I've said before, the mobile Internet is here to stay. Building fully responsive themes is a logical and necessary progression in our theme development. Working with the right tools, like Omega 3, Display Suite 2, and Compass for my style sheets, can make the process of developing responsive layouts easy and fast. However, to be successful, the key ingredient in our secret sauce is an awesome website graphic design that considers mobile first! This makes all the difference.

Oct 24 2012
Oct 24

The problem

Imagine never having to click around your website after a site update, worrying that something may have broken. Imagine never getting a call from a client after a site update, telling you that something in fact has broken.

What if instead you could script all the actions that you would normally do by clicking and have those automatically run each time you push new code to your code repository?

All this is possible using Behat, Mink, Drupal Extension, and Travis CI.

This past week I've spent some time creating a proof of concept test suite for a Drupal distribution. I began with the Drupal 7 standard install profile. This will be a walkthrough of the additions I made to add testing to the distribution.

The code

Follow along with the code in my Classic GitHub repository.

The tools

Behat

Behat is a PHP framework for Behaviour Driven Development (BDD). BDD allows for tests that describe the behaviour of a website to be written in English.

Here's an example test (written in the Gherkin syntax) that Behat understands:

Scenario: Logging into the site
  Given I am logged in as a user with the "authenticated user" role
  And I am on "/"
  Then I should not see "User login"

This test uses three step definitions: Given..., And..., and Then...

Mink

Mink is a PHP framework that abstracts browser simulation. For fast and simple queries we'll use the Goutte PHP web scraper. For testing that our ajax is working correctly we'll use Selenium.

Mink Extension

Mink Extension connects Behat and Mink. It also allows us to write our own step definitions.

Drupal Extension

Drupal Extension connects Behat and Mink to Drupal. It provides a number of step definitions that are useful for working with Drupal sites.

Travis CI

While optional, no testing plan is complete without continuous integration. Travis CI is hosted continuous integration that works with GitHub. It is free for open source projects and recently soft-launched its paid private plans. Simply enable testing for your GitHub repository in their UI and include a .travis.yml file in the root of your project, and each time you push your project to GitHub, Travis will run your tests and report back.

Tying it all together

Drush make files

First I copied the Drupal 7 standard profile into a new directory and initialised a git repository. Then I created the following drush make files:

  • build-classic.make
  • drupal-org-core.make
  • drupal-org.make

To test ajax I added the dialog contrib module and enabled it in classic.info. At first I tried testing ajax with the core overlay module but Selenium couldn't see the popup. It was able to see the dialog popup just fine.

Install dependencies

Next I created the folder tests/behat/ to store the tests as well as the needed test frameworks. The file composer.json is used by Composer to download and install all the dependencies we'll need.

The following commands will install the dependencies (run from the directory tests/behat/):

curl -s https://getcomposer.org/installer | php
php composer.phar install

behat.yml

The file behat.yml is used for configuring Behat.

FeatureContext.php

We've defined a custom step definition for testing ajax. It will wait either 2 seconds or until the element with the id #user-login-dialog has been loaded. We defined our context class in behat.yml, and it goes in the features/bootstrap/ directory.
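
A simplified version of that step definition might look roughly like this (the step wording is illustrative, and it assumes the context class has Mink's session available, e.g. by extending MinkContext):

/**
 * @Then /^I wait for the login dialog to load$/
 */
public function iWaitForTheLoginDialogToLoad() {
  // Wait up to 2 seconds, or until the #user-login-dialog element exists in the DOM.
  $this->getSession()->wait(2000, "jQuery('#user-login-dialog').length > 0");
}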

Test files

Our test files have the .feature extension and are placed in the features/ directory.

I've used tags to tell Behat to run one of the tests with the Drush driver from the Drupal Extension (using the @api tag), and another tag to run one of the tests with Selenium to test ajax (using the @javascript tag).
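
For example, a tagged scenario that needs a real browser looks like this (the link text and element id are illustrative):

@javascript
Scenario: The login form opens in a dialog
  Given I am on "/"
  When I follow "User login"
  Then I should see an "#user-login-dialog" element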

Selenium

The following commands can be used to download and run Selenium. You need to have Selenium running when you run these tests.

wget http://selenium.googlecode.com/files/selenium-server-standalone-2.25.0.jar
java -jar selenium-server-standalone-2.25.0.jar

Running the tests

From the tests/behat/ directory run ./bin/behat. Passing tests will be output to the screen in green. Failing tests will be red.

.travis.yml

The .travis.yml file tells Travis CI how to set up an environment and how to run the tests. The setup includes:

  • creating a MySQL database
  • downloading drush
  • running composer install
  • running drush make to create the Drupal codebase
  • installing the classic distribution
  • placing a drush alias file in the proper place
  • Using xvfb to simulate a window for the browser to run in
  • running the built in PHP web server
  • downloading and running Selenium
  • running the tests

Each time I push new commits to GitHub Travis CI will perform these actions and report back whether the tests passed or failed. Travis CI can be configured to only run tests on certain branches.
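
Pulling those pieces together, a stripped-down .travis.yml is along these lines; the real file in the repository does more (drush make, site install, xvfb, Selenium), so treat this as an outline only:

language: php
php:
  - 5.3

before_script:
  - mysql -e 'CREATE DATABASE drupal;'
  - cd tests/behat && composer install && cd ../..

script:
  - cd tests/behat && ./bin/behat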

Summary

There once was a time when we'd need to click around our sites each time new changes were made or modules were updated. Things break. Sometimes it would take hours, days, or even months before it was obvious that something had broken.

Testing with Behat, Mink, and the other tools described here has made it possible to script exactly what we would do when we were clicking around, and have those actions automatically performed before each deployment. What a time saver!

Sep 25 2012
Sep 25

Have you recently checked out your website on a cellphone, or a tablet, or another mobile device? You may have found yourself scrolling and zooming in and out in order to be able to read its content. All this headache can be avoided by adding a responsive design to your site, but what exactly is a responsive design?

What exactly is a responsive design?

Wikipedia defines responsive design as

...an approach to web design in which a site is crafted to provide an optimal viewing experience—easy reading and navigation with a minimum of resizing, panning, and scrolling—across a wide range of devices (from desktop computer monitors to mobile phones).

A nice technical definition, but what exactly does this mean? Well, imagine that we have the following set of devices: a nice big monitor with big resolution, a tablet, and a cellphone.

The ability to present the content in an elegant manner within the available screen space is what makes your design responsive. To do this you may have to resize the page layout, re-organize the content, and/or remove some content on your website. The image below shows an example of a generic website layout that looks different depending on the device or screen size.

Challenges

Browser support

Responsive design is achieved by using CSS3 media queries, a technique that is not supported by all browsers. However, this is not something to worry about, as all modern browsers work perfectly fine with it. Only Internet Explorer 8 and below fall short and display the default layout of your site.
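
A media query is just a guard around a block of CSS that only applies when its conditions match; for example (the class names here are illustrative):

/* Collapse to a single column on screens narrower than 740px */
@media all and (max-width: 740px) {
  .sidebar {
    display: none;
  }
  .content {
    width: 100%;
  }
}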

Content resize

With resizing your layout comes the need to resize your content, and images and videos need special treatment. For the images there are 2 possible solutions:

For videos, the only solution that I know of is the FitVids library, which can easily be integrated into Drupal with the FitVids module.
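
If you were wiring FitVids up by hand rather than through the module, the library itself needs only one call against a container element (the selector is illustrative):

jQuery(document).ready(function ($) {
  // Make any videos inside .region-content scale with their container.
  $('.region-content').fitVids();
});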

Layout adaptability

Here is where the Omega theme comes in very handy. Omega uses the 960 CSS framework to control the layout of your site and adjusts the grid size for 3 different layouts:

  • Narrow: For screens with a width between 740px and 800px.
  • Normal: For screens with a width between 980px and 1024px.
  • Wide: For screens with a minimum width of 1220px.

So what happens with devices whose screens are narrower than 740px? In these scenarios the grid is not used anymore, and most likely your content will display in a single column.

Educating clients

It is important that our clients understand the value of a responsive design and that building this needs to be incorporated into the website design budget. In the past few years we considered building with a responsive layout to be "future-proofing" websites; today we consider this baseline functionality. The vast majority of our clients' websites will be viewed on many different mobile devices; that being said, there are a handful of our clients' projects that do not require a responsive design because the web application is meant for larger screens only. 

In order to have an adaptive design we put mobile first. This means that the first design and content layout to be created is the one that is going to be seen on the narrowest screen. From there you design the layout to fit the widest screen. At the end you will have a design with 4 different ways to display the content of your site.

Our Solution

As you might have guessed, for us Drupal, Omega, Client-side adaptive image, and FitVids are the pillars of responsive website design. Obviously these are not the only tools that we use; however, so far this is the best combination that we have worked with. We are also experimenting with the Bootstrap framework but don't have enough experience with it for a conclusion yet.

Another neat ingredient that we use for content resizing and layout is Display Suite. By creating a template for our nodes and attaching it to the grid, we can control how the different fields are displayed. An image is worth a thousand words.

Allow us to put all this information into a visual example by way of a case study of a non-responsive site and one that it is responsive.

City of Vancouver

The non-responsive design means that on the various screen sizes the layout is simply scaled. On the mobile version of this design your website visitor needs to zoom in and pan in order to read any content. The mobile experience is cumbersome and can be frustrating to your visitors.

Open Media Now

In contrast, the responsive design layout adapts to the screen size of your visitor. This design always displays the content in a readable and easily navigable format. Even on the smallest screen of a mobile device the experience is simple and intuitive. The user on an iPhone is presented with a readable font size and simply needs to scroll down to read all the content.

You can appreciate the differences between both sites on the cellphone better in a magnified version comparing the two websites. 

What the Future Holds

Mobile Internet is here to stay. According to Canadian statistics there are 8 million mobile Internet subscriptions in Canada, and this number is expected to reach 15 million by 2015.

Projected mobile Internet subscriptions in Canada 2006-2015

Proactively developing our websites to be mobile-ready means that our clients are prepared for the continually evolving Internet landscape, and we always like to hear from them "my daughter loaded the site on her iPhone and it looked fantastic!" This always warms our hearts and fuels our passion for what the future holds. 

Aug 07 2012
Tom
Aug 07

We have several projects that involve processing large geospatial datasets (geo-data) and displaying them on maps. These projects present some interesting technical challenges involving the storage, transfer and processing of geo-data. This post outlines some of bigger challenges we have encountered and our corresponding solutions.

The challenge

In the past we have used the GMap and OpenLayers libraries and their equivalent Drupal modules on our mapping projects. They are effective solutions when you have a small or even moderately sized collection of entities containing some simple geo-data (points, lines, polygons) that you want to present as vector overlays on a map. Unfortunately, they tend to fall apart fast when you attempt to use them with larger datasets. There are two main reasons for this:

  1. Geospatial data can be large, particularly as we tend to encode it in text-based formats such as WKT or GeoJSON when we are sending it to a web browser. The larger the data, the longer it takes to transfer from server to client.

  2. The information being sent is raw data which means that the client needs to parse and process the data before rendering it on the screen. The more data there is, the longer this takes.

Making things worse, the geo-data is often sent at the beginning of the HTML document (via Drupal.settings or similar). Most browsers will wait until they have downloaded and parsed this data before they begin to render the rest of the page, increasing the delay.
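
To make the problem concrete, here is a rough sketch (not taken from any particular module) of how raw geometries commonly end up in Drupal.settings; the 'mymap' key, the $features variable, and the loader function are all hypothetical:

<?php
// Illustrative only: attach an array of features to the page as a JavaScript
// setting. Every geometry gets serialized into the HTML document, so the
// payload grows with the size of the dataset.
$features = mymodule_load_features(); // hypothetical loader returning GeoJSON-style arrays
drupal_add_js(array(
  'mymap' => array(
    'features' => $features,
  ),
), 'setting');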

As a result of the above, it doesn't take much to have a serious negative impact on page load times and little more to actually crash your visitor's browser.

Heavy lifting server-side

A good solution to these issues is to process and render the geo-data as image tiles on the server. Tiles can then be cached and served to the client when requested, and the data is only re-rendered when it changes instead of each time the page is loaded. Bandwidth is also reduced as the image tiles are relatively consistent in size regardless of the complexity or amount of data used to produce them.

As a demonstration we have created two maps containing some sample road data:

I recommend testing these examples in a variety of browsers as their performance varies on the different platforms - particularly for the first example.

There are several components involved in a server-side tile rendering pipeline. They can be loosely categorised under storage, rendering, and caching.

Storage

Geo-data can be stored in a variety of places and formats, each with its own advantages. Here are some that are common:

ESRI Shapefiles

ESRI Shapefiles (commonly known as shapefiles) are a popular file format for storing and transferring geo-data. They consist of a .shp file, often bundled in a zip file with a collection of other files containing related information.

Well known text (WKT) & GeoJSON

WKT and GeoJSON are formats used to encode geospatial data in plain text, making them convenient to read and parse at the expense of increasing file size.

GeoJSON is a relatively new format. As it is just JSON and therefore easily parsed in Javascript, it is an increasingly popular format to use when passing raw data to browser-based clients.
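
To get a feel for the two formats, here is the same point written as WKT and as GeoJSON; the coordinates are just an example, and the PHP below simply builds the GeoJSON structure and encodes it:

<?php
// The same point in both text-based formats.
$wkt = 'POINT(-123.1207 49.2827)';

$geojson = json_encode(array(
  'type' => 'Feature',
  'geometry' => array(
    'type' => 'Point',
    'coordinates' => array(-123.1207, 49.2827), // lon, lat
  ),
  'properties' => array('name' => 'Vancouver'),
));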

PostGIS

PostGIS is a spatial database extension to the PostgreSQL database management system. The relational database gives you the ability to index, query, and manipulate your data with SQL and an extensive API of geospatial functions.

In Drupal it's common to store your data in fields attached to entities using the Geofield module; however, the data is stored as WKT in a column of type LONGTEXT, which is not very flexible compared to PostGIS.

We have therefore developed Sync PostGIS which allows site developers to flag entity types with geofields to have their data mirrored in a PostGIS database. The source data in Drupal's main database is retained and treated as best-copy, but all changes (insert, update and delete) are reflected in the PostGIS version. This gives us the ability to utilize PostGIS's rich geospatial features on our Drupal-managed geo-data!
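
As a hypothetical illustration of what that buys you, here is a sketch of querying the mirrored data directly from PHP with PDO. The table and column names are invented for the example - check the schema Sync PostGIS actually creates before borrowing any of this:

<?php
// Find entities within 5 km of a point using PostGIS geography functions.
// Table and column names are made up for illustration.
$pg = new PDO('pgsql:host=localhost;dbname=geodata', 'user', 'password');
$stmt = $pg->prepare(
  "SELECT entity_id
     FROM node_geofield
    WHERE ST_DWithin(
      geom::geography,
      ST_SetSRID(ST_MakePoint(:lon, :lat), 4326)::geography,
      :metres)"
);
$stmt->execute(array(':lon' => -123.12, ':lat' => 49.28, ':metres' => 5000));
$nearby = $stmt->fetchAll(PDO::FETCH_COLUMN);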

Rendering

Once we have our raw geo-data stored somewhere we need a method of converting it into the images that we will display on our maps. Mapnik is an excellent tool for the job.

Mapnik

Mapnik is an open source C++ library designed to generate map images from a variety of data sources and configurable style rules. Language bindings are available for Python and Javascript (Node.js) as well as an XML-based stylesheet format.

TileMill

TileMill is a desktop application for creating web maps. It is developed by Development Seed to complement their MapBox service. Powered by Mapnik and Node.js it allows users to define style rules using a CSS-like language called CartoCSS. With each change, the rules and data sources are passed to Mapnik and a preview map is rendered giving immediate feedback.

TileMill's main output is a set of rendered tiles packaged in the MBTiles format. However, it can also be used to generate a Mapnik XML stylesheet which can be passed to Mapnik by other applications to render tiles.

MapBox has a great collection of resources to get you up and running with TileMill. I recommend starting with their crash course.

Caching

So far, we have resolved the bandwidth issues discussed at the beginning of this post by rendering our data into tiles on the server with Mapnik. This also relieves the visitor's web browser of the strain of processing large amounts of raw geo-data. However, generating tiles on the server is also a resource-intensive process; depending on the area and zoom levels you wish to cover, rendering a set of tiles at once can take anywhere from a few seconds to more than a week.

Obviously we don't want to be rendering tiles from scratch with every request. Instead it is much more efficient to cache the tiles somewhere after they have been rendered and serve requests directly from the cache, only resorting to rendering when a cached tile doesn't exist. There are many ways to cache tiles on your server. Here are some methods that we use:

MBTiles

MBTiles is a file-format specification pioneered by Development Seed. It is essentially a SQLite database containing a whole set of rendered map tiles. Known as tilesets, these files are portable and lightweight and can be generated by TileMill. They are great for caching base layers or layers comprised of data that doesn't change frequently. However they require tiles to be rendered in advance, making them less useful for maps covering large areas and zoom levels, or data sources that often require updating.
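
Because an MBTiles file is just SQLite, reading a tile back out of one is straightforward. The sketch below uses PDO with the column names from the MBTiles spec (zoom_level, tile_column, tile_row, tile_data); the file path and coordinates are hypothetical, and note that the spec stores rows in TMS order, so the Y coordinate may need flipping depending on your tile scheme:

<?php
// Fetch a single tile image from an MBTiles file.
$db = new PDO('sqlite:/path/to/roads.mbtiles');
$stmt = $db->prepare(
  'SELECT tile_data FROM tiles
    WHERE zoom_level = :z AND tile_column = :x AND tile_row = :y'
);
$stmt->execute(array(':z' => 12, ':x' => 655, ':y' => 2757));
$png = $stmt->fetchColumn(); // raw image blob, ready to serve with an image/png header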

File system

Map tiles are individual image files, usually 256x256 pixels in dimension and rendered in a compressed image format such as .png. In most situations storing them directly on a file system is satisfactory.

Memcache

If you are expecting a lot of requests concurrently, you may want to avoid the file system and cache tiles in memory. Memcache or similar systems are made for this task.

All together

There are plenty of options available for tile servers including TileCache, TileStache, TileLite, TileStream and mod_tile. We have been using TileStache as it has an excellent balance of features and simplicity.

TileStache

TileStache is a server application that handles requests for tiles, serving and caching tiles generated by Mapnik or other rendering providers. It's implemented in Python and designed to be extended with a solid plugin system.

Out of the box, its features include:

  • Rendering Mapnik maps
  • Serving MBTiles tilesets
  • Caching tiles to file system, MBTiles, Memcache or Amazon S3
  • Compositing 'layers' into single tilesets

The compositing feature in particular is very powerful. In TileStache's configuration you define a set of 'layers', each layer being a different tileset and effectively its own map. You can then define composite layers, which are new tilesets made up of other layers stacked on top of one another. This allows you to do things like combine a pre-rendered tileset stored in an MBTiles file with a tileset of features stored in PostGIS, and serve them to your visitor's browser as one flat set of tiles.

Shifting constraints

The range of tools and techniques described provides plenty of flexibility when we are working on mapping projects. It is all achieved without wasting bandwidth or bogging down our visitors' machines with redundant computation.

Previously we had a strict upper limit on the amount of geo-data we could manage and serve, based on the limits of the network and our visitors' hardware. As evident in this final example, our challenge now is deciding how much data we can fit into our maps without sacrificing their readability.

Mar 13 2012
Mar 13

DrupalCon Denver 2012 - I'm Going!

It's that time of year again! Next week, DrupalCon descends on Denver so we Drupallers can teach each other cool things, and get some well needed face time with our friends and collaborators. I'll be representing AB solo this time round, so don't be shy if you want to come chat about what we've been up to lately.

It's hard to believe, but this will be my 7th DrupalCon since spring of 2008, and it'll be a pretty different one for me. Late last year I stepped down from my position as Drupal Documentation Co-Lead, and have been taking some time to reset priorities and goals for myself careerwise. I'm not giving any sessions, or hosting any sprints or BOFs - for the first time in a while, I'm going to be purely *gasp* an attendee. It will be a welcome change of pace to just be able to decompress and soak it all in.

Where the week will take me will be a surprise as always, but I am hoping that I'll have time for a good mix of technical and project management-related sessions. I used to go to a lot more intermediate to expert level developer-oriented sessions (even ones well over my head), as a way to learn a bit about new technologies that the team has been interested in, or to bring back to them. But lately I've been attending almost exclusively project and client management sessions to help hone my own skillset. I've picked out a good mix for my schedule this time round, as there are a lot of interesting new things going on that I'd like to wrap my head around.

As always there are too many good sessions overlapping (and also some Drupal Community ones I'd like to try and get to), but overall the sessions I'm really looking forward to are:

Project Management

Not a lot of PM track sessions this year but these all sound like great topics - I'll try and get to all three of them plus the one BOF on the schedule (Project Management / Ops BoF). 

Business Development/General Interest

At AB we work with a lot of Non-Profit, Government, Academic, Research, and Social/Environmental Change groups. We've been thinking a lot about how to service them better by harnessing the power of Drupal Distributions, so these should be some thought provoking sessions.

Technical

There are always far more fantastic looking technical sessions than a (sane) person can try and attend! I've picked some sessions that are geared towards the kinds of technologies that we've been focusing on learning and working with lately at AB such as Node.js, Backbone.js, and mapping tools. Also looking forward to brushing up on some old friends: OG and Operations/Monitoring.

If it's your first DrupalCon...

I highly recommend a glance at my tried and true guide to your first DrupalCon: DrupalCon Abridged. Also, I hear that the combination of altitude and ridiculous dryness in Denver is a recipe for dehydration and sluggishness, so don't forget to pack a reusable water bottle, some kind of electrolyte replacement powder/goop, and a bunch of lotion/lip balm!

Hope to see you there!

Oct 13 2011
Oct 13

Affinity Bridge team at the PNW Drupal Summit

10/13/2011 - 13:43

Most of the Affinity Bridge team will be heading south to this year's PNW Drupal Summit (which rotates between Seattle, Vancouver, and Portland, and is in Portland this year). There will be two official sessions that our team members will be presenting that you might want to check out if you're coming to the conference:

Best practices for running a Drupal-based business (Panel)

10:10am - Saturday, October 15, 2011

Mack is going to be part of a business panel that will be addressing various questions and ideas related to running a Drupal based company. Topics they'll be discussing include: how to find good clients, how to find and keep a good team, pricing, and best practices. More details on the session page.

Using Build Kit & the Kit specification to build Drupal distributions

2:30pm - Sunday, October 16, 2011

Shawn will be doing a solo session on using Build Kit and Kit, and how to apply these when building distributions (particularly when using Features). If you're interested in improving your development methods, or learning how to build Drupal distributions, then this will be a great practical session to attend. More details on the session page.

Hope you can make it to the sessions if you're going to be at the Summit. Otherwise, watch for us at the conference and in the Hacker Lounge - don't be afraid to stop and say hello!

Sep 28 2011
Sep 28

A great opportunity arose a few months back when Tom Nightingale and I were each beginning work on client sites needing advanced search functionality: the Search API module, providing "a framework for easily creating searches on any entity known to Drupal, using any kind of search engine" was becoming more awesome and more stable by the day, and the FacetAPI module had just been announced, but there was no integration between the two.

The attraction to Search API, for the project I was working on at least, arose from some initial uncertainty around whether we'd be using Sphinx or Apache Solr for the backend, and from the fact that we definitely wanted to use a Views front-end for displaying search results.

Collapsible facets

As for faceting, both Tom's project and mine had "collapsible" facets in their designs, something that had already been done on a previous Affinity Bridge client site that used Searchlight on Drupal 6, but it had been implemented as part of the theme. FacetAPI module was offering, among other great features, the possibility of recreating these collapsible facets as a widget that could be re-used across multiple sites.

An issue had already been posted in the Search API issue queue about integrating with FacetAPI, so I jumped in and declared my eagerness to work on it. After several initial iterations of the patch, with tremendous support from Thomas Seidl, author of Search API, and Chris Pliakas, author of FacetAPI, the work was moved into a separate sandbox project where it was easier to collaborate on; and now, a mere 3 months later, the two modules are happily joined in API matrimony.

So what?

So, these are two incredibly awesome modules that you can now use together to create a really great search experience. While the whole is certainly greater than the sum of its parts, it is worthwhile extolling the virtues of each part. I should point out, however, that neither module has a full release yet - Search API being held up mainly by Entity API.

Search API

This is a beautifully architected module that abstracts out all of the concepts involved in the process of site search. Implementations for common backends, e.g. the database and Apache Solr, are provided as separate modules, and the admin UI allows you to add as many search servers and indexes as your site could possibly require (by index here I mean a definition of how your site content is to be indexed - and although each index is associated with a particular server, the latter can be swapped out). To illustrate, your site could have a faceted View page powered by Apache Solr, another one powered by the database, and a block in the sidebar on both of these pages powered by Xapian.

Each backend declares which features it supports, e.g. faceting, autocomplete or spellcheck; and all Search API needs to know is that it supports a given feature, not how it supports it (it is an API module, after all). One of the main things this meant for me, once I'd decided on Solr for a backend, was that contrary to my expectation to have to tweak the solrconfig.xml and schema.xml files (files governing the configuration and index definition, respectively, of the Solr instance) to get the exact behavior I needed, I could actually do everything via the UI - and everything I did via the UI could be exported to my search feature. Perfection.

FacetAPI

The FacetAPI module deals with the creation, management and display of facets for search results, independently of the search engine providing these results. Facets can be defined in different realms - a realm being a way of grouping facets together. The only realm that comes with the module is the block realm, allowing you to have each facet in its own block. For each facet you can decide such things as whether it uses OR or AND filtering, the maximum number of facet options to display, whether to display a facet link for documents that don't have a value for that field (the "missing" facet), which searches the facet should appear on, etc.

We have now converted the "collapsible facets" CSS and JavaScript from our old D6 theme into a FacetAPI widget, Facetapi Collapsible, and written another add-on module, Facetapi Block, which provides a new realm that puts all facet links into one block; the latter is still very much a work in progress.

Putting it all together

I've set up an install profile with a search feature that you can use to get up and running with Search API and FacetAPI very quickly. These instructions assume you are familiar with drush and drush make, and will help you build a brand new Drupal install with everything you need pre-configured, with the exception of the Solr instance. If this is your first time using Solr, don't worry - it's outrageously simple to set up and there are great instructions for doing so here - just do the bit under "Setup Apache Solr", no need to do the rest of it as it's all done for you in the searchtastic feature that comes with the install profile. So, from within a directory that will be the parent directory of your search site, run this little command:
% drush make --working-copy "https://raw.github.com/affinitybridge/d7search/master/stub.make" d7search
This will grab Drupal core and the install profile, which includes another make file which grabs all the required contrib (for more information on building sites this way, read Shawn's recent Build Kit Abridged post). Now cd into the newly created d7search directory and install the site with the following command (making sure to substitute your real db username and password for [username] and [password] respectively):
% drush site-install d7search --site-name="D7 Search" --db-url=mysql://[username]:[password]@localhost/d7search

Assuming you have set up a vhost for this site at d7search.local, login to the new site with the admin/admin creds and go to admin/config/search/search_api and you will see a server and an index set up. If you have solr up and running, then the "Solr Search" server should be able to contact it - click on its name to view its configuration and a message should be displayed at the top telling you that the Solr server could be reached.

Of course, we'll need some content on our site if we want to have anything to index, so go ahead and create some dummy content - from the command line, run:
% drush genc --types=story,page,article 100

Then go back to admin/config/search/search_api and click the index's "edit" link and then select "status" from the options popup. On the status page it should tell you that all of your site's content needs to be indexed. Click on the index button as many times as it takes to index your content.

To see everything in action, you can now just go to d7search.local/search and see a page listing your content, with a keyword search block and a facet block in the right sidebar. Play around with keyword searches and facets and you should see that it works nicely.

To see and alter the facet configuration, go back to admin/config/search/search_api and once again click "edit" beside the "Content Search" index and then select "facets" from the options popup. You will see the list of enabled facets. You can enable some more (the list displayed corresponds to the list of fields being indexed, which itself can be altered by clicking on the "fields" tab - just remember to re-index if you make changes there). Click on the "configure display" link for one of the enabled facets. Here you can change pretty much everything you could possibly want to about the facet's behaviour and display. For example, change the widget used to display the facet - select "links" for just a regular old list of facet links instead of the collapsible list.

With Solr as a backend, you can take advantage of some great search features that it supports, such as spelling suggestions, "more like this" and "OR"-based facets. Let's change the country facet to an OR facet. From the facet settings page, click the "configure display" link for the "Country" facet. Under global settings on the facet config screen, you should see an option to change the filter operator from AND to OR. Go ahead and make this change and click "Save configuration". Now, back on the search page, when you select a country to filter by, you'll still have the other options visible, and clicking on another option will mean seeing results where country is option1 OR country is option2.

There are tons more configuration options both on the Search API side and on the FacetAPI side, but hopefully this has given you a flavour for how great these two modules are, both individually and combined.

Aug 23 2011
Aug 23

Five years ago Dries blogged about distributions becoming an integral part of the Drupal ecosystem. He saw a future for Drupal that included tailored products for many different markets. Today there are a large number of distributions targeting many categories. In this post I'm going to discuss Build Kit and how it can be used as a platform for building and maintaining distributions and custom site builds.

Build Kit

The philosophy of Build Kit was introduced in the blog post by Development Seed, Features and Exportables on Drupal 7. As outlined in this blog post the components are:

  1. Every project is described by a makefile.
  2. Every project is an install profile.
  3. Build with exportable components and manage them with Features.

Describing the entire project in a .make file decreases maintenance time. In one place it is possible to view all the modules, themes, and libraries of a project and any patches that are in use.

Build Kit 7.x-2.x can be installed using a single line from your shell. Be sure you have drush_make installed and run:

drush make --working-copy "http://drupalcode.org/project/buildkit.git/blob_plain/refs/heads/7.x-2.x:/distro.make" buildkit

This will create a patched version of Drupal 7 core and will download a small number of contrib modules necessary for building sites with exportables including features, context, and strongarm.

Drupal 7 core patches

Build Kit includes the make file distro.make to keep track of Drupal 7 core patches that are necessary for building with exportables. Some of the patches that were tracked when Development Seed blogged about the project have since been committed to core. Remaining patches are:

Both of these patches are being worked on for the 8.x branch of core and are tagged needs backport to D7.x. Build Kit includes working 7.x versions of the patches. Any help towards getting these two issues committed will be greatly appreciated by those building distributions.

Creating your own distribution or custom site build

At Affinity Bridge we build each of our new custom site builds as if it were a distribution. Follow the Extending Build Kit steps in Build Kit's README.txt to see how to extend Build Kit yourself. Also take a look at a very simple distribution to see how we structure our files in version control (note the inclusion of .gitignore and rebuild.sh that help us when rebuilding the code base in place).

Adjusting to a new development methodology

When using a distro-oriented approach for development there are a number of differences that may seem counterproductive compared to the "old school" method:

  • sites/all/ directory remains empty. All contrib and custom modules, themes and libraries live in profiles/profilename/. Some of the paths that are important to ignore from version control are profiles/profilename/modules/contrib, profiles/profilename/themes/contrib and profiles/profilename/libraries. All projects within these directories are defined in the .make file and put in place when running drush_make.

  • Rebuilding takes a few minutes. This was the biggest adjustment for me. But after getting into the flow of using this approach it becomes easier to minimize rebuilds, only needing to do so when updating projects or introducing a new project.

  • Tracking much less in version control. At first it felt dangerous but soon we realized the extent of duplication happening. Why track all of Drupal core when you can track the line projects[drupal][version] = "7.7"?

Learn more at DrupalCon London

I'm organising a BoF presentation on Wednesday, August 24 at DrupalCon London, Understanding Build Kit and the Kit specifications. Along with Build Kit I'll also be discussing Kit which is a set of specifications for building interoperable Features modules and themes. Hope to see you there.

May 30 2011
May 30

This post is part of our Abridged series, which aims to explain the basics of some of the more ominous yet awesome Drupal projects in simple and practical terms. We hope these posts will help demystify some of these projects for people who have been hesitant to try them out!

Here, we'll take a look at Boxes module, including a review of its history within the Drupal project, the current state of the module, how to start using it, how we use it at Affinity Bridge, and some resources. Special thanks to Tylor who recently did a sitdown (team discussion/learning session) on Boxes module, and wrote the technical sections of this post.

Background

Boxes module is a Drupal project that was originally built by Jeff Miccolis from Development Seed. It's been around for quite a while, but many people don't venture into using it, largely because it's not clear upfront what the benefits are over core Blocks.

History

Jeff Miccolis summed up the history:

Earlier this week I added the Boxes module, which provides custom blocks that play nicely with Spaces and Ctools, to Drupal.org. I initially wrote the module for recent work we did on the World Bank's Open Atrium-based intranet, and at first I wasn't sure if it would be something that we'd want to maintain and release on Drupal.org, as it seemed rather project specific. However after using it on this project, it became apparent that we'll want to use the Boxes module on many projects and that the module could play a large role in making Open Atrium stronger out of the box.

It has since become part of the "Dev Seed stack" (ie. Features, Context, Spaces, Boxes, Searchlight, etc.), used by many teams who subscribe to their methodology of saving as much configuration in code as possible, and simplifying deployment processes.

Block vs. Boxes

Boxes is really a full replacement for block; as it says on the Boxes module page, "Boxes module is a reimplementation of the custom blocks (boxes) that the core block module provides. It is a proof of concept for what a re-worked block module could do."

That said, it's extremely similar to the core Block module but also gives you:

  • Exportability (configuration can be captured in features + deployed easily, module is dependent on CTools)
  • The ability to easily extend blocks and add custom fields (all you need is a delta/machine name and description, and then add new fields using the Drupal FormAPI).
  • The option to edit in place (and with WYSIWYG editor).
  • Solid integration with Context and Spaces, i.e. a block can be overridden and defined for each space, for example per group.

Unlike Block and hook_block, Boxes uses its own database table to automatically handle storage and retrieval of custom field values, rather than making you define it for yourself. Traditionally with Block, this is done with variable_set and variable_get or a custom table. Also, it's interesting to note that Boxes uses hook_block() and is simply building on top of what Drupal already provides for defining blocks.
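
For contrast, here is a minimal Drupal 7-style sketch of that "traditional" approach, where a block's custom setting is persisted with variable_set()/variable_get(). The module name, block delta, and variable name are hypothetical, and hook_block_view() is omitted for brevity:

<?php
/**
 * Implements hook_block_info().
 */
function mymodule_block_info() {
  return array(
    'promo' => array('info' => t('Promo block')),
  );
}

/**
 * Implements hook_block_configure().
 */
function mymodule_block_configure($delta = '') {
  $form = array();
  if ($delta == 'promo') {
    $form['mymodule_promo_text'] = array(
      '#type' => 'textarea',
      '#title' => t('Promo text'),
      '#default_value' => variable_get('mymodule_promo_text', ''),
    );
  }
  return $form;
}

/**
 * Implements hook_block_save().
 */
function mymodule_block_save($delta = '', $edit = array()) {
  if ($delta == 'promo') {
    variable_set('mymodule_promo_text', $edit['mymodule_promo_text']);
  }
}

With Boxes, none of that storage plumbing is needed - the options you define are stored (and exported) for you.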

When to use boxes

  • When you want to create a structured custom block with multiple fields.
  • When you want to create a block that will be used in a multilingual site (so you don't have to create one block per language).
  • When you want to create a custom block that displays data from an API, Searchlight, or gRaphael.
  • When you want your block configuration to be exportable.

When to still use block or hook_block()

  • When you want to create simple blocks with just a title and description, block module is very good at that.
  • When you want to create a block whose content is generated programmatically, is highly custom, doesn't need to be edited, and is only ever used once (or not easily generalized).

Current state

The Boxes module is now maintained by Frank Febbraro from Phase2 Technology, and is still in active development. It is one of the modules that is used in the Open Atrium Drupal distribution, which they now own.

How to use it

The easiest way to get familiar with Boxes is to look at the boxes_simple Box, which is included with the module. This custom box type is a replacement for Drupal's default blocks and simply provides a body field and text format (admin description, title, and machine name are always provided). You can create a new boxes_simple Box at Administer > Site Building > Blocks > Add box, and once created it will show up in the normal block list. The box can then be placed on a page using Drupal's block visibility (place it in a region and click 'configure' next to the box name to restrict where it shows up) or the Context module. The box will also show up on the Features module admin screens under the 'Boxes' section and allow exporting the configuration into code. If the box has changed, the feature will show up as overridden.

Extending

Unfortunately there is very little documentation on how to extend Boxes and create custom box types; it isn't too difficult, but there are a couple of things to keep in mind. Extending Boxes is done by defining plugins which extend the boxes_box base class (boxes are objects). To define plugins, you need to implement:

  • hook_ctools_plugin_api() - Tells CTools that there are Boxes plugins in the module.
  • hook_boxes_plugins() - Tells Boxes the box machine name (the key of the plugin array), human readable title, implementing class, parent class (usually boxes_box), class file name, and where the class file is located.

The easiest way to see how this works is to look at boxes_ctools_plugin_api() and boxes_boxes_plugins() in boxes.module and look at how boxes_simple is defined. The actual class plugin code to implement a custom box usually resides in its own file, for example boxes_simple is in plugins/boxes_simple.inc. This class extends the boxes_box class and implements three key methods:

  • options_defaults() - Tells Boxes the form defaults and what is stored in the database.
  • options_form() - Tells Boxes the form to use, which is built with the Drupal FormAPI. Defaults and existing values can be accessed from the $this->options variable.
  • render() - Return an array containing the delta (machine name), title, subject, and content of the block. This is returned directly to hook_block and follows the same format. Defaults and existing values can be accessed from the $this->options variable.

Once you've implemented these functions, the new box type will show up in Administer > Site Building > Blocks > Add My custom box.
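
To tie the pieces together, here is a rough sketch of a custom box plugin based on the hooks and methods described above. Treat it as a starting point rather than copy-paste code: the module and box names are hypothetical, and the exact array keys and method signatures (for example, whether options_form() receives a $form_state argument) can vary between Boxes versions.

<?php
// mymodule.module (sketch)

/**
 * Implements hook_ctools_plugin_api().
 */
function mymodule_ctools_plugin_api($module, $api) {
  if ($module == 'boxes' && $api == 'plugins') {
    return array('version' => 1);
  }
}

/**
 * Implements hook_boxes_plugins().
 */
function mymodule_boxes_plugins() {
  return array(
    'mymodule_icon' => array(
      'title' => 'Icon box',
      'handler' => array(
        'class' => 'mymodule_icon_box',
        'parent' => 'boxes_box',
        'file' => 'mymodule_icon_box.inc',
        'path' => drupal_get_path('module', 'mymodule') . '/plugins',
      ),
    ),
  );
}

// plugins/mymodule_icon_box.inc (sketch)

class mymodule_icon_box extends boxes_box {

  public function options_defaults() {
    // Form defaults; this is also what gets stored for the box.
    return array('body' => '', 'icon_url' => '');
  }

  public function options_form() {
    // A regular FormAPI form; saved values are available in $this->options.
    $form = array();
    $form['body'] = array(
      '#type' => 'textarea',
      '#title' => t('Body'),
      '#default_value' => $this->options['body'],
    );
    $form['icon_url'] = array(
      '#type' => 'textfield',
      '#title' => t('Icon URL'),
      '#default_value' => $this->options['icon_url'],
    );
    return $form;
  }

  public function render() {
    // Returned in the same format hook_block expects.
    $content = '<img src="' . check_url($this->options['icon_url']) . '" alt="" /> ';
    $content .= check_markup($this->options['body']);
    return array(
      'delta' => $this->delta,
      'title' => $this->title,
      'subject' => check_plain($this->title),
      'content' => $content,
    );
  }
}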

How we've used Boxes at Affinity Bridge

Multi-field blocks
We needed site admins to be able to create and update icons on blocks on a whim through the admin UI, so we created a custom box type with an image field. The blocks on the Teachers Without Borders (TWB) Get Involved page each have the same structure, but have custom fields for the icon, link, and link text so they can also be themed consistently.

Blocks on a multilanguage site
If you've had to configure blocks on a multilingual site, you know how painful it can be setting up a separate block for each language, then linking them together manually. On the TWB Groups site (built with Open Atrium), we had blocks that needed to be translatable into multiple languages. To solve this, we used Boxes to provide different fields for each language, which allowed us to expose that block content to the user in the right language context.

You can see this in action on the "Collaborate" block in the sidebar of the TWB Groups site - the English and Spanish versions display the exact same block, but show the appropriate language text field by checking the $language global on render. The editing form is straightforward: a single block configuration page with a field for each language.

Displaying Searchlight driven gRaphael content in blocks
On the Center for Health Market Innovations (CHMI) homepage, Development Seed (who originally built the site) extended the base box to contain configuration options for Searchlight-driven gRaphael blocks. The configuration includes a graph display type (pie, bar, line, etc.) and facet info (taxonomy term facets to show), and it counts and graphs the results. This configuration is all set in the blocks, and is fully exportable and integrated with Searchlight. We've extended this functionality to add new blocks on the Programs page sidebar.

You can read more about the use cases for Boxes on the Development Seed blog. A lot of the extensions that are seen on the CHMI site are also used on http://data.worldbank.org, and there's more detailed info on their implementation on these posts about building data browsers with Drupal and mapping public health data.

Resources

(Boxes Monster graphic used for homepage thumbnail for this post c/o Development Seed)

Mar 13 2011
Mar 13

At DrupalCon Chicago this past week, there was a "Core Conversations" session track, made up of sessions pitched by contributors to the core Drupal project. A wide range of topics were covered from the Butler project (a new system for context and blocks), to the built-in Help system, to Deployment strategies, to redesigning the issue queue. These sessions were shorter presentations followed by a discussion period for the attendees to give input on the topics.

The final conversation on Thursday by Dries Buytaert (Drupal's project lead) focused on discussing the development process for Drupal 8. (It was also exciting to witness the Drupal 8 branch being created live during the session!) During the presentation portion, Dries described in more detail the process changes he had suggested during his keynote. He then opened the floor up for everyone to bring up issues they felt needed some attention/discussion.

Discussion points

Here is the list of discussion points (that Dries noted during the talk) we core enthusiasts came up with:

  • More structured sprints - project management
  • Sandboxes (aka. Samboxes, since Sam Boyer was a huge contributor to Drupal's CVS to git migration) and locations
  • Timeline of release cycle
  • Hybrid development cycle
  • Ubuntu model?
  • Gate details; performance testing, etc.
  • Roles of initiative owners
  • Backporting
  • Non-initiatives / small changes / bugfixes -- different process?
  • Tools for usability/accessibility
  • Process around new initiatives / proposal
  • Documentation gate - workflows

Many, but not all points were discussed, and as we progressed through the conversation, I began to see parallels between some of the process changes we've implemented here at Affinity Bridge and what's going on in the Drupal development process. When I first began my position as Project Manager here, the first task I was assigned was to figure out how to make our development process work better, especially for larger, ongoing projects. This was sparked by the pain of working on very large projects, with huge issue backlogs and many feature requests, and no set launch date. 

After a lot of reading and research, our team started experimenting with a more agile process. Picking smaller chunks of work, completely finishing them, picking a new chunk of work, completely finishing it, etc. helps to make progress faster and at a better quality. Many times now, the benefits of this process have proven themselves, and I am beginning to see how some of the main points we discussed at the core conversation could be addressed with some incremental changes towards a more agile development process. 

A sidenote: as I was discussing this over the weekend, Bruno De Bondt from DeWereldMorgen.be pointed me at a blog post from one of his old coworkers from Krimson (a development shop in Belgium), Kristof De Jaeger (aka. swentel). The post is from about a month and a half ago, and outlines a proposal for an agile Drupal 8 development process. He goes into much more detail about the process and different roles involved than I will here, but it is well aligned with my thoughts (and potential long term vision), so I recommend you read it!

Specific ideas about sprints and agile for core

Before I delve into details here, I would like to preface this by saying I am not a religiously devoted agile follower. So far, what I've seen work is using elements of the agile methodology that fit for the team and project. Some devoted capital-A "Agile" followers might find that to be a flaw, but I don't think it's necessary to shoehorn people into a process. It's important that anyone adopting this feel comfortable with it, and not forced into it. Also, speaking briefly with Dries after the core conversation, he noted that it's important with such a large community not to try and undertake too much change at once (referring to the git migration and also the new idea of core "initiatives"). I very much believe those are wise words, and so am more keen to suggest some practices that the community can experiment with. If these are successful, then perhaps they can be applied more overarchingly.

Now, to go through a few of the main discussion points and others that came up, and relate them to how a more sprint-based/agile process might help address them.

New opportunites from sandboxes and git

The move to git is obviously going to open up many doors as far as more collaborative work and changes to workflows. Any Drupal.org user can now have their own development sandbox, and as Dries mentioned anyone can now fork core (ie. make a duplicate version and add improvements to it), though it's not always recommended! 

As many others noted in the session, this is going to allow multiple people to easily work on different parts of core and then merge their work back together. It could also allow multiple people working on different approaches to the same parts of core, to then compare and combine the best pieces. In any case, it will allow for a lot more safe experimentation and collaboration.

Keeping the criticals count below 15

This was something Dries brought up in both his keynote and the session. The Drupal 7 release cycle was very drawn-out because of the large quantity of criticals accrued through the three years of work. Longer release cycles tend to lead to burn-out and work that doesn't get fully completed.

Having the sandboxes and git will allow for a totally new approach here, one which is common to a more "agile" process, which prioritizes keeping the master branch clean and ready to launch at any time. To work like this, the master branch for core would be kept in a launchable state, and any changes or new work would be done in branches/sandboxes which would have to be completely done to be merged back into core. The "gates" would protect the master branch from bugs or incomplete work, so that we could be relatively certain that core is always ready to ship.

As a result, we wouldn't be aiming for under 15 criticals, but rather a constant of zero criticals in the master branch.

Small changes and bug fixes

Small changes and bug fixes would also be done in branches, but merged back in more frequently. There would be no necessity to save up batches of small fixes, though they could be done in batches related to different pieces of functionality. 

We would want to aim to avoid incurring technical debt, ie. fix problems as we go, as much as possible. Never delay working on bugs that will need to be fixed before launch. If that's the case, the work is not "done" and should not go into the master branch. This may mean more refactoring, but it helps keep the master branch ready to ship at all times.

Gates

The "gates" Dries talked about are still being defined, but the general idea is that there would be some steps that need to be completed before any branch would be merged into the master. Examples of what gates there might be include:

  • Documentation: making sure all of the code/API documentation is complete/updated, and ideally make sure the online documentation is also complete/updated. Possibly could go through review by Documentation Team members or other developers.
  • Accessibility: making sure that basic accessibility standards are met. Possibly could go through review by Accessibility Team.
  • Testing: making sure automated tests are written and pass. Possibly could go through manual testing as well.
  • Performance: making sure performance meets certain standards.
  • Design/UI: having design or usability reviews.

Making this concept of "gates" work will require defining a set of requirements and standards, and possibly finding people willing to do frequent reviews. But it will also mean that this additional work is really done before merging into the master branch, and that "done" refers not only to code being complete, but a more holistic interpretation of when work is ready to ship.

Timeline of release cycle

Dries suggested a shorter release cycle for Drupal 8, with likely a more focused and smaller set of initiatives (which he's outlined as areas of focus that will have initiative leaders). Keeping the master branch ready to ship at all times will mean that the release cycle can be as short or long as we want, and that we are not limited by half-finished functionality in the master branch (since only finished functionality would ever be merged in).

More structured sprints and phases

A lot of these goals and ideas lend extremely well to working within more structured sprints, and using phased development for larger initiatives. My initial suggestion (Kristof's post suggests two month sprints) would be for one month sprints. Despite the issue of working with a spread out team of mostly volunteers, I find that shorter sprints lead to better momentum. I also feel that it will be easier to pick out clear sets of issues to work on per sprint if they are relatively short.

Another benefit to shorter sprints would be the potential to attract more help from people who aren't interested/able to work on core more consistently. It's a lot easier for someone to commit to working on some functionality for a month rather than for three years. I would bet that once someone helped with one sprint, they would be far more likely to end up helping on another one down the road.

For larger pieces of work, we'd want to work in phases that are several sprints long. Each phase would have a set of functionality that can be merged with the master branch when complete. And the end of each phase would be a great place to work through the "gates". It would be nice to think we could go through the gates at the end of each sprint, but I don't think we currently have the resources to do this. It might take many phases to get an initiative fully complete, but the length of the initiative wouldn't necessarily delay a potential release.

A note on infrastructure...

Structured sprints would be much easier to do if we were able to add "sprint" and/or "phase" fields to the issues, but even without this we can always organize what is in each using tags, as was done with the git migration.

It would also be ideal to be able to relate issues in a richer way like in some issue tracking systems such as Unfuddle (see image below). Despite Unfuddle not being created specifically for agile development, it's easy to use it this way. I tend to create "parent" tickets for meta-issue/discussions and then create specific work item tickets as "children". You can also mark "related" issues, which aren't part of the batch of tickets, but are somehow related. And you can have parent tickets that have child tickets that are parents to others, so it's possible to create a hierarchy of what needs to be done to move onto another piece of work. Then, I create a "Milestone" for each sprint, with start and end dates, and add and prioritize tickets in the milestone appropriately after discussing priorities with the product owner and the development team.

If this interests you, there are issues filed for redesigning the issue queue and creating functionality to support meta-issues that you can add your opinions to.

Project management

This all seems to beg the question: does the Drupal project need a dedicated project manager (or scrum master)? 

At this point, I would say no. Between Dries (who tends to fill the product owner role), the core maintainer he appoints (who tends to act somewhat as a project manager, and does a lot of QA), and now the new Initiative leads, I don't feel that this would be necessary during a period of more experimental adoption.

If teams and individuals try this method, and find it works well enough that it could be adopted fully for all core development, it would be a very good idea. A much more structured process takes a lot of work to keep organized and stick with, and I believe it's best to keep developers' time free to be dedicated to development as much as possible. So at this point we would want to create some overall structure in the issue queues to be able to manage sprints efficiently, and have someone who can oversee the project's organization as a whole.

Realistically, I don't think this would be necessary very soon. For now, I would suggest anyone interested start with:

  • Running set sprints (recommending one month). Meeting at least once per sprint (on IRC or by voice) to review and align priorities amongst whoever is working on a given issue/set of issues/initiative. Communicating frequently on IRC (like we always do!).
  • Defining work to be done at the start of the sprint, and doing your best not to introduce new work into the in-progress sprint. In the beginning, underestimate how much can be done, until you get a feel for the momentum possible in a single sprint. If some things don't get finished, they go back into the "backlog" (or main queue of issues) and are reevaluated in regards to whether they will be in the subsequent sprint.
  • Making sure at the end of the sprint (or phase for larger initiatives), that you pass through the "gates" and deliver functionality that is completely "done" and ready to merge into the master branch.
  • Making sure that no bugs get into the master branch so it can always be ready to ship. As a result being able to make the release cycle any length, and launch on short notice if desired.
  • Avoiding technical debt (things that you put off that will need to be done later); making time for refactoring or bug-fixing as you go. Possibly even having a technical debt recovery sprint early on.
  • Defining the "gates" clearly and deciding who is responsible for signoff before merging into the master branch of core.
  • During this experimental phase, making sure there are some resources that teams and individuals can refer to and learn more about these methods. Possibly looking for project managers in the Drupal community who are familiar with agile development, and open to informal advising on IRC.
  • Remembering that this is a pilot project of sorts. Not putting too much pressure to follow capital-A Agile strictly. Letting this be an organic process to see what fits the community best, while leading to a more efficient, clean, and smooth process.

I was extremely happy to hear how open and interested the other attendees of Dries' core conversation were, and hope this helps clarify some ideas. This should be an open discussion rather than anything too prescriptive, so I would love to continue the conversation and hear your feedback, questions, and concerns below.

(ps. The Drupal 8 wordmark used with this post on the homepage is from Dries' slides, just for the record!)

Jan 07 2011
Jan 07

Drupal 7 is out!

Late Tuesday night was one of the most exciting Drupal events I've yet to be part of: the official release of Drupal 7. I've been working on the Drupal 7 documentation for over a year at this point, and crunching really hard the last few weeks on the Install and Upgrade guides, and core module documentation. I knew the day was coming, but it was even more inspiring than expected being a part of those last few days and hours leading up to the launch.

When Angie (aka. "webchick", Drupal 7's core maintainer) started rolling the release, it was quite obvious how excited everyone was on IRC, as this went on for about five minutes straight after she created the final release:

D7 Release IRC log

After some final cramming to get the rest of the launch related pages on Drupal.org updated (Lisa Rex, Bojhan Somers, Steve Karsch and Neil Drumm really deserve a special thank you, as they did a fantastic job getting that finished!), the launch page was finally published. Then the official tweet went out and the community rejoiced!

You can see a great list of all the contributors to the core Drupal 7 code on the http://drupalgardens.com homepage (for as long as it's up). We're really proud to have four of the Affinity Bridge crew listed on there among the almost 1000 contributors: Shawn (langworthy), Tylor (tylor), Tom (thegreat), and I (arianek) all contributed core patches along the way. You can see all the stats on the Growing Venture Solutions Contributors for Drupal 7 - Final Numbers post.

I know I've only been a party to a fraction of all the hard work that has gone into making Drupal 7 as fantastic as it is, but it's been such a great learning experience and wonderful way of getting familiar with the changes in Drupal 7 even before it launched.

What this means for us

We started planning for, and developing in, Drupal 7 ahead of the launch as soon as it was feasible. This has helped us start to get up to speed on the changes early, as well as provide our clients the benefit of one less upgrade to do in the next few years (especially since the Drupal 6 to 7 upgrade has a lot of big changes). It has also allowed us to help out a bit with finding bugs with the early Drupal 7 release candidates.

Drupal 7 core has a ton of improvements that impact both us (as a development team) and our clients (as end-users), and we're excited to transition to it. If you're relatively technical, I strongly encourage you to watch this fun and informative Drupal 7 Overview video from Lullabot (which is available free until Jan. 12, 2011). Some of the highlights of what I'm most excited about are:

  • There's been a ton of improvements to the user interface (UI). 
    • The new administrative overlay allows site administrators to perform some administrative tasks without navigating to the admin section of the site.
    • There are also new contextual links, which allow you to do things like modify the content of a block right from any page of the site, without having to go to the admin section of the site.
    • The new admin dashboard and shortcuts bar make navigating the admin area more intuitive, and allow you to set up a collection of custom links for admin pages you visit frequently.
  • Fields in core!
    • The Content Construction Kit (CCK) module from previous versions, which allowed you to configure additional content types, is now part of core as the Field module/Field API, with an updated Field UI.
    • Image handling in core! At long last, you can add image fields (with fancy cropping, resizing, and other manipulations) to content types in core without installing a single additional module! Fantastic!
  • Update manager is the new and improved version of the Update status module. Now, not only will it tell you what's out of date and provide a link to the recommended version, but you can install and update modules and themes through the admin interface! How cool is that?! 
  • The improved installer and default install profiles make installing Drupal 7 so quick and easy! The standard install profile takes care of a bunch of extra configuration that it didn't used to, and you can also select the minimal profile if you are a developer and want to start with a stripped down to the basics install.
  • We now have both public and private files at the same time by default. This means that it's easy to set up the site so some or all of the files you upload are private and can't be downloaded off the website by an unauthenticated user. This will be great for people who want to have collections of private or internal documents for their organizations, without having to worry someone could find them through a URL, or that they might get indexed in search engines.
  • RDFa in core. I've been working on wrapping my head around the semantic web and RDF for a while, and finally had an "aha" moment at Lin Clark's talk at DrupalCon Copenhagen in the summer. This is really exciting stuff! The basic idea is that RDFa adds markup that turns sites into structured data that can then be referenced by other sites. It basically turns all websites using RDF into a big shared database that can be cross-referenced. This is fascinating as far as the future of the web, as it will continue to increase collaboration between different sites and organizations, and help allow for single sourcing of information on the internet (ie. if you change something in one place, it self-updates wherever that information is being referenced).
  • On the much more technical side, there are things like:
    • The new minimum server requirement of PHP 5.2.4 has allowed for a more object-oriented (OO) system. This means that more kinds of content, not just nodes (pages, articles) but also users, taxonomy (ie. "tags"), and comments, are now what are referred to as "fieldable entities". Where we used to be able to add extra fields only to nodes, we can now, for example, add extra fields to taxonomy terms or comments (see the short sketch after this list).
    • More options as far as what kind of database you can use because of the DBTNG ("Database The Next Generation", the new db abstraction layer in Drupal 7).
    • Various security improvements
  • Theme improvements mean nicer looking, more interactive, efficiently built themes.
    • jQuery in core means lots of nice looking fancy UI and design elements that don't need Flash.
    • Fantastic new default themes ship with core, making for a great looking admin backend and a nice looking, usable out-of-the-box theme for your site.
    • New theme system opens up even more doors for the possibilities when you build custom themes (like we do for all of our clients).
  • Automated testing coverage! Drupal 7 core has fantastic coverage by the built in Testing (aka. Simpletest) framework - sleep well knowing that all of core is working as it should!
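
As promised in the "fieldable entities" point above, here is a small sketch of what that looks like with the Drupal 7 Field API: attaching a plain text field to taxonomy terms. The field name and the "tags" vocabulary are examples only; normally you would run something like this from an install or update hook.

<?php
// Define the field once...
field_create_field(array(
  'field_name' => 'field_source',
  'type' => 'text',
));
// ...then attach an instance of it to a bundle of any fieldable entity type,
// in this case terms in the "tags" vocabulary.
field_create_instance(array(
  'field_name' => 'field_source',
  'entity_type' => 'taxonomy_term',
  'bundle' => 'tags',
  'label' => 'Source',
));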

For more details, you can also watch the overview video on the Drupal 7 release page, or there's also a pretty detailed article on Linux User Magazine (just make sure to find the pager at the bottom of the body text, there are 4 pages to the article).

Finally, for anyone in and around Vancouver, come celebrate the Drupal 7 Release tonight at the D7 Release Party!

Nov 23 2010
Nov 23

This post is a followup to a presentation I gave at the Pacific North West Drupal Summit in Vancouver held back in October.

Background

We've discussed using the Simpletest module in a previous post, Drupal Simpletest Module Abridged. Simpletest is a powerful way to know that the code powering your Drupal site is operating correctly and that new functionality is not breaking what has already been implemented.

Running tests in the browser is time consuming. In this post we'll look at ways to automate the testing process. By the time you're finished reading, you'll know how to receive notifications with test results for your custom code each time you make a commit to your remote version control repository. And you'll also know how to configure daily test runs that cover all available tests including core and contrib.

Along with explaining how to test projects where the entire codebase is stored in version control, I will also describe testing a distribution. The examples given can be used for both Drupal 6.x and Drupal 7.x.

Assumptions

  • Your code is stored in a Git repository
  • Your Git repository supports POST callbacks (more on that to come)
  • You are using the latest version of drush, and for distributions drush_make
  • You will install a testing environment that is a functional copy of your Drupal project. For the following examples this site is http://test.example.com

Introducing automated testing

To get a good understanding of the workflow we will be creating, here is an overview of a common development workflow when running tests in the browser.

  1. Code new functionality
  2. Write a test for that functionality
  3. Run the test in the browser
  4. Wait for test results
  5. Possibly make fixes and return to item 3
  6. Eventually run all the available tests
  7. Wait a long time

In an ideal world you would run all available tests after you are satisfied with your new code and test. But then you'd end up staring at the browser for an hour or so.

Here is the development workflow we will create using automated testing.

  1. Code new functionality
  2. Write a test for that functionality
  3. Run the test using drush
  4. Wait for test results
  5. Possibly make fixes and return to item 3
  6. Commit code into version control and move to next task
  7. Receive a notification with the results of all your custom tests

Let's take a look at how we'll use drush to run tests from the command line.

Installing a drush test runner

UPDATE: Nov. 29, 2010: Drush HEAD now contains an updated version of the test.drush.inc file mentioned below. The drush command is now "test run" not "test". If you are using Drush HEAD >= Nov 29, 2010 or a Drush version > 3.3 please review the updated usage.
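
For reference, with the renamed command the invocations shown later in this post would presumably look something like the following (I'm assuming the arguments are otherwise unchanged; check the drush help output for your version):

$ drush test run --uri=http://test.example.com all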

A test runner is a command used by a human or a continuous integration server to run tests. We're going to install a drush command that can be used both for test automation and for running tests on the command line. Drush commands live in your ~/.drush directory. If you do not already have one, create it and download the following drush command:

$ git clone git://github.com/sprice/drush_test.git ~/.drush/drush_test

This version of test.drush.inc is based on the Drupal 6.x command created by Young Hahn. It has been updated to work with both Drupal 6.x and Drupal 7.x.

Running tests with Drush

To get a list of all the tests available cd to your webroot and type:

$ drush test --uri=http://test.example.com

Here is a look at a number of tests available:

Using drush you can run a single test, a class of tests, or all tests.

$ drush test --uri=http://test.example.com BlockAdminThemeTestCase will run a single test case.

$ drush test --uri=http://test.example.com Block will run all six of the block tests.

$ drush test --uri=http://test.example.com all will run every available test. This includes all the core and contrib tests and all the custom tests you have written for your project code.

When creating custom tests, it is a good idea to use the same test class for each one. That way drush can run all your custom tests with a single command.

Note that there currently seems to be an issue testing the included D7.x Standard profile with this drush test runner. There is no problem testing the D7.x Minimal profile or the custom profile I include later in this post.

Choosing a continuous integration server

A continuous integration server is the primary tool that makes test automation possible. There are many options available, though the choice I have seen most often used with Drupal projects is Hudson.

For this post we will be using the small and easy to configure CI Joe. I have enjoyed using CI Joe on projects because it is simple to install and configure and because it does the things that I want and nothing more.

Installing CI Joe

OSX 10.6

It can be helpful to have a basic test setup on your local development environment. However, this won't be an automated system.

$ gem install cijoe

$ echo "export PATH=/Users/username/.gem/ruby/1.8/bin:\$PATH" >> ~/.profile

$ source .profile

Linux

The following has been tested on Ubuntu 10.10. Similar instructions will probably work for other distributions though package versions may vary.

$ sudo apt-get install ruby

$ sudo apt-get install rubygems

$ sudo gem install cijoe

Add /var/lib/gems/1.8/bin to your PATH.

$ sudo nano ~/.bashrc

PATH=$PATH:/var/lib/gems/1.8/bin

$ source ~/.bashrc

Installing a test environment

It's important to note that you want to have a Drupal site running that is for testing only. Never test your production site (Simpletest should not be installed on production). And it's a good idea not to test your development or staging sites either. Automated testing will slow those sites down, and the purpose of introducing these practices is to increase efficiency.

As outlined in the assumptions of this post I expect you are familiar with installing and configuring a Drupal site. To begin, create a new duplicate of your project site with its own codebase and its own database.

The following examples will assume you will install your site to /var/www/test.example.com and that the site is available at http://test.example.com.
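
If it helps, here is one rough sketch of cloning an existing site into place; the source path and database names below are purely illustrative, and you could just as easily use drush or your own deployment tooling:

$ cp -R /var/www/example.com /var/www/test.example.com

$ mysqladmin create example_test

$ mysqldump example | mysql example_test

Then point the test site's settings.php at the new database.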

Configuring CI Joe

We need to add some configuration options to CI Joe. All configuration is done via Git config options. We're going to define a test runner command, select a branch of our version control repository to test, and protect the CI server with HTTP Basic Auth. We're also going to configure CI Joe to queue multiple test runs. For more information about CI Joe configuration see the documentation.

For this example I'm going to clone my project code and configure CI Joe to run tests within the CustomTestCase.

$ cd /var/www

$ git clone git://github.com/user/example.git test.example.com

$ cd test.example.com

$ git config --add cijoe.runner "drush test --uri=http://test.example.com/ CustomTestCase"

$ git config --add cijoe.branch develop

$ git config --add cijoe.user username

$ git config --add cijoe.pass secret

$ git config --add cijoe.buildallfile tmp/cijoe.txt

Start CI Joe on port 4567

$ cijoe -p 4567 .

Visit CI Joe at http://test.example.com:4567 using your username and password for access.

Click "Build". You'll need to refresh your browser to see when CI Joe has finished building.

When you click "Build", CI Joe checks out the latest code from the develop branch and executes the test runner command. CI Joe is agnostic when it comes to what runner to use. As long is it returns a non-zero exit status when it fails and a zero exit status when it passes, it just works. This also means that the run-tests.sh PHP script included in Drupal 7 core won't work as it doesn't exit in a way that CI Joe and some other continuous integration servers expect.

Getting notified

Two git hooks handle notifications. Depending on the test outcome, .git/hooks/build-failed or .git/hooks/build-worked will be executed after a build run. These are shell scripts and can do whatever you want them to. For our example we will simply have them send an email. Make sure they are executable.

$ nano /var/www/test.example.com/.git/hooks/build-failed

$ chmod +x /var/www/test.example.com/.git/hooks/build-failed

$ nano /var/www/test.example.com/.git/hooks/build-worked

$ chmod +x /var/www/test.example.com/.git/hooks/build-worked

The examples included in the CI Joe project include scripts that use the mailutils package.
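
As a minimal sketch, assuming the mail command from the mailutils package is installed (and using a placeholder recipient address), build-failed could be as simple as:

#!/bin/sh
# Notify the team that the CI Joe build failed
echo "Tests failed for the latest commit on test.example.com" | mail -s "[CI Joe] Build FAILED" you@example.com

The build-worked hook can be the same script with the subject and message adjusted.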

Now when a build runs you'll receive an email letting you know whether your tests passed or failed. Go ahead and click "Build" again to see.

Automation

We now have the testing system configured so that clicking the "Build" button is all that is needed in order to be notified of test results. The last step is to automate the build process so that every time commits are pushed to the remote repository, a build will be triggered.

Most hosted Git repositories support POST URLs. GitHub calls them Post-Receive URLs, Unfuddle calls them Repository Callbacks. When configured, any time the git repository receives a commit, it will POST information about the commit to the URL provided.

Luckily for us, CI Joe doesn't care how the POST is formatted. If CI Joe receives any POST at all a build will be triggered. That means that this will work:

$ curl -d "This is a POST" http://username:[email protected]:4567

Enter the URL of your test server as described in the above command, including the username, password, and port, into your Git repository's POST URL option. In GitHub that is found under Admin -> Service Hooks -> Post-Receive URLs.

That's it! You've now configured your project to include automated continuous integration. Push some commits to your remote repository, wait for an email and then pat yourself on the back.

Testing a Drupal distribution

I'll include a note about testing distributions since that's how many are building their Drupal projects these days. A distribution contains a manifest of the code used in the project, an installation profile and likely custom code as well. When you make a change to your project you're sometimes only updating a module version in the .make file. So if you click "Build" in CI Joe it will check out the new .make file but the related code in your project will still be the same.

To solve this problem we will rebuild the codebase on each test run. This may sound crazy to some, but it's important to remember that it's best to commit locally as often as you can and only push to your remote repository a few times a day. If you really want to speed things up you can always use Squid as a caching server for drush module downloads.

Here is a demonstration Drupal 7 distribution so you can get a good understanding of how it works.

Copy the simple_distro.make file to your system and use drush_make to install your project codebase.

$ drush make --working-copy simple_distro.make /var/www/test.example.com

Now one more Git hook needs to be configured that will trigger the project rebuild before running the tests, .git/hooks/after-reset. Make sure the script is executable.

$ nano .git/hooks/after-reset

$ chmod +x .git/hooks/after-reset

This script will cd to your profile directory and run the rebuild.sh script to trigger a rebuild. I've added the -y flag as I'm passing that to my rebuild.sh script.

Configure your git hooks noting that your project repository is in /var/www/test.example.com/profiles/profilename
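
As a minimal sketch, assuming the profile path above and a rebuild.sh script that accepts the -y flag, after-reset could look like:

#!/bin/sh
# Rebuild the distribution codebase before the tests run
cd /var/www/test.example.com/profiles/profilename
./rebuild.sh -y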

Complete daily test builds

Earlier I discussed that running all available tests is time consuming. Given that Drupal core and many contrib modules have good test coverage we're not going to worry about running those tests on every remote commit. We'll use cron to run all available tests once a day. Install a second testing platform for this purpose.

Configure the second test environment at /var/www/daily.test.example.com. Make changes to your notification scripts so that it's clear that these are daily test runs. Finally, change the test runner git config option to be:

$ git config --add cijoe.runner "drush test --uri=http://daily.test.example.com/ all"

You can run many CI Joe apps on a single machine as long as they are on separate ports. Start CI Joe as described above using port 4568.
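
Following the same pattern as before, that might look like:

$ cd /var/www/daily.test.example.com

$ cijoe -p 4568 .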

Now use cron to enable the tests to run at 1:00am.

Setup the cron job:

$ sudo crontab -e

0 1 * * * curl -d "POST" http://username:secret@daily.test.example.com:4568

Each morning you'll know whether your entire test suite is passing or not. Get ready to start sleeping easier each night.

Conclusion

This post has described how to configure your development environment to include automated continuous integration. By writing tests for the important functionality of your project you'll ensure the integrity of your code over time. By automating the testing process, you reduce the individual effort of running tests to zero and ensure your team is notified if there is a problem. Happy testing.

Nov 16 2010
Nov 16

Big news in Affinity Bridge offices today! The announcement went out on Drupal.org this morning that myself (arianek) and Jennifer Hodgdon (jhodgdon) from Seattle have been appointed by Dries Buytaert as the new Drupal Documentation Co-Leads! 

This has been a long time in the making. It's really exciting to have the work I've been doing recognized, and also have the ability to leverage that work to improve documentation and help build a stronger Docs Team. On top of that, my community spotlight was also posted today, which is incredibly heartwarming.

As someone who loves both working with Drupal and giving back to the Drupal community, participating as a member of the Docs Team has been an incredible experience for me. I've learned so much through this work, and been privileged to learn from some of the brightest Drupal contributors, and to call them friends after many hours spent on IRC and at various conferences and camps working on docs.

Why do I love working on docs?

  • I like editing. Style guides, consistent and clear language, and correct formatting and grammar make me happy.
  • I like writing. I have always enjoyed it, and this has been a great opportunity for me to apply that towards something that will be useful to many people.
  • I like learning! I have learned an immense amount about Drupal's functionality and the rationales behind much of how it works by working on docs. Helping out with docs is one of the best ways to learn more about Drupal, because it always gets you testing and tinkering. It's also a great opportunity to learn from the community's developers and themers, who are always happy to answer questions for Docs Team members.
  • I like mentoring. I'm still cultivating this skill, but I really enjoy helping others learn how to contribute, and trying to get other people enthused about working on docs.
  • I like giving back to the Drupal project. Open source projects depend heavily on the work of passionate and dedicated contributors, and Drupal has given so much to me, it's the least I can do.

Big thank-you!

I'd like to say a huuuuuge thanks to Mack. His ongoing support for my work in the community has been incredibly generous and motivating for me. As of about six months ago, he's increased Affinity Bridge's sponsorship to cover all of my work on Drupal docs/core, community event organization, and running docs sprints. Without his ongoing support I'm sure I wouldn't have been able to dedicate as much time as I do to this work. He's certainly walking the walk when it comes to being a nurturing business owner within the Drupal community, and deserves some major mackh++'s.

Want to work on docs too?

Oct 06 2010
Oct 06

This past weekend we attended the Pacific Northwest Drupal Summit and I gave an introduction to open data and beautiful maps. I talked about open data, covered the creation of a map in under 10 minutes, and discussed how to create beautiful maps using advanced techniques like custom tilesets. The video is already online thanks to the hard work of Justin Carlson, posted on his blog here and embedded below:

Paraphrasing some of the questions and comments at the end of the video:

Question 1: How does generating views with OpenLayers differ from GMap?
Question 2: Can you use the Google Maps API when using OpenLayers and a Google Maps tileset?
Question 3: How does location.module differ from other storage methods? How do you decide which storage method to use?
Question 4: Can I use tiles to display polygon data and still interact with it?
Question 5: If I have a database of addresses how can I convert them into latitude and longitude?
Question 6: What other input data types are supported by mapping modules?
Question 7: Can I use a shapefile to generate an overlay?
Question 8: What other tilesets can I use with OpenLayers?
Question 9: Have you played with polygons and highly granular shapefiles?
Question 10: How did you get the Google Map API entry step into the install profile?
Question 11: Ben comments that using geo.module instead of text fields is helpful if you have a lot of data because it decreases the server load by speeding up your queries.

For the talk I created an install profile and drush make file to build a simple and lean Drupal mapping distribution, which for now I have named Quickmaps. The source code for the distribution is available at github.com/tylor/quickmaps. I am making the slides available as a PDF here and have been tracking my Mapnik and Quantum GIS source files at github.com/tylor/vancouver-mapping.

The inspiration for this talk comes from my Water! drinking fountains map for Vancouver. This is a map I created just over a year ago now and it has been really engaging to see it being discussed and used in so many different ways. Here is the original screencast showing how to set up a water fountain map in under ten minutes:

I had a great time sharing this presentation and it led to some great conversations throughout the rest of the summit. Thanks to all of the organizers for putting on such a successful event!

Oct 05 2010
Oct 05

It was an incredibly jam-packed weekend for Drupallers here in the Pacific Northwest, with the 2nd annual PNW Drupal Summit in Vancouver. The Summit is a weekend conference that is targeted towards people already working with Drupal (moderate to advanced level), and is done in a regional mini-DrupalCon style: pre-scheduled sessions/tracks, keynotes from Drupal 7 maintainer Angie Byron (aka. webchick) and Chapter 3's Josh Koenig (aka. joshk), and Drupal 7 code sprints (that resulted in bringing the Drupal 7 criticals count from 13 to 8 over the course of the weekend, HOOAH!)

PNW Drupal Summit

We had 240 people altogether for 2 and a half days of awesome Drupal geekery. In addition to all of the attendees from BC, Washington, and Oregon, we had Drupalfriends from Idaho, Montana, Nebraska, Colorado, California, and Minnesota travel here to come join us in the fun!

The Affinity Bridge team was out in full force, presenting 5 sessions and one BOF over the course of the weekend, and co-organizing Friday's Drupal 7 code sprint, which was kindly hosted by our friends and neighbours at The Jibe. All of the sessions we presented were focused on sharing knowledge about some of the cutting edge technology and methods we've been using, and also about business/management/development practices specific to Drupal.

Shawn Price spoke on Simple Continuous Integration with git and CI Joe, demoing some of the tools and practices he's been using to set up continuous integration for one of our recent projects. Shawn continues to push the envelope for automated integration and testing at Affinity Bridge, and has been laying the groundwork for implementing these sorts of practices more broadly for our other projects.

PNW Drupal Summit

Mack Hardy talked about keeping Everything in Code! The importance of storing configuration in code (and, of course, in version control), and how it helps make deployment a much smoother process. He did an extensive review of using .make files, best practices for pushing changes between development/staging/production environments, and how to set up all the various tools needed for this. (Zoë helped prepare the fantastic slides for this, but was hit with the flu and wasn't able to co-present as planned.) Over the last year, we've become fairly religious about these processes, so it was great to be able to share them.

PNW Drupal Summit

Tylor Sherman's session was on Open Data and Beautiful Maps. He demoed how to set up a map from open data using Gmap and Views in under 10 minutes (I believe he clocked in at under 7 minutes on this), and then showed some of the more advanced tools (OpenLayers and MapBox) you can use to theme and customize maps. Tylor has been doing really cool things through his Drupal open data and mapping research, and actually had a map he built for fun used by the City of Vancouver prior to them building their own.

PNW Drupal Summit

Robin Puga, our team's Aegir expert, talked about how to use Aegir and Drush - Harder, Better, Faster, Stronger to make your development life easier. He reviewed the improvements that have been made in Aegir over the last year, especially the migration and multi-server management tools, and how with the help of Drush you can easily and quickly manage and migrate sites.

PNW Drupal Summit

Finally, Mack and myself rounded it all out with a session on Doing Business in an Open Source Ecosystem, and a BOF (birds of a feather) session on Agile Development and Project Management. In the business and open source session, we talked a lot about how working with Drupal can and should differ from working with proprietary software. Notable highlights include: the opportunities that working with and contributing to the Drupal project can afford you, why contributing should be a part of your workflow, how to incorporate/educate your clients on open source (which led to interesting discussions about the GPL and licensing), impacts on corporate culture, and personal benefits. In the Agile BOF, we mainly discussed some of the challenges in sticking to Agile methodologies, how to make it work with clients, and some of the nuances of team dynamics and development processes.

Ariane and Mack presenting on the Business in an Open Source Ecosystem

Keep an eye on the AB blog for more detailed posts from everyone with notes from their talks, links to slides, etc., and I'll be sure to update this with links to the videos from Tylor, Mack, and Robin's sessions when they're up. For most of us, this was our first time presenting on these topics (especially to such a large audience), so we'd love any feedback you have on the sessions (feel free to post in the comments or send it in via the contact page).

PNW Drupal Summit

All in all, an extremely fun and fascinating weekend! Everyone's photos can be viewed on Flickr #pnwds. Thanks so much to the other organizers and sponsors, as well as everyone who travelled near and far to attend. It was a huge team effort, and it turned out even better than I think any of us had hoped! See you all next year (or in Chicago)!

Sep 14 2010
Sep 14

A few weeks ago, I embarked on my first overseas trip to go to Copenhagen for this year's European DrupalCon. It was my 4th DrupalCon to date, but I've been wanting to attend one of the European ones for a while, as they have a reputation for having a different vibe than the North American ones (and of course so I could finally see some of Europe!)

The Core Dev Summit (+ Code Sprint Day)

Like the last conference in San Francisco, it was prefaced with the Core Developer Summit, which is a full day of presentations, discussions, and code sprinting on the core Drupal platform. The Core Dev Summit is the one day (held twice a year at this point) where a good number of the people who work on Drupal core come together to take a step back and discuss in-depth any ideas or concerns. This often leads into some dedicated sprinting on core-related issues (as well as some of the most crucial contributed modules).

I attended mainly for two purposes: to keep on top of what all the core developers are up to and get some face time with them (since I usually only talk to them online), and to make sure there was some representation from the Drupal Docs team there.

I've been working on the online Drupal documentation a lot lately, helping to prepare it for the Drupal 7 launch, and ended up leading an impromptu docs sprint when several people volunteered to work on the handbook for the second half of the day. It was great to get some help from both people who were new to docs as well as a couple fairly hardcore long-time developers. Big thanks go to Djun Kim (aka. puregin) for working on the handbook page for the new-to-Drupal-7 File module, and to Ken Rickard (aka. agentrickard) for working on the new-to-Drupal-core Field and Field UI handbook pages. It was fantastic having help from some great developers writing these, and Ken actually found a pretty big permissions bug while writing the page. As he posted afterwards:

...when you write documentation, you are forced to take a bit of code and really understand it. You [read] through it, make sure it does what you're saying it does, and test it. Guess what happens when you dig into code that deeply? You find bugs!

And because it's so encouraging (and true), I have to add this other bit he posted:

If you are interested in getting involved in core, working the docs queue is the single best way to do it. You find bugs other people miss, the patches are generally easy to get committed, you get used to the issue queue and creating patches, and best of all the patches are enormously valuable. Get to it!

(Off-but-on-topic, Angie Byron, aka. webchick, just put up a great post on contributing documentation on the Lullabot blog today, go read!)

Neil Drumm (aka. drumm) who works on the API docs and is currently helping manage the Drupal.org redesign was there as well, so I got to review some of the docs.drupal.org in-progress redesign with him. The redesign team has been doing a fantastic job, and I'm really looking forward to the relaunch and some of the freedom that will be afforded by having a separate subdomain for documentation.

I was also really pleased to get the opportunity to participate in a discussion about the CVS application process, which has been a hot topic recently. Sam Boyer (sdboyer), who is working on the Drupal git migration, led a discussion to get feedback from many long-time core contributors. Mainly, we talked about what is still broken in the process, what needs to change, and what small but effective changes could be made during the git migration to help improve matters. Main suggestions focused around ideas about how to manage namespace and numbers of modules, how to mentor new applicants, and the need to recruit more reviewers.

The post-conference Code and Docs Sprint Day was also extremely productive, even though I was feeling a bit off and had to lead the docs sprint from back at the apartment! We did a kickoff over Skype, then worked over IRC the rest of the day, and powered through a TON more of the core module handbook docs and some work on the install and upgrade guides. I missed being able to work in person with everyone, but still want to thank all who turned up and cranked out some awesome docs work, namely: Steve Kessler (DenverDataMan), Alex Pott (alexpott), Barry Madore (bmadore), Marika Lundqvist (marikalu), Miro Scarfiotti (smiro2000), Paul Krischer (SqyD), Carolyn Kaminski (Carolyn), Khalid Jebbari (DjebbZ), and last but not least Boris Doesborg (batigolix), who I am really sad not to have met in person, as he worked a bunch with me on the D7 Help initiative over the winter. Next time! You all rock, hope to see you around the docs queue and IRC till the next con.

The rest of DrupalCon...

I had to agree with what I'd heard about the European cons, as I did feel a lot more of a community vibe (probably due to the smaller size; it was about the same size as my first DrupalCon in Boston in 2008), and did not see a lot of the corporate aspects that have become part of the North American cons of late. Those are, of course, part of Drupal's growth, but they do change the atmosphere.

The sessions I went to were all really fantastic. I think my three favourites had to be:

  1. The Managing a Drupal Consulting Firm panel (video) - Todd Nienkerk and Aaron Stanush (Four Kitchens), Thomas Barregren (NodeOne), Vesa Palmu (Mearra), Matt Cheney (Chapter Three), Liza Kindred (Lullabot), Eric Gundersen (Development Seed), and Tiffany Farriss (Palantir) sharing stories and tips for how to be a successful and happy Drupal consulting firm. Great ideas, and bonus high comedic value!
  2. Jeff Miccolis' (jmiccolis) For Every Site a .make File - a great review of .make files and associated development practices (couldn't find the video; if anyone knows where it is, please comment!)

And though I didn't attend it, Amitai Burstein's session on Group, which is the Drupal 7 iteration of Organic Groups (video), was the crowd favourite, and highly recommended as one to watch online.

What else can I say? It was a fantastic week with a bunch of fantastic people. As @timbertrand put it:

"Dear Proprietary Social SW Vendor -
this is only a taste of our development team"

See you next time!
