
Jan 15 2020

CLI tools are a great way to automate tasks for your CI needs. For example, you may re-use the same shell scripts from project to project, or maybe you are integrating with some third-party service. Abstracting these tasks into your own CLI tool can be a great way to go.

You can find some examples in the Drupal world -- each hosting provider has some sort of CLI tool, and the European Commission has one to automate certain DevOps tasks (https://github.com/openeuropa/task-runner).

In my case I was building one for interacting with Diffy, a visual testing platform. The repository of the CLI tool is https://github.com/DiffyWebsite/diffy-cli.

When I initially thought about the idea of building a CLI tool, I thought about the tools I used myself. Among them were Drush and Terminus (Pantheon's CLI tool). They both use https://robo.li/, so my obvious choice was to use this framework as well.

Getting started

One of the best parts of Robo is that there is a Starter project (https://github.com/g1a/starter) that will scaffold the code for you and also push it to your GitHub.

Once you’ve done that you can start creating your commands right away.

Command syntax

Commands are simply classes that extend the \Robo\Tasks class.

For example here is a command that saves a configuration variable:
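A minimal sketch of such a command (the class name, command name, and Config helper here are hypothetical, not necessarily what diffy-cli actually uses):

```php
<?php

namespace DiffyCli\Commands;

use Robo\Tasks;

class ConfigCommand extends Tasks
{
    /**
     * Saves the API key to the configuration file.
     *
     * @command auth:login
     *
     * @param string $apiKey Your Diffy API key.
     */
    public function login($apiKey)
    {
        \DiffyCli\Config::saveApiKey($apiKey);
        $this->say('API key saved.');
    }
}
```

Robo discovers public methods on such classes and turns the annotations into command names, arguments, and help text.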

You will definitely recognize the annotations if you have built custom Drush commands in the past.

Configuration

As for configuration, Robo already promotes keeping a YAML config file in your home folder. For Diffy we store it in the ~/.diffy-cli/diffy-cli.yml file.

There is a component, https://github.com/consolidation/config/, that is used in Terminus and Drush. It is really great: it allows merging configs (imagine providing defaults and allowing overrides) and getting nested properties (get('foo.bar.baz') type syntax).

But in my case I just needed to save an API key, so I went with a custom Config class that simply saves or loads a parsed YAML file: https://github.com/DiffyWebsite/diffy-cli/blob/master/src/Config.php.
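A minimal sketch of such a Config class, assuming the Symfony YAML component (the method names are illustrative; see the linked file for the real implementation):

```php
<?php

namespace DiffyCli;

use Symfony\Component\Yaml\Yaml;

class Config
{
    public static function getConfigFilePath()
    {
        return getenv('HOME') . '/.diffy-cli/diffy-cli.yml';
    }

    public static function getConfig()
    {
        $path = self::getConfigFilePath();
        return file_exists($path) ? Yaml::parseFile($path) : [];
    }

    public static function saveApiKey($apiKey)
    {
        $config = self::getConfig();
        $config['key'] = $apiKey;
        $path = self::getConfigFilePath();
        if (!is_dir(dirname($path))) {
            mkdir(dirname($path), 0755, TRUE);
        }
        file_put_contents($path, Yaml::dump($config));
    }
}
```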

Distribution as a PHAR file, self:update

The best part of the starter project is an example of how to pack the tool into a single PHAR file.

The main idea is to use Travis to build and publish your PHAR file: install the Travis CLI tool and run "travis setup releases". Make sure to deploy releases for tags only.
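Running "travis setup releases" writes a deploy section into .travis.yml. A sketch of what the tag-only deployment could look like (the file name and the encrypted token are placeholders):

```yaml
# .travis.yml (excerpt)
deploy:
  provider: releases
  api_key:
    secure: "ENCRYPTED_GITHUB_TOKEN"
  file: diffy.phar
  skip_cleanup: true
  on:
    tags: true
```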

I have found more detailed instructions here and here.

Because the starter project sets setSelfUpdateRepository() for the runner, whenever a new release becomes available you can simply run yourcommand self:update to update the tool.

Dec 17 2019

When we talk about tools for local development, there is a huge variety of options. It is less and less common to set up a LAMP stack locally; instead, more developers use Docker-based environments. In the Drupal world there are quite a few of them: docker4drupal, Lando, Docksal, DDEV.

Over the last few years I have worked with Docksal and recently also started using DDEV. In this article I would like to demonstrate the differences between the two.

Maintainers

Docksal was initially built by FFW team. DDEV is maintained by Drud Technology. Both projects have online communities and leaders.

Docksal

This is the first docker based system I started playing with.

What I really liked is the convenience. You can run fin drush <command> or fin drupal <command> to run Drush or Drupal Console commands.

Another big thing is auto-starting projects. You can simply hit the project's URL in your browser and it will start the environment for you. Meanwhile, you can still manually run fin up or fin stop (fin project start / fin project stop) to start and stop projects.

You can control things like enabling/disabling XDEBUG by editing the docksal.env file. The main idea is that you use docksal.env for all your configuration. Meanwhile, if you need more granular configuration, like custom environment variables or custom PHP extensions, you can edit the docksal.yml file and extend the appropriate section.

If you want to enable/disable XDEBUG, for example, you can run fin config set --env=local XDEBUG_ENABLED=1 && fin project start, or edit the docksal-local.env file manually and then run fin project start.

Docksal comes with a concept of addons: shortcuts for installing additional containers. For example, if you need phpMyAdmin, simply run fin addon install pma and the appropriate container will be added to your configuration.

Here is a list of all the integrations Docksal provides: https://docs.docksal.io/tools/.

DDEV

I got introduced to DDEV after I had worked with Docksal. Because they are based on the same technology stack, they have a lot in common.

For example, as with Docksal, configuration is stored in two files, .ddev/config.yaml and .ddev/docker-compose.yaml, which looked very familiar. Meanwhile, there are many more configuration options by default in the config.yaml file. I definitely recommend looking through them.

DDEV by default already comes with Mailhog and PHPMyAdmin containers.

In DDEV, all commands inside the container are triggered with the ddev exec command, for example: ddev exec drush <command>.

DDEV doesn't have an auto-start feature, so you need to run ddev start every time.

What I really liked about DDEV is that you can install custom PHP extensions and packages without diving into Docker internals. There is a configuration option, "webimage_extra_packages", that accomplishes that.
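For example, an excerpt of .ddev/config.yaml (the package names here are just illustrations):

```yaml
# Extra Debian packages to install into the web container on build.
webimage_extra_packages: [php7.3-ldap, iputils-ping]
```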

To enable or disable XDEBUG you simply run ddev exec enable_xdebug or ddev exec disable_xdebug.

Which one to choose? DDEV or Docksal?

It is hard to say, especially for me, as the maintainers of both platforms are very good friends of mine.

Both platforms are based on the same technology stack, so you will find them really similar.

I found Docksal more friendly towards less technical people. Its auto-start feature rocks, and it hides a lot of options from the user, which is a good thing for beginners.

DDEV is tailored towards more experienced users: it exposes more things in configuration and makes it easier to customize your environment.

Can you use both?

Absolutely! The only place where projects overlap is NFS mounting. What I usually do when I need to switch between projects is enabling the proper mount.

For that, on my Mac, I edit the /etc/exports file:

# DDEV
/Users/ygerasimov -alldirs -mapall=501:20 localhost
# <ds-nfs docksal
#/Users/ygerasimov/Projects 192.168.64.1 192.168.64.100 -alldirs -maproot=0:0
# ds-nfs>

And then restart nfs with sudo nfsd restart.

But make sure you check the Docksal and DDEV documentation if you are on Linux or Windows. I am sure mileage will vary.

Happy coding!

P.S. If you liked this article you can support my side project Diffy.

Dec 16 2018

Drupal updates can be very different. Some of them are easy patches that you just roll out and forget; some of them break your site. The tricky part is that you never know how updates will behave on your site until you actually try them out.

This is why it is very tricky to estimate for clients how long it is going to take. They usually do not appreciate the answer "1 to 20 hours depending on some random factors".

So rolling out updates gets delayed and delayed. And then, after half a year or a year, we get to the situation where we know for sure the site will be broken after updates. And now the hero time begins.

Wouldn't it be nice if the site told you not only that it needs updates but also whether it is going to break after rolling them out?

Nowadays, thanks to Pantheon's multidev, it is technically possible to automate checking how your updates will behave on the site.

The main idea is to regularly check for updates (using a drush command); if updates are found, create a separate environment and roll the updates out there. Afterward, to ensure they didn't break the site (at least visually), we can run some visual regression testing. As a result we have a much more predictable answer to "how much effort will it take to roll the updates out".

Here is a full tutorial on how to set it up: http://docs.diffy.website/tutorials/put-your-sites-updates-on-autopilot-with-pantheon-multidev-and-visual-testing.

For sure, fixing smaller updates regularly is much easier than fixing a big breakage after a year of delays.

Jul 08 2016

When you set up Continuous Integration, you really want your deployments to run automatically. If you use Acquia hosting for your website, it makes a lot of sense to use all the environments in your workflow. But how can you automate deployments to these environments without touching the UI (copying the database and files, deploying code)?

The answer is the Cloud API.

You can call it either with drush commands or curl requests. We will cover the drush approach in this post.

I personally was heavily involved in a workflow called CIBox that uses a GitHub repo separate from Acquia's.

I used the 'master' branch to deploy to the DEV environment, but both the STAGING and PRODUCTION environments are tag based.

DEV environment deployment

First step of deployment for me was to sync the repositories.

cd /var/git/acquia
git pull github master
git push origin master
# Sleep for 30 seconds. We expect Acquia to update the code.
sleep 30

A little note: all these commands run on the CI server for me, so you will find plenty of ssh and scp calls to Acquia servers below.

The repository /var/git/acquia is a clone of the hosting repo, with a remote set up to our own private repo. If you use the hosting repo as the primary one you probably won't need this step.

In the Acquia UI I have set up the DEV environment to follow the master branch, so the code gets deployed automatically.

In my CI setup I keep a copy of the project's database on the CI server to use in all builds, so deploying this database to the DEV environment is the next step.

Code snippet

# Copy db to DEV server
scp /var/www/backup/DBNAME.sql.gz PROJECT.dev@staging-XXXX.prod.hosting.acquia.com:/home/PROJECT/proddump.sql.gz
# Deploy db on DEV server
ssh -t PROJECT.dev@staging-XXXX.prod.hosting.acquia.com 'rm -rf /home/PROJECT/DBNAME.sql && gunzip /home/PROJECT/proddump.sql.gz && cd /var/www/html/PROJECT.dev/docroot && drush sql-drop -y && `drush sql-connect` < /home/PROJECT/proddump.sql'

Remember, in order to run this operation you need to set up your SSH keys so that the jenkins user (I use Jenkins as my CI tool) can reach the Acquia servers without being asked for a password.

And the last step is to run all the updates, registry rebuilds, and whatever else your project requires.

# Run registry rebuild and clear caches
ssh -t PROJECT.dev@staging-XXXX.prod.hosting.acquia.com 'cd /var/www/html/PROJECT.dev/docroot && drush php-eval "registry_rebuild();" && drush cc all -y'
 
# Run hook updates
ssh -t PROJECT.dev@staging-XXXX.prod.hosting.acquia.com 'cd /var/www/html/PROJECT.dev/docroot && drush -y updb'

These commands could actually be set up with drush aliases, but I used the terminal approach since I was already using it for deploying the database. Just for consistency.

Another part I skip here is copying files over. We don't do that; instead we enable stage_file_proxy on the DEV and STAGE environments and point them to PROD, so files get copied over on request. This saves plenty of space.

STAGE environment deployment

As the staging environment uses tags, we need to change the code deployment part.

In order to use the Cloud API you need to set up a special private key in the Acquia UI. Please review https://docs.acquia.com/cloud/api/auth for more details.

After setting up the key, SSH to the DEV server, run 'drush ac-api-login', and provide your email and key. This sets up your credentials so you can run Cloud API drush commands.

And now, we can deploy the code.

ssh -t PROJECT.dev@staging-XXXX.prod.hosting.acquia.com 'cd /var/www/html/PROJECT.dev/docroot && drush @PROJECT.dev ac-code-deploy test --verbose'
 
# Sleep for 30 seconds. We expect Acquia to update the code.
sleep 30

This deploys the code from the DEV environment to STAGE and adds the tag automatically. Basically, it mimics dragging the code bar in the Acquia UI.

All other steps (database deployment and cache clear) are the same as with DEV environment.

PROD environment deployments

Production deployment is the same as STAGE, with the only difference being that we need to SSH to the STAGE server to deploy the code. And of course we do not deploy the database anywhere.

I am sure your projects might need some more steps: maybe reindexing Solr or clearing Varnish caches. All of these can be done with drush commands.

How do you do deployments? Please share your experience in comments.

Apr 20 2016

Visual testing is a great technique to keep your website's styles under control. But what else can visual testing catch? Maybe some problems with functionality?

It is always best to see visual testing on real-life projects. In this article we tested the Drupal.org website by comparing it with its staging environment and found some interesting issues.

Read full article on BackTrac's blog

Please leave your comments on BackTrac's blog instead of here. Thanks!

Jan 19 2016

Testing stable designs

Usually visual testing is very effective when a website has a stable design that should not be broken.

But there are projects whose whole point is to keep the design the same while migrating the content to a new platform. This is where visual regression testing can help a lot: it identifies very quickly which pages are not yet complete and which ones are 100% accurate and can be shown to the client.

Real-life example: migrations

Consider the following example. We had a project where we needed to migrate from a custom-written CMS to Drupal 8, and we had to keep the designs as they were. Because of the nature of the older CMS, the migration process was the most complex part of the project. The legacy system was page based, with the possibility of placing multiple blocks around each page. In Drupal we had to migrate each block separately and place the blocks in a similar way on pages.

So our strategy during migration was to tackle the low-hanging fruit first. This way we were able to migrate some pages pretty quickly and then evaluate the more complex scenarios.

Visual testing of migrated page

The moment we were able to migrate all the blocks of a page, including its URL, we could start using visual testing to find differences and fix them.

We quickly identified that our styles were a little bit off, breadcrumbs were missing, and the header and footer menu items were not fully migrated.

Now we can run this test as often as we like and develop the migration and the styles of this page until they match exactly. This is a lot like Test Driven Development, where you first write a failing test and then implement code to make the test pass: we compare two pages and work on the migrated one till it matches 100%.

Please share your thoughts about this way of using visual regression testing. Do you know any other interesting ways to use it? Let us know in the comments below.

Register at backtrac.io to try the tool yourself.

This is a cross-post from http://backtrac.io/blog/visual-testing-migrations.

Dec 11 2015

The idea of visually testing your website has been around for a while. It can range from someone manually checking web pages for visual defects to automating the process of detecting visual changes. Unwanted visual changes usually come from CSS stylesheet changes.

For developers, it is important to understand how to incorporate visual testing into your existing workflow. As usual there are multiple options out there, and we will list them below, from the easiest to the most complex.

Single Environment (Shared Hosting)

If your website is hosted on a shared hosting environment (e.g. GoDaddy, Bluehost, etc.) and you make changes directly on that environment, you could create visual snapshots of your website before you make the changes. After the changes have been made, you create another snapshot and compare it to the previous one.

Multiple environments

If you are hosting on Acquia, Pantheon, Platform.sh, or other providers that offer development and staging environments, you could leverage them. The main idea is to compare your environments. The best you can do is release to the staging environment (including pulling in a fresh database from production) and then compare your production and staging environments. Because they share the same database, content-related changes should be minimal, so you will be in much better shape to identify visual changes caused by the release.

Local development

You could also create "snapshots" of your own development environment locally. This way, at any moment you can check what has visually changed on your site. You could even keep a reference set of screenshots and compare each of your changes against it. This way you will be 100% sure which pages were affected by your CSS changes.

CI workflow

If you are using a Continuous Integration workflow where you have an environment per pull request (if you are using GitHub or a similar workflow), you could run your visual tests per build. Because this is the most flexible workflow, you could compare your builds either against your local set of screenshots or against your development/staging environment.

Tools

There are multiple tools you could use for each of the workflows described above, including plenty of open source solutions like Wraith, the Galen Framework, and others.

I would like to share the platform we are actively developing and using: http://backtrac.io. This is a SaaS platform that keeps all your screenshots in the cloud and has APIs for CI integration (http://backtrac.io/documentation/rest-api). All you need to get started is a list of your site's URLs; the tool can also build the list by crawling the site on its own.

Some of the advanced features of the platform are:

  • comparing environments
  • excluding specific elements from the page (e.g. banners)
  • extracting an element from the page (if you want to test only a specific part of it)
  • scrolling the page and delaying before taking the screenshot
  • automated monitoring setup

For more information about the features, please visit http://backtrac.io/documentation.

You are more than welcome to give the tool a try. We would be very happy to hear your feedback about the service.

Jan 12 2015

Logstash is a great tool to centralize logs in your environment. For example, we have several Drupal webheads that write logs to syslog. It would be really nice to see those logs somewhere central to keep an eye on your system's health and debug potential problems.

In this article I would like to show how easy it is to start using Logstash for local development.

First of all, in order to run Logstash, follow the instructions at http://logstash.net/docs/1.4.2/tutorials/getting-started-with-logstash.

Logstash has the following concepts:

  • inputs -- where we grab logs from. This can be files on the local file system, records of a database table, Redis, and many more.
  • codecs -- the way you serialize/unserialize your data. Think of it as running json decode when you read records or json encode when you save a log message.
  • filters -- instruments to select the particular log records we want to process. Example: syslog has many records, but we want to extract only the Drupal-related ones.
  • outputs -- where we pass our processed log records. This can be a file (in multiple formats), stdout, or, most interestingly, Elasticsearch.
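These concepts map directly onto sections of a Logstash config file. As a sketch, a config that keeps only Drupal-related records from syslog could look like this (the file path and the match condition are assumptions):

```
input {
  file {
    path => "/var/log/syslog"
    type => "syslog"
  }
}
filter {
  # Drop everything that does not mention drupal.
  if [message] !~ /drupal/ {
    drop { }
  }
}
output {
  stdout { codec => rubydebug }
}
```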

The tricky part comes when you need to install Elasticsearch to store your logs and Kibana to view them. There is a very nice shortcut for development purposes: use an already-built Docker image.

I have found the https://registry.hub.docker.com/u/sebp/elk/ image very handy.

So you need Docker installed (http://docs.docker.com/installation/ubuntulinux/). Then you pull the image and run it:

sudo docker pull sebp/elk
sudo docker run -p 5601:5601 -p 9200:9200 -p 5000:5000 -it --name elk sebp/elk

Now we have the Docker image running, with its ports forwarded to localhost.

In order to send your Logstash logs to Elasticsearch you need to use the elasticsearch output. Here is an example Logstash configuration file you can run for testing:

input { stdin { } }
output {
  stdout { codec => rubydebug }
  elasticsearch { 
    host => "localhost"
    port => "9200"
    protocol => "http"
  } 
}

Now when you run Logstash and enter a couple of messages, they will be fed to Elasticsearch, and you can open http://localhost:5601/ to see Kibana in action.

The next step would be to set up your own rules for extracting Drupal (or any other type of) logs and pushing them to Elasticsearch, but that is a very individual task that is out of the scope of this guide.

Jan 08 2015

Sometimes in front-end development we need very granular control over how our form buttons are rendered. So instead of the standard Drupal markup we want something like:

<button class="bird-guide-zip-submit button pea-green">
  <span class="hide-for-medium hide-for-large hide-for-xlarge">
     <i class="icon-magnifier"></i>
  </span>
  <span class="hide-for-tiny hide-for-small">Ok</span>
</button>

You would think that something like this:

$form['submit'] = array(
  '#type' => 'button',
  '#value' => '<span class="hide-for-medium hide-for-large hide-for-xlarge">
      <i class="icon-magnifier"></i>
    </span>
    <span class="hide-for-tiny hide-for-small">' . t('Ok') . '</span>',
  '#attributes' => array(
    'class' => array('bird-guide-zip-submit', 'button', 'pea-green'),
  ),
);

would do the job, but that is not the case, as #value is sanitized (which is great from a security perspective). In order to change this behavior for one particular button we should use:

 '#theme_wrappers' => array('mymodule_button'),

And then define your custom theming function:

/**
 * Implements hook_theme().
 */
function mymodule_theme() {
  return array(
    'mymodule_button' => array(
      'render element' => 'element',
    ),
  );
}
 
/**
 * Custom button theming function.
 */
function theme_mymodule_button($variables) {
  $element = $variables['element'];
 
  $element['#attributes']['type'] = 'submit';
  element_set_attributes($element, array('id', 'name'));
  $element['#attributes']['class'][] = 'form-' . $element['#button_type'];
  return '<button' . drupal_attributes($element['#attributes']) . '>' . $element['#value'] . '</button>';
}

Be aware that when you use this technique you take responsibility for making sure you do not output any potentially harmful HTML in #value, as it is not sanitized.

Dec 23 2014

Behat is a great testing tool that already has a lot of documentation.

In Drupal we have an extension that helps us build tests. Behat tests are configured in a YAML file (the URL of your website and other options). Lately I needed to set custom cURL options (for the Goutte driver), and because Drupal Extension 3 uses the Guzzle 4 library it was not obvious how to do that.

The trick is to check how Guzzle expects the options (http://guzzle.readthedocs.org/en/latest/faq.html#how-can-i-add-custom-cu...) and then place them in the behat.yml file accordingly.
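As a sketch, a behat.yml could look like this (the exact option nesting is an assumption based on the Guzzle 4 FAQ and Mink Extension's guzzle_parameters pass-through; 64 is the numeric value of CURLOPT_SSL_VERIFYPEER, since YAML cannot use PHP constants):

```yaml
default:
  extensions:
    Behat\MinkExtension:
      goutte:
        guzzle_parameters:
          defaults:
            config:
              curl:
                64: false
```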

Technically, the options are added at https://github.com/guzzle/guzzle/blob/4.2.3/src/Adapter/Curl/CurlFactory... so if you need to make sure your options were passed, debug that function.

Hope this will save you some time.

Dec 22 2014

The Panels standard renderer has a very flexible, undocumented feature for controlling the sequence in which panes are rendered.

By default you can use 'render first' and 'render last' in your content type definition, so you can already control, for example, which pane should be rendered last. The undocumented, more interesting part is hook_panels_panes_prepared_alter(), which runs after all panes have been set up. You can alter the array of panes there and thereby control the sequence in which they are rendered. This feature is super handy when you have dependent panes.

An example: suppose you have several panes with lists of news articles. Say one block displays 3 items and another displays the next 5. The problem is that editors can place the blocks independently, so you do not know whether you have two blocks on the page or only one. But you do know that the 3-item block should render first and then the 5-item one, so the first block has the most recent articles. Using the 'render first' and 'render last' properties you can do the trick: when you render the 3-item block you set a static variable, and when you render the 5-item block you check that variable; if the previous block set it to TRUE, you offset your list by 3 items so you don't duplicate articles in both lists.

Meanwhile, if you have multiple combinations of similar blocks, you can use hook_panels_panes_prepared_alter() to control which block renders first, and you will still have very nice lists of articles.
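A minimal sketch of such a hook implementation (the pane subtype names here are hypothetical):

```php
<?php

/**
 * Implements hook_panels_panes_prepared_alter().
 */
function mymodule_panels_panes_prepared_alter(&$panes, $renderer) {
  // Make sure the 3-item news pane always renders before the 5-item one,
  // regardless of where editors placed them on the page.
  uasort($panes, function ($a, $b) {
    $order = array('news_3_items' => 0, 'news_5_items' => 1);
    $weight_a = isset($order[$a->subtype]) ? $order[$a->subtype] : 2;
    $weight_b = isset($order[$b->subtype]) ? $order[$b->subtype] : 2;
    return $weight_a - $weight_b;
  });
}
```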

I would like to thank Andrii Tiupa for pointing me to this brilliant feature.

Dec 26 2013

The current project I am working on uses the profile2 module for user profiles, so it is a pretty common task to replace all links/references on the site with the user's proper name from their profile instead of their Drupal username.

This is really easy to achieve using hook_username_alter():

<?php
/**
 * Implements hook_username_alter().
 */
function mymodule_username_alter(&$name, $account) {
  $contact_profile = profile2_load_by_user($account, MYMODULE_PROFILE_CONTACT_TYPE);
  if (isset($contact_profile->field_name[LANGUAGE_NONE][0]['value'])) {
    $name = $contact_profile->field_name[LANGUAGE_NONE][0]['value'];
  }
}
?>

And if you want to display a user's name somewhere in code, do it using the function format_username().
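For example, a quick usage sketch (assuming a Drupal 7 $account object is available):

```php
<?php
// format_username() runs hook_username_alter(), so the profile2 name
// from the implementation above is applied automatically.
$name = format_username($account);
print l($name, 'user/' . $account->uid);
```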

Feb 26 2013

There are a lot of articles on the internet about functional PHP, including a very nice presentation held in Munich: http://munich2012.drupal.org/program/sessions/functional-php.

I was thinking about applying these techniques to every-day coding and came up with two quite useful examples.

Cleaner code in form builders

Usually functions that generate form arrays are big. Sometimes they are very big, and it gets complicated to quickly understand which elements are in the form and what their properties are.

This example is trying to address this problem.

<?php
  $textfield = array('#type' => 'textfield');
  $required = array('#required' => TRUE);
  $size = function($number) { return array('#size' => $number); };
  $size_maxlength = function($number) { return array('#size' => $number, '#maxlength' => $number); };
 
  $form['first_name']     = $textfield + $size(20);
  $form['last_name']      = $textfield + $size(20);
  $form['business_name']  = $textfield + $size(20);
 
  $form['account_number'] = $textfield + $required + $size(20);
  $form['pin'] =           $textfield + $required;
  $form['ssn'] =           $textfield + $required + $size_maxlength(4);
 
  $form['phone_number_part1'] = $textfield + $required + $size_maxlength(3);
  $form['phone_number_part2'] = $textfield + $required + $size_maxlength(3);
  $form['phone_number_part3'] = $textfield + $required + $size_maxlength(4);
?>

Now it is pretty clear what the elements are and what properties they have. If you ask about default values or titles: default values are assigned to elements using

<?php
foreach ($form as $form_key => &$element_form) {
    if (!isset($default_values[$form_key])) {
      continue;
    }
    $value_key = ($element_form['#type'] == 'value') ? '#value' : '#default_value';
    $element_form[$value_key] = $default_values[$form_key];
  }
?>

And all the titles are set up in theming function for this form.

In this example I like how compact the form constructors are and how easy they are to read.

Recursive iterating in form arrays

The second example is also related to forms.

Once I needed to set all form sub-elements to not be required, and there was another function that disabled all elements.

So the functions looked like:

<?php
function custom_module_form_element_recursive_disable(&$element) {
  if (!is_array($element)) {
    return;
  }
  $element['#disabled'] = TRUE;
  foreach (element_children($element) as $key) {
    custom_module_form_element_recursive_disable($element[$key]);
  }
}
 
function custom_module_form_element_recursive_not_required(&$element) {
  if (!is_array($element)) {
    return;
  }
  if (!empty($element['#required'])) {
    $element['#required'] = FALSE;
  }
  foreach (element_children($element) as $key) {
    custom_module_form_element_recursive_not_required($element[$key]);
  }
}
?>

As you can see, it is pretty simple code, but there is a lot of duplication in it.

Using functional PHP we can avoid the duplication:

<?php
function _custom_module_recursive_form_visitor(&$element, $function) {
  if (!is_array($element)) {
    return;
  }
 
  $function($element);
 
  foreach (element_children($element) as $key) {
    _custom_module_recursive_form_visitor($element[$key], $function);
  }
}
 
function custom_module_form_element_recursive_disable(&$element) {
  $function = function(&$element) {
    $element['#disabled'] = TRUE;
  };
  _custom_module_recursive_form_visitor($element, $function);
}
 
function custom_module_form_element_recursive_not_required(&$element) {
  $function = function(&$element) {
    if (!empty($element['#required'])) {
      $element['#required'] = FALSE;
    }
  };
  _custom_module_recursive_form_visitor($element, $function);
}
?>

Now we can add other functions that need to act recursively on form elements.

If you have some other usages of anonymous functions in PHP please share.

Thanks for reading.

By the way, I have submitted a session for DrupalCon Portland (http://portland2013.drupal.org/session/clean-code) where I will also touch on this topic from the perspective of keeping your code clean. You are welcome to comment on it.

Sep 12 2012

We all know that Symfony is already in Drupal 8 core, but how does it work, and how do the two systems work together? Not many people fully understand this integration. Me neither, but I would like to publish my research notes about how Drupal 8 works now. This material is based on a snapshot from the beginning of September, and I really hope more things will happen, so this information is only relevant for some time.

I don't have any real-life experience building projects with Symfony, so my knowledge is very close to that of the majority of Drupal developers.

So let's start.

The first change we see is in index.php: the bootstrap is done at the DRUPAL_BOOTSTRAP_CODE level instead of DRUPAL_BOOTSTRAP_FULL as in Drupal 7. From the documentation about Drupal 8's bootstrap phases we can see that the phase that "initializes language, path, theme, and modules" is not run yet.

Next we see that we instantiate an object of the DrupalKernel class (which inherits from the Symfony Kernel class).

The Kernel class in Symfony is responsible for building the Dependency Injection Container and registering bundles. We will come back to the Dependency Injection Container later. Bundles are like the "modules" of the Symfony world.

The next thing that happens is that we instantiate a Request object (from the Symfony HttpFoundation component). This is the object that grabs all the global variables from the request and provides methods to retrieve them. The idea behind it is that we do not use any global variables anywhere in the code but interact only with this object as the source.

The next part is simple: we ask the kernel object to handle our request.

$response = $kernel->handle($request)->prepare($request)->send();
$kernel->terminate($request, $response);

Now let's take a look at the internals of the kernel object and what it does to handle our request.

The kernel's handle() method calls $this->boot() and then $this->getHttpKernel()->handle($request, $type, $catch).

Booting the kernel consists of the following steps:

  • Registering bundles. Bundles are like modules in the Drupal world. DrupalKernel overrides the registerBundles() method to register CoreBundle and allows other modules to provide Symfony-like bundles to the system.
  • Initializing the Dependency Injection Container. The container is the object that holds all information about dependencies between objects: when you want to instantiate an object of class A, and the system needs to pass an object of class B to A's constructor, the container knows about this dependency and does it for you. In Symfony code this is also called the Service Container. The documentation I have found about it is here: http://symfony.com/doc/current/components/dependency_injection/introduct.... The container is also statically cached via the drupal_container() function, so we can access it anywhere in the code. The minimal available container consists of information about Drupal's config system (config.storage.options). Every bundle also registers new components in the container (see the CoreBundle::build() method): it registers the services 'request', 'dispatcher' (http://symfony.com/doc/current/components/event_dispatcher/introduction....), 'resolver', 'http_kernel' and 'language_manager'. We can also see that plenty of subscribers are registered. In Drupal terms this is similar to our hooks system: an event is fired, and the dispatcher knows which subscribers are registered for which events and executes their corresponding methods (see Drupal\Core\EventSubscriber\PathSubscriber for example).
  • Passing the container to all registered bundles (it is saved in each bundle's container property) and calling their boot() method. The boot() method doesn't do anything at the moment.
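The container idea from the steps above can be sketched in plain PHP (toy classes with made-up names, not Symfony's real service container):

```php
<?php
// Class A needs a B in its constructor; the container knows this
// and wires the dependency for us.
class B {}

class A {
  public $b;
  public function __construct(B $b) { $this->b = $b; }
}

class ToyContainer {
  private $factories = array();
  private $services = array();

  public function register($id, $factory) {
    $this->factories[$id] = $factory;
  }

  public function get($id) {
    // Services are created once and cached, like in a real container.
    if (!isset($this->services[$id])) {
      $factory = $this->factories[$id];
      $this->services[$id] = $factory($this);
    }
    return $this->services[$id];
  }
}

$container = new ToyContainer();
$container->register('b', function ($c) { return new B(); });
$container->register('a', function ($c) { return new A($c->get('b')); });

$a = $container->get('a'); // A arrives with its B dependency injected.
```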

After booting the kernel we run its handle() method. It comes down to firing the KernelEvents::REQUEST event, where our routing system plays its role (I believe the new routing system deserves a separate article). Once we have a controller, we fire the KernelEvents::CONTROLLER event, which resolves the menu callback (in terms of our old menu system). After executing the menu callback we fire the KernelEvents::VIEW event, and our subscribers prepare the $response object that is finally returned.

With the response object available, we run its prepare() method, which prepares the headers, and its send() method, which prints the rendered output.

Finally we run the kernel's terminate() method, which in the end fires the KernelEvents::TERMINATE event.
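The event/subscriber flow described above can be sketched in plain PHP (a toy dispatcher with invented names, not Symfony's real EventDispatcher):

```php
<?php
// Subscribers register callbacks per event name; dispatch() runs them
// in registration order, letting each one modify the payload.
class ToyDispatcher {
  private $listeners = array();

  public function addListener($event, $callback) {
    $this->listeners[$event][] = $callback;
  }

  public function dispatch($event, &$payload) {
    if (!empty($this->listeners[$event])) {
      foreach ($this->listeners[$event] as $callback) {
        $callback($payload);
      }
    }
  }
}

$dispatcher = new ToyDispatcher();

// A "subscriber" reacting to the request event, similar in spirit to
// Drupal\Core\EventSubscriber\PathSubscriber resolving the system path.
$dispatcher->addListener('kernel.request', function (&$path) {
  $path = ($path === '') ? 'node' : $path;
});

$path = '';
$dispatcher->dispatch('kernel.request', $path);
echo $path; // node
```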

This is a very brief overview of the system from a beginner's point of view. I hope it gave you some feeling for how things work now, or at least triggered your interest to learn more. Also, please remember that things are changing, as these parts are under active development right now.

Aug 01 2012

I would like to announce the fifth DrupalCamp Kyiv. The official website of the event is http://camp12.drupal.ua/en. Twitter: @drupalcampkyiv, hashtag #dckyiv12.

The camp will be held on 14-15 September 2012 at the i-Klass education center, located in a spectacular part of old Kyiv in front of Pechersk Lavra.

In previous years we gathered about 400 people from Ukraine, Russia, Belarus and other countries, and this year we are pretty sure we won't have fewer :)

We already have 16 sessions submitted (http://camp12.drupal.ua/en/program/session), including one in English (http://camp12.drupal.ua/uk/content/cache-king-english). New speakers and attendees are very welcome. If you have any questions about speaking, please contact me directly at yuri.gerasimov(at)gmail.com.

This event is one of the biggest in Eastern Europe and can be a great opportunity to start cooperating with local companies. If you are interested in sponsorship, here is the information for sponsors: http://camp12.drupal.ua/en/content/why-be-sponsor-drupalcamp-kyiv-2012

Looking forward to seeing you all in Kyiv!

Jun 01 2012

The Services module is a great tool for exposing your website's APIs to other applications.

When we work with the Services module, most changes can be made only in code. But sometimes clients ask for a configurable interface for the GET calls we expose. For example, a client needs an "export" call for another of his applications that imports data from our site. This is where the Services Views module plays its part very nicely.

This article is about a very exciting feature of the Services Views module.

So please take a look at the demo:

[embedded content]

I hope you have enjoyed this functionality and will give it a try (and report any bugs to the issue queue) :)

There is an alternative way to accomplish similar functionality -- using Views Datasource -- but with it the following will not be possible:

Dec 04 2011

Last weekend we held a great Drupal event -- DrupalCamp Donetsk 2011. Donetsk is one of the host cities of next year's UEFA EURO 2012, and it was a really great place for our meetup.

About 170 attendees arrived and took part in 20 sessions. It is very interesting that more and more new companies in Ukraine are starting to develop with Drupal and are showing huge interest in local meetups. We had people from different cities of Ukraine and even several people from Russia and Belarus.

We had a beautiful bar party as well. A rock group in one of the bars played a couple of Drupal songs that were a huge success!

You are welcome to read more about the event's figures here: http://donetsk.drupal.ua/en/news/finishing-drupalcamp-donetsk-feedback-a...

Codesprint

As one of the organizers of the codesprint, I would like to share our experience. We didn't have superstar developers at the codesprint but still managed to do a very nice job -- rewriting the draggableviews module. There are plenty of very good developers in Ukraine, but it is a pity that not many people understand the importance of contributing. So we started from the ground level of using the issue queue and creating patches, and finished with working code. We had several teams (no more than 6 people each) working on different topics: draggableviews, a couple of Views issues, porting api.drupal.ru to Drupal 7, and setting up a development environment (for beginners). This way we were able to attract new people to contributing while keeping our focus on the final result.

I would like to thank all our sponsors, and I hope to see more people coming to Ukraine for Drupal events!

Photos are available here:
https://picasaweb.google.com/115447083216533594560/DrupalCamp2011?authke...
https://plus.google.com/u/0/photos/100373468049652530283/albums/56805220...
https://plus.google.com/photos/111053704258459064817/albums/567977011279...
https://plus.google.com/photos/111053704258459064817/albums/567978260531...

Dec 04 2011

When writing our own module we, as good developers, should allow other people to extend or modify parts of it. Yes, we are talking about defining our own module's hooks. But what can we do if we need to "extend" our module in several places and must be sure that a module implementing one hook also implements another? What should we do if we have a lot of such cases and need to ensure the consistency of other modules' implementations? Also, sometimes we would like the user to decide which implementation to run (so we want some kind of administration page where we select which extension is active).

One solution for such situations is to define ctools plugins as the way to extend our module's functionality. In this article I would like to explain how to do this and how to take care of consistency.

First of all, of course, we need ctools as a dependency. But honestly, that should not be a problem, as nowadays it is nearly a must-have dependency for every project.

For the practical example we will take a very simple task -- a form that performs different operations on two numbers. Every operation should be implemented as a plugin, as it needs to do several things:

1. Validate input
2. Calculate operation
3. Display nice message with result

So let's see how we define our "operation" plugin type and how we use it in our code.

To define the plugin type we implement hook_ctools_plugin_type().

<?php
/**
 * Implements hook_ctools_plugin_type().
 *
 * Has plenty options. See ctools/help/plugins-creating.html
 */
function example_ctools_plugin_type() {
  return array(
    'operation' => array(
      'use hooks' => TRUE,
    ),
  );
}
?>

This hook has many options. Rather than rewriting the help document here, I recommend reading it when you decide to create your own plugins.

Now let's see the code that uses the plugins (the main module).

<?php
/**
 * Form constructor for Calculations demo.
 */
function example_calculation($form, $form_state) {
  // Load all plugins type "operation".
  ctools_include('plugins');
  $operations = ctools_get_plugins('example', 'operation');
  $operation_options = array();
 
  foreach ($operations as $id => $operation) {
    $operation_options[$id] = $operation['label'];
  }
 
  if (empty($operation_options)) {
    $form['message'] = array(
      '#markup' => t('Sorry no operation plugins available in the system.'),
    );
    return $form;
  }
 
  $form['operations'] = array(
    '#type' => 'checkboxes',
    '#title' => t('Please choose Operations'),
    '#options' => $operation_options,
  );
 
  // Form elements...
 
  return $form;
}
?>

Our form has checkboxes for the operations available in the system, two textfields for the numbers, and a submit button. Every operation plugin has a 'label' property that is shown as the checkbox label.

This is how we validate the form:

<?php
/**
 * Validate handler.
 */
function example_calculation_validate($form, &$form_state) {
  $fv = $form_state['values'];
  $operations = array_filter($fv['operations']);
 
  foreach ($operations as $operation) {
    if ($instance = _example_get_instance($operation, $fv['number_a'], $fv['number_b'])) {
      $instance->validate();
    }
  }
}
?>

Here I would like to explain in a bit more detail. Every plugin defines a class that performs the main job for us: validate, calculate and show the message. Every class should inherit the abstract class "example_operation", which allows us to ensure that the plugin class is consistent.

Here is how _example_get_instance() works:

<?php
function _example_get_instance($id, $number_a = NULL, $number_b = NULL) {
  $instances = &drupal_static(__FUNCTION__);
 
  if (!isset($instances[$id])) {
    ctools_include('plugins');
    $plugin = ctools_get_plugins('example', 'operation', $id);
    $class = ctools_plugin_get_class($plugin, 'handler');
    $instances[$id] = new $class($number_a, $number_b);
 
    // Check that the plugin class has inherited the proper 'example_operation' class.
    if (!is_subclass_of($instances[$id], 'example_operation')) {
      $instances[$id] = NULL;
    }
  }
 
  return $instances[$id];
}
?>

So here we explicitly check whether the plugin's class inherits from our example_operation class, and if not, we simply don't return an object.
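As a standalone illustration of this check (class names here are made up for the demo), is_subclass_of() is TRUE only for instances whose class actually extends the expected base:

```php
<?php
// The expected base class that all well-behaved plugins extend.
abstract class demo_operation_base {
  abstract public function calculate();
}

class good_operation extends demo_operation_base {
  public function calculate() { return 42; }
}

// Implements the same method but does not extend the base class.
class rogue_operation {
  public function calculate() { return 42; }
}

var_dump(is_subclass_of(new good_operation(), 'demo_operation_base'));  // bool(true)
var_dump(is_subclass_of(new rogue_operation(), 'demo_operation_base')); // bool(false)
```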

Now let's take a final look at our form processing -- the submit handler:

<?php
function example_calculation_submit($form, &$form_state) {
  $fv = $form_state['values'];
  $operations = array_filter($fv['operations']);
 
  foreach ($operations as $operation) {
    if ($instance = _example_get_instance($operation, $fv['number_a'], $fv['number_b'])) {
      drupal_set_message($instance->resultMessage());
    }
  }
}
?>

This is all very nice, but how do we implement plugins in our modules? There are two ways. The first is to implement hook_MODULE_PLUGIN. CTools automatically creates this hook for every plugin type defined (there is an option to not accept hook implementations of plugins). In our case this is hook_example_operation().

<?php
function multiple_divide_example_operation() {
  return array(
    'multiple' => array(
      'label' => t('Multiple'),
      'handler' => array(
        'class' => 'example_multiple_operation',
      ),
    ),
    'divide' => array(
      'label' => t('Divide'),
      'handler' => array(
        'class' => 'example_divide_operation',
      ),
    ),
  );
}
?>

Here we implement two operation plugins, providing a label and a handler class for each (we will come back to the classes a bit later).

Another way to implement a plugin is to define the folder where ctools should look for files containing plugin implementations (hook_ctools_plugin_directory()):

<?php
/**
 * Implements hook_ctools_plugin_directory().
 */
function sum_ctools_plugin_directory($module, $plugin) {
  if (($module == 'example') && ($plugin == 'operation')) {
    return 'plugins/operation';
  }
}
?>

This means that the sum module tells ctools to look in the plugins/operation folder for plugin implementations. Here is the file that will be found:

<?php
/**
 * Operation plugin for Example module.
 *
 * Calculate sum of two numbers.
 */
 
$plugin = array(
  'label' => t('Sum'),
  'handler' => array(
    'class' => 'example_sum_operation',
  ),
);
 
class example_sum_operation extends example_operation {
  public function calculate() {
    return $this->a + $this->b;
  }
}
?>

So in the case of a file-based implementation, we provide a $plugin variable as an array of properties. It is best to do this at the beginning of the file.

Both ways of implementing plugins are convenient and widely used. For example, Panels uses file-based plugins, while the Feeds module recommends implementation via hook. I would say that if you expect one module to provide many plugin implementations, it is better to keep them in separate files, as the hook implementation array would get really big. But it is up to you.

Now let's come back to the classes. You can already see that in the case of the sum operation, the class just implements the calculate() method.

This is what the abstract class looks like:

<?php
abstract class example_operation {
  // Numbers we make calculations on.
  protected $a;
  protected $b;
 
  /**
   * Save arguments locally.
   */
  function __construct($a = 0, $b = 0) {
    $this->a = $a;
    $this->b = $b;
  }
 
  /**
   * Validate arguments. Return error message if validation failed.
   */
  public function validate() {}
 
  /**
   * Main operation. Calculate operation and return the result.
   */
  public function calculate() {}
 
  /**
   * Return result string for the operation.
   */
  public function resultMessage() {
    return t('Result of !operation with arguments !argument_a and !argument_b is !result.', array(
      '!operation' => get_class($this),
      '!argument_a' => $this->a,
      '!argument_b' => $this->b,
      '!result' => $this->calculate(),
    ));
  }
}
?>

So in order to provide a working operation we really just need to inherit this abstract class and implement the calculate() method (this is how the "sum" operation is implemented).

For the divide operation we have also implemented the validate() and resultMessage() methods:

<?php
class example_divide_operation extends example_operation {
  public function validate() {
    if (empty($this->b)) {
      form_set_error('number_b', t('Can\'t divide by zero!'));
    }
  }
 
  public function calculate() {
    return $this->a / $this->b;
  }
 
  public function resultMessage() {
    return t('!argument_a divided by !argument_b is !result.', array(
      '!argument_a' => $this->a,
      '!argument_b' => $this->b,
      '!result' => $this->calculate(),
    ));
  }
}
?>

For the multiplication operation we implemented the calculate() and resultMessage() methods but deliberately did not inherit from our base abstract class, to demonstrate that it won't work:

<?php
class example_multiple_operation {
  public function calculate() {
    return $this->a * $this->b;
  }
 
  public function resultMessage() {
    return t('Multiply !argument_a on !argument_b is !result.', array(
      '!argument_a' => $this->a,
      '!argument_b' => $this->b,
      '!result' => $this->calculate(),
    ));
  }
}
?>

One of the most important things about defining your own plugin system is documentation, as it can be very hard for other developers to get familiar with how a plugin works and what it should implement. Here, defining plugins as classes that inherit an abstract class is quite convenient, as we can put our documentation in the comments on the abstract class's methods.

I am sure this artificial example could be implemented more simply, but I hope it made it clearer how to define and use your own ctools plugins. A lot of modules use this system, and surely you will need to write plugin implementations for other modules at some point.

Example module is attached.

This article is based on my presentation during DrupalCamp Donetsk 2011. Slides are available on http://www.slideshare.net/ygerasimov/drupal-camp-donetsk-c-tools

Dec 02 2011

In this note I would like to share a solution for a quite common task: showing node field titles even when the fields are empty. By default, if a field is empty it is not included in the rendered node. But in practice a client sometimes wants to see the field's title even if its value is not set. As this task kept me debugging for a while, I hope it will save someone else's time.

The solution is to use hook_field_attach_view_alter(). This hook is invoked after the Field module has added the fields' renderable arrays to the node's content.

<?php
/**
 * Implements hook_field_attach_view_alter().
 *
 * Show titles of empty fields.
 */
function example_field_attach_view_alter(&$output, $context) {
  // We proceed only on nodes.
  if ($context['entity_type'] != 'node' || $context['view_mode'] != 'full') {
    return;
  }
 
  $node = $context['entity'];
  // Load all instances of the fields for the node.
  $instances = _field_invoke_get_instances('node', $node->type, array('default' => TRUE, 'deleted' => FALSE));
 
  foreach ($instances as $field_name => $instance) {
    // Set content for fields that are empty.
    if (empty($node->{$field_name})) {
      $display = field_get_display($instance, 'full', $node);
      // Do not add field that is hidden in current display.
      if ($display['type'] == 'hidden') {
        continue;
      }
      // Load field settings.
      $field = field_info_field($field_name);
      // Set output for field.
      $output[$field_name] = array(
        '#theme' => 'field',
        '#title' => $instance['label'],
        '#label_display' => 'above',
        '#field_type' => $field['type'],
        '#field_name' => $field_name,
        '#bundle' => $node->type,
        '#object' => $node,
        '#items' => array(),
        '#entity_type' => 'node',
        '#weight' => $display['weight'],
        0 => array('#markup' => ' '),
      );
    }
  }
}
?>

I hope you find it useful, and please let me know if you know a nicer way to accomplish this task.

Oct 14 2011
I would like to introduce a new module, Search API Location, that makes it possible to do spatial searches using Apache Solr. At the moment we can search by radius on the map by placing the center of the circle. You are welcome to test it on the demo site.

Technical details

The Search API Solr module is used for integration with Solr. The Search API Location module exports location data to Search API via hook_entity_property_info_alter(). Apachesolr itself uses the LocalSolr library, which should be compiled into apachesolr. You can follow the instructions in the README to compile apachesolr, or download a ready build from the demo site.

Demo site

The location search is built as a view. The Search API Location module provides an exposed filter for the radius with a slider.

I hope you enjoy it, and you are welcome to participate in developing spatial search functionality for Drupal.
Sep 22 2011

There is a more or less standard way of building slideshows in Drupal -- the Views Slideshow module.

The module works really great, and I would like to thank everyone involved in it.

But what should we do when we want to change some of its JavaScript behaviors? In my case the task was to change the image style (imagecache preset) of the active pager thumbnail. By default all thumbnails were grayscale, but we needed to make the active thumbnail colorful, like on the screenshot below.

After digging into the Views Slideshow code I found that it is extensible via widgets. So here I would like to share my experience writing my own widget for this module.

First, in our custom module, we implement hook_views_slideshow_widget_info().

/**
 * Implements hook_views_slideshow_widget_info().
 *
 * Adds widget for views_slideshow view on frontpage.
 */
function custom_views_slideshow_widget_info() {
  $widget = array();
 
  $widget['custom_change_thumbnail'] = array(
    'name' => 'Change image style of thumbnail',
    'accepts' => array(
      'transitionBegin' => array('required' => TRUE),
    ),
    'calls' => array(),
  );
 
  return $widget;
}

The transitionBegin method is triggered whenever the slide changes, no matter whether it changes automatically or the user clicks another pager thumbnail. The idea is to change the src attribute of the new active thumbnail, and in this way swap the image style. Our grayscale image style is slideshow_thumbnail; our colored thumbnail style is slideshow_thumbnail_color.

We need to add the following JavaScript to the page where our slideshow is shown.

(function($) {
 
Drupal.customChangeThumbnail = Drupal.customChangeThumbnail || {};
 
/**
 * Custom widget customChangeThumbnail reaction on transitionBegin.
 *
 * Change the src of the thumbnail images to make them color on active.
 */
Drupal.customChangeThumbnail.transitionBegin = function (options) {
  // Remove color image style from all pager image items.
  $('[id^="views_slideshow_pager_field_item_frontpage_slideshow-block"] img').each(function(){
    var src = $(this).attr('src');
    var nocolor_src = src.replace('/styles/slideshow_thumbnail_color/', '/styles/slideshow_thumbnail/');
    $(this).attr('src', nocolor_src);
  });
 
  // Change image style to color version for the next active item.
  var image = $('#views_slideshow_pager_field_item_' + options.slideshowID + '_' + options.slideNum + ' img');
  var src = $(image).attr('src');
  var color_src = src.replace('/styles/slideshow_thumbnail/', '/styles/slideshow_thumbnail_color/');
  $(image).attr('src', color_src);
}
 
})(jQuery);

Please note the correspondence between the defined widget's key, "custom_change_thumbnail", and the JS namespace, "customChangeThumbnail".

I am sure that in a similar way we can add many more effects to our slideshow when needed.

Hope you found this useful and thank you for reading.

Aug 02 2011

In this article I would like to share some notes about the architecture of the Feeds module. This is a really great module and a real "must know" when we talk about regularly importing data from external sources.

There is great documentation available at http://drupal.org/documentation/modules/feeds. Here I try to create a cheatsheet for the 7.x version of the module that gives developers a brief understanding of how things work.

Diagram is clickable

More detailed explanation

FeedsSource is the object that contains the settings of the source.

FeedsFetcher's fetch() method retrieves the content of the feed (for example, a long string received from a URL). A FeedsFetcherResult object is created from this content (stored in its raw property) and passed to FeedsParser.

FeedsParser's parse() method parses the raw content of the FeedsFetcherResult object and creates an array of items. A FeedsParserResult object is created from these items and passed on to FeedsProcessor. It is also very important to define mapping sources in the getMappingSources() method so that your items' fields are visible in the mapping.

FeedsProcessor's process() method is where the target entities get created. The idea is that we iterate through all the items passed to us in the FeedsParserResult, check each item's hash (so we don't reimport data that hasn't changed), map the fields (using the map() method) and save the target entities (using newEntity() and entitySave()).

As you might notice, we also pass the FeedsSource to each of the above methods. This is needed for tracking the progress of batch operations.

One of my recent tasks was to import data from an external source with a REST interface. The only things I needed to implement were a FeedsFetcher plugin, whose fetch() method made REST calls to my application, and a FeedsParser, where I did some data manipulation and defined the mapping sources.
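The whole fetch -> parse -> process flow can be sketched with plain stand-in classes (the Sketch* names below are invented for illustration; they are not the module's real base classes):

```php
<?php
class SketchFetcherResult {
  public $raw;
  public function __construct($raw) { $this->raw = $raw; }
}

class SketchFetcher {
  // A real FeedsFetcher::fetch() would do the HTTP/REST call here.
  public function fetch() {
    return new SketchFetcherResult('{"items":[{"title":"First"},{"title":"Second"}]}');
  }
}

class SketchParser {
  // Turn the raw string into an array of items.
  public function parse(SketchFetcherResult $result) {
    $data = json_decode($result->raw, TRUE);
    return $data['items'];
  }
}

class SketchProcessor {
  // A real FeedsProcessor would hash, map and save entities; here we
  // just collect the mapped titles.
  public function process(array $items) {
    $titles = array();
    foreach ($items as $item) {
      $titles[] = $item['title'];
    }
    return $titles;
  }
}

$fetcher = new SketchFetcher();
$parser = new SketchParser();
$processor = new SketchProcessor();
$saved = $processor->process($parser->parse($fetcher->fetch()));
print_r($saved); // array('First', 'Second')
```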

I hope this cheatsheet will be handy for people new to Feeds, and will remind developers who have already used it of the internal architecture.

The diagram was created with Dia v0.97.1.

Jul 21 2011

One of the modules I am involved in is the Selenium module. In brief, it integrates Selenium 2 with Drupal's SimpleTest framework. From the beginning it was possible to run tests in Firefox, opening pages of the SimpleTest sandbox. As we are in full control of the browser, we can test JavaScript in it. This is the main goal of the Selenium integration.

A great achievement in the work on the Selenium module is the addition of another browser to the framework -- Google Chrome. Now we can write tests that run in either Chrome or Firefox.

In order to run tests in Chrome we need to download two Selenium products: the Selenium standalone server (http://seleniumhq.org/download/) and ChromeDriver (http://code.google.com/p/chromium/downloads/list). Then we add ChromeDriver to the PATH variable and run the Selenium server. When the Selenium server receives a request to open the Chrome browser, it connects to ChromeDriver and relays all the commands through it.

Unfortunately, ChromeDriver has some limitations compared to Firefox. The main one is that ChromeDriver cannot upload files (reported as issue 1 and issue 2); I hope these issues will be solved soon. Another minor issue is that in ChromeDriver we cannot select options in a select box using the native select() method, but we can do it with click(). I am sure we will run into more differences between FirefoxDriver and ChromeDriver.

At the moment the Selenium module already implements most of the SimpleTest methods (the various asserts, drupalGet, drupalPost, etc.), so writing tests should be familiar to everyone who has written SimpleTests. But please remember that Selenium testing is completely different in nature from unit testing: it automates manual testing. So we should try to write tests as if a user were testing manually.

Example tests are available in the Selenium module itself.

More information about ChromeDriver can be found on the project's wiki.

My previous posts about Selenium:

Jun 29 2011

Today we are going to talk about the really awesome Search API module.

One of the most common tasks I get when customizing Apache Solr search results is adding new fields to the index and creating custom facets for them. So let's discuss how this is done with the Search API module.

First of all, I assume you have the Search API and Search API Solr search modules installed, a Solr node index created, and a search page in place for searching within the index.

As an example, our task is to create two additional node properties: the weekday the node was created (single-value text) and a random multi-value text property.

To make new node properties available we need to use the Entity API module's hooks. In our case all we need is hook_entity_property_info_alter().

/**
 * Implements hook_entity_property_info_alter().
 */
function example_search_api_property_entity_property_info_alter(&$info) {
  $info['node']['properties']['created_week_day'] = array(
    'type' => 'text',
    'label' => t('Week day of node creation'),
    'sanitized' => TRUE,
    'getter callback' => 'example_search_api_property_weekday_getter_callback',
  );
  $info['node']['properties']['test_multiple_field'] = array(
    'type' => 'list<text>',
    'label' => t('Test multiple text'),
    'sanitized' => TRUE,
    'getter callback' => 'example_search_api_property_random_text_getter_callback',
  );
}

In the code above we define two new node properties: the first is single-value and the second is multi-value.

The getter callbacks can be implemented as follows:

/**
 * Getter callback for created_week_day property.
 */
function example_search_api_property_weekday_getter_callback($item) {
  return format_date($item->created, 'custom', 'D');
}
 
/**
 * Getter callback for multiple field.
 */
function example_search_api_property_random_text_getter_callback($item) {
  $strings = array('one', 'two', 'three', 'four', 'five');
  $number = rand(1, 5);
 
  $values = array();
  while ($number > 0) {
    $values[] = $strings[rand(0, 4)];
    $number--;
  }
 
  return $values;
}

After enabling our module we can go to the Fields page of our index and find the two new text fields.

After creating facets for these fields and adding the facet blocks to our search page, we will see the facets working.

Now you can see how easy it is to add new entity properties in Search API. If we create a custom field, we should inform the Entity API about it in a similar way. As an example, see the patch to add location field data.

You are welcome to download our example module below.

Jun 26 2011

In this article I would like to share another interesting task connected with uploading files in D7.

The task is to have a custom form that allows uploading multiple files and creates a zip archive from these files at the end.

I really liked the way Gmail handles its attached-files form and tried to implement similar behavior, but without writing custom JavaScript. Thanks to the Form API and the #ajax property, this is very manageable. So let's dive into the code!

Big thanks to the Examples module, which helped me with an example of dynamically adding form elements.

/**
 * Form builder.
 */
function example_zip_file_form($form, &$form_state) {
  // Init num_files and uploaded_files variables if they are not set yet.
  if (empty($form_state['num_files'])) {
    $form_state['num_files'] = 1;
  }
  if (empty($form_state['uploaded_files'])) {
    $form_state['uploaded_files'] = array();
  }
 
  $form['file_upload_fieldset'] = array(
    '#type' => 'fieldset',
    '#title' => t('Uploaded files'),
    // Set up the wrapper so that AJAX will be able to replace the fieldset.
    '#prefix' => '<div id="uploaded-files-fieldset-wrapper">',
    '#suffix' => '</div>',
  );
 
  for ($i = 0; $i < $form_state['num_files']; $i++) {
    // Show upload form element only if it is new or
    // it is not unset (name equal to FALSE).
    if (
      !isset($form_state['uploaded_files']['files']['name']['file_upload_' . $i]) ||
      (isset($form_state['uploaded_files']['files']['name']['file_upload_' . $i]) && $form_state['uploaded_files']['files']['name']['file_upload_' . $i] !== FALSE)) {
      $form['file_upload_fieldset']['file_upload_' . $i] = array(
        '#type' => 'file',
        '#prefix' => '<div class="clear-block">',
        '#size' => 22,
        '#theme_wrappers' => array(),
      );
      $form['file_upload_fieldset']['file_upload_remove_' . $i] = array(
        '#type' => 'submit',
        '#name' => 'file_upload_remove_' . $i,
        '#value' => t('Remove file'),
        '#submit' => array('example_zip_file_remove'),
        '#ajax' => array(
          'callback' => 'example_zip_file_refresh',
          'wrapper' => 'uploaded-files-fieldset-wrapper',
        ),
        '#suffix' => '</div>',
      );
 
      // If file already uploaded we add its name as prefix.
      if (    isset($form_state['uploaded_files']['files']['name']['file_upload_' . $i])
          && !empty($form_state['uploaded_files']['files']['name']['file_upload_' . $i])) {
        $form['file_upload_fieldset']['file_upload_' . $i]['#type'] = 'markup';
        $form['file_upload_fieldset']['file_upload_' . $i]['#markup'] = t('File: @filename', array('@filename' => $form_state['uploaded_files']['files']['name']['file_upload_' . $i]));
      }
    }
  }
 
  // Add new button.
  $form['add_new'] = array(
    '#type' => 'submit',
    '#value' => t('Add another file'),
    '#submit' => array('example_zip_file_add'),
    '#ajax' => array(
      'callback' => 'example_zip_file_refresh',
      'wrapper' => 'uploaded-files-fieldset-wrapper',
    ),
    '#limit_validation_errors' => array(),
  );
 
  // Submit button.
  $form['submit'] = array(
    '#type' => 'submit',
    '#value' => t('Create zip archive from uploaded files'),
  );
 
  return $form;
}

First of all, we keep information about already uploaded files in the $form_state['uploaded_files'] variable, and the number of file form elements to show in $form_state['num_files'].

On the form we have two ajaxified buttons, "Add another file" and "Remove file". Their submit functions should add or remove a file form element and preserve the information about already uploaded files. And, of course, mark the form to be rebuilt.

/**
 * Callback for Remove button.
 *
 * Remove uploaded file from 'uploaded_files' array and rebuild the form.
 */
function example_zip_file_remove($form, &$form_state) {
  $form_state['uploaded_files'] = example_zip_file_array_merge($form_state['uploaded_files'], $_FILES);
  $file_to_remove_name = str_replace('_remove', '', $form_state['clicked_button']['#name']);
  $form_state['uploaded_files']['files']['name'][$file_to_remove_name] = FALSE;
  $form_state['rebuild'] = TRUE;
}
 
/**
 * Add new form file input element and save uploaded files to 'uploaded_files' variable.
 */
function example_zip_file_add($form, &$form_state) {
  $form_state['num_files']++;
  $form_state['uploaded_files'] = example_zip_file_array_merge($form_state['uploaded_files'], $_FILES);
  $form_state['rebuild'] = TRUE;
}

The function example_zip_file_array_merge() merges arrays recursively. It was not possible to use array_merge_recursive() here, as it creates a nested array element instead of replacing the value. To make this clear, let's look at the example from the official documentation page:

$ar1 = array("color" => array("favorite" => "red"), 5);
$ar2 = array(10, "color" => array("favorite" => "green", "blue"));
$result = array_merge_recursive($ar1, $ar2);
print_r($result);

This leads to the following result:

Array
(
    [color] => Array
        (
            [favorite] => Array
                (
                    [0] => red
                    [1] => green
                )

            [0] => blue
        )

    [0] => 5
    [1] => 10
)

But we need:

Array
(
    [color] => Array
        (
            [favorite] => green
            [0] => blue
        )

    [0] => 5
    [1] => 10
)
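The merge helper itself is not shown in the article, but a minimal sketch of the replace-instead-of-nest behavior could look like this (string-keyed values are replaced, numeric keys are appended as array_merge() does; the real helper might additionally skip empty values so an empty browser upload does not overwrite a stored name):

```php
/**
 * Recursively merges two arrays, replacing values instead of nesting them.
 *
 * Unlike array_merge_recursive(), a string-keyed value in the second array
 * replaces the matching value in the first array; numeric keys are appended.
 */
function example_zip_file_array_merge(array $first, array $second) {
  foreach ($second as $key => $value) {
    if (is_int($key)) {
      // Numeric keys are appended, as array_merge() does.
      $first[] = $value;
    }
    elseif (isset($first[$key]) && is_array($first[$key]) && is_array($value)) {
      // Both sides hold arrays: descend and merge them.
      $first[$key] = example_zip_file_array_merge($first[$key], $value);
    }
    else {
      // Replace the value instead of creating a nested array.
      $first[$key] = $value;
    }
  }
  return $first;
}
```

Running it on the documentation example above produces exactly the "But we need" result.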

The AJAX callback of the two buttons mentioned above, example_zip_file_refresh(), simply returns the fieldset with the file form elements of the rebuilt form.

/**
 * AJAX callback. Retrieve proper element.
 */
function example_zip_file_refresh($form, $form_state) {
  return $form['file_upload_fieldset'];
}

Now let's take a look at the form submit handler, where we create the Zip archive.

/**
 * Form submit handler.
 */
function example_zip_file_form_submit($form, &$form_state) {
  // Merge uploaded files.
  $form_state['uploaded_files'] = example_zip_file_array_merge($form_state['uploaded_files'], $_FILES);
  $_FILES = $form_state['uploaded_files'];
 
  // Walk through the files and save the uploaded ones.
  $uploaded_files = array();
  foreach ($_FILES['files']['name'] as $file_key => $value) {
    // Skip files that were removed (their name is set to FALSE).
    if ($value === FALSE) {
      continue;
    }
    if ($file = file_save_upload($file_key)) {
      $uploaded_files[] = $file;
    }
  }
 
  // Create Zip archive from uploaded files.
  $archive_uri = 'temporary://download_' . REQUEST_TIME . '.zip';
  $zip = new ZipArchive;
  if ($zip->open(drupal_realpath($archive_uri), ZipArchive::CREATE) === TRUE) {
    foreach ($uploaded_files as $file) {
      $zip->addFile(drupal_realpath($file->uri), $file->filename);
    }
    $zip->close();
    drupal_set_message(t('Zip archive successfully created. !link', array('!link' => l(file_create_url($archive_uri), file_create_url($archive_uri)))));
  }
  else {
    drupal_set_message(t('Error creating Zip archive.'), 'error');
  }
}

So that's it. Now we can let users create their own Zip archives.

In my task I had to build a custom form that is part of a multistep form. In an ideal situation I would probably let the user submit a node creation form with an unlimited-value filefield to handle all the AJAX file uploads for me, and then add a hook_node_presave() implementation that creates the Zip archive and adds it to another filefield.
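That alternative could look roughly like the sketch below. To be clear, the content type "archive" and the field names field_source_files and field_zip_archive are hypothetical, chosen only for illustration:

```php
/**
 * Implements hook_node_presave().
 *
 * Sketch: when an "archive" node is saved, zip the files from one filefield
 * and attach the archive to another. Content type and field names here are
 * hypothetical.
 */
function example_zip_node_presave($node) {
  if ($node->type != 'archive' || empty($node->field_source_files[LANGUAGE_NONE])) {
    return;
  }
  $archive_uri = 'public://archive_' . REQUEST_TIME . '.zip';
  $zip = new ZipArchive();
  if ($zip->open(drupal_realpath($archive_uri), ZipArchive::CREATE) !== TRUE) {
    return;
  }
  foreach ($node->field_source_files[LANGUAGE_NONE] as $item) {
    $file = file_load($item['fid']);
    $zip->addFile(drupal_realpath($file->uri), $file->filename);
  }
  $zip->close();

  // Register the archive as a managed file and attach it to the node.
  $archive = new stdClass();
  $archive->uri = $archive_uri;
  $archive->filename = drupal_basename($archive_uri);
  $archive->filemime = 'application/zip';
  $archive->uid = $node->uid;
  $archive->status = FILE_STATUS_PERMANENT;
  $archive = file_save($archive);
  $node->field_zip_archive[LANGUAGE_NONE] = array(array('fid' => $archive->fid));
}
```

This way the filefield widget does all the AJAX upload handling, and the hook only has to assemble the archive.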

But this example better shows how AJAX forms work in a real-life task. I hope you find it interesting and useful.

You are welcome to test the module attached to this article.

Jun 17 2011

One of the new features of Drupal 7 core is the new File API. I would like to show how easy it has become to use the private file system.

Our task is quite simple: a user should be able to upload private files that only they have access to download. We will also add a permission allowing the "administrator" role to download any private file that users have uploaded.

So let's start. First of all, we create a separate page with the form and define our custom permissions.

/**
 * Implements hook_menu().
 */
function example_file_menu() {
  $items = array();
 
  $items['example_file'] = array(
    'title' => 'Upload private file.',
    'page callback' => 'drupal_get_form',
    'page arguments' => array('example_file_upload_form'),
    'access arguments' => array('upload private files'),
    'type' => MENU_NORMAL_ITEM,
  );
 
  return $items;
}
 
 
/**
 * Implements hook_permission().
 */
function example_file_permission() {
  return array(
    'upload private files' => array(
      'title' => t('Upload private files'),
    ),
    'download own private files' => array(
      'title' => t('Download own private files'),
    ),
    'download all private files' => array(
      'title' => t('Download all private files'),
    ),
  );
}
 
/**
 * Private file upload form.
 */
function example_file_upload_form($form, &$form_state) {
  $form = array();
 
  $form['private_file'] = array(
    '#type' => 'file',
    '#title' => t('Choose a file'),
  );
 
  $form['submit'] = array(
    '#type' => 'submit',
    '#value' => t('Upload as private file'),
  );
 
  return $form;
}

Now we need to create a submit handler where we save the file as private.

/**
 * Submit handler for private file upload form.
 */
function example_file_upload_form_submit($form, &$form_state) {
  $file = file_save_upload('private_file', array(), 'private://');
  if ($file) {
    drupal_set_message(t('Thank you for uploading private file. You can download it from @url',
            array('@url' => file_create_url($file->uri))));
  }
}

The key function is file_save_upload(). Here we define how we would like to save the file, as well as the validators (the second argument). In this implementation we use the standard validators, but you are of course welcome to look into the details of this function and define your own rules.
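For example, custom rules could be passed like this (a sketch only: the extension list and the size limit here are arbitrary choices, not requirements):

```php
// Restrict uploads to documents up to 2 MB before saving them to the
// private file system. The extension list and size limit are arbitrary.
$validators = array(
  'file_validate_extensions' => array('pdf doc docx txt'),
  'file_validate_size' => array(2 * 1024 * 1024),
);
$file = file_save_upload('private_file', $validators, 'private://');
```

Each key is the name of a validation callback, and the array value holds the extra arguments passed to it after the file object.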

After we have saved the file, we need to define an access callback for downloading it. This is done in hook_file_download().

/**
 * Implements hook_file_download().
 */
function example_file_file_download($uri) {
  // Get the file record based on the URI. If not in the database just return.
  $files = file_load_multiple(array(), array('uri' => $uri));
  $file = NULL;
  if (count($files)) {
    foreach ($files as $item) {
      // Since some database servers sometimes use a case-insensitive comparison
      // by default, double check that the filename is an exact match.
      if ($item->uri === $uri) {
        $file = $item;
        break;
      }
    }
  }
  // If the file is not in the database, we do not control access to it.
  if (!$file) {
    return;
  }
 
  global $user;
  if (($file->uid == $user->uid && user_access('download own private files'))
    || user_access('download all private files')) {
    // Access is granted.
    $headers = file_get_content_headers($file);
    return $headers;
  }
}

And that's it. All that is left is to set up the private file system folder, and we can start testing this code.

It is really great that we can build such functionality this easily. Taking the opportunity, I would like to thank all the people who were involved in building it.

You are welcome to download this module below.
