Oct 20 2019

Supercharge Drupal 8 migration process by externalizing data export and transformation.

Nuvole offers a training at DrupalCon Amsterdam: "Drupal 8 Migration as a process" - register by October 27.

When you have to import data into a Drupal 8 site there is little choice but to rely on core's Migrate API and its contrib ecosystem. The Migrate API in Drupal 8 implements a rather classic Extract, Transform, Load (ETL) process, with the following "Drupal-lingo" twists:

  • The extract phase is called "source": it uses a source plugin to read data from external systems, be it a Drupal 7 database, a CSV file, a REST web service, etc.
  • The transform phase is called "process": it uses process plugins to transform the extracted data.
  • The load phase is called "destination": it uses destination plugins to import data into specific Drupal 8 entity storages (e.g. nodes, taxonomy terms, etc.).

To recap, Drupal core implements the ETL process as follows:

  • Source plugins extract the data from the source.
  • Process plugins transform the data.
  • Destination plugins save the data as Drupal 8 entities.
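As a sketch, here is how the three phases map onto a minimal migration definition. The plugin names and source fields below are illustrative: the url source plugin, for instance, comes from the contrib Migrate Plus module, and the URL and field names are hypothetical.

```yaml
# Hypothetical migration definition: plugin names, URL and fields
# are illustrative, not taken from a real project.
id: news_import
label: 'Import news items from an external JSON export'
source:
  # "url" is provided by the contrib Migrate Plus module.
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls: 'https://example.com/export/news.json'
  item_selector: data
  fields:
    - name: id
      selector: id
    - name: title
      selector: title
  ids:
    id:
      type: string
process:
  title: title
destination:
  plugin: 'entity:node'
  default_bundle: news
```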

Limitations of standard Drupal 8 ETL process

In the process described above the three steps are executed sequentially, at the same moment in time, every time we run a migration import. In this scenario we lose one of the most valuable aspects of an ETL process: testing and validating data prior to import.

Nor does the process above easily accommodate multiple data sources being consolidated into one coherent dataset: with the Drupal Migrate API this is only possible by chaining source plugins, and it can only run on the destination site. This is quite a limitation in complex enterprise scenarios, where displayed data is often the result of complex and convoluted transformations in the backend.

A middle-format approach

At Nuvole we have adopted a so-called "middle-format approach": the Extract and Transform phases are simply moved outside Drupal, so as to ease the production of an easy-to-import dataset in a well-known middle format (such as JSON API).
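For instance, a single news item in a JSON API-style middle format could look like the following (type, field names and values are purely illustrative):

```json
{
  "data": [
    {
      "type": "news",
      "id": "1",
      "attributes": {
        "title": "Example news item",
        "body": "Transformed and validated body text."
      }
    }
  ]
}
```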

This approach has proved very successful in complex scenarios and, while fully leveraging the standard Drupal 8 migration process, it also allows you to:

  1. Aggregate data from different sources (not only Drupal 7 databases)
  2. Test the data transformation process
  3. Test imported Drupal 8 data
  4. Easily automate the points above

The process looks like the following:

The approach outlined above brings the following benefits:

  • Exported data can be reviewed by the client and iteratively refined
  • Since Drupal 8 is not a precondition for exporting and transforming data, data export and site building can run in parallel, making the whole migration process much more efficient
  • Exported data can be presented to the different stakeholders using a user-friendly UI, even before starting any Drupal 8 development
  • Since the data uses a well-known middle format, building the import process (as Drupal core Migrate plugins) is straightforward and maximizes code reusability

Middle-format migration at work

At Nuvole we have successfully used the middle-format approach to incrementally transform, review and import data over several years, on complex sites such as the World Food Programme main website. In that scenario we had to consolidate data coming from two Drupal 7 sites (plus a number of external data sources) in 13 different languages, over a two-year period.

We have recently open-sourced a simplified version of the tool we used to export and transform data; you can find its boilerplate code here.

We will run a hands-on session on how to use this tool to plan and execute data exports in our upcoming DrupalCon Amsterdam training.

Jan 23 2017

Expose atomic UI patterns as plugins and use them as drop-in templates in your Drupal 8 projects

Update: We have set up a Gitter chat room if you have any specific questions about the module's functionality: Gitter

Establishing an atomic design system in your project is one of the most effective ways to build a consistent and maintainable user interface.

Over the past years projects like PatternLab or the Component Libraries module aimed at lowering the cost of maintaining (PatternLab) and re-using (Component Libraries) UI patterns in your projects, be it a generic PHP application or a brand new Drupal 8 site.

But, as we all know, when it comes to presenting content the Drupal 8 landscape is quite diverse: you can layout your pages using Panels or Views, style your entities using Display Suite view modes, group your fields with Field group, etc.

Such diversification can surely present some challenges when it comes to reusing a well-designed and consistent UI library. In other words: how can I transparently use the same UI pattern in my views, layouts, field formatters, etc.?

Enter the UI Patterns module

The UI Patterns module allows you to define and expose self-contained UI patterns as Drupal 8 plugins and use them seamlessly as drop-in templates for panels, field groups, views, Display Suite view modes and field templates.

The module also generates a pattern library page that can be used as documentation for content editors or as a showcase for business and clients (the following example is styled using the Bootstrap theme):

The UI Patterns module also easily integrates with tools like PatternLab or modules like Component Libraries.

Define your patterns

Patterns can be defined using YAML in files named MY_MODULE.ui_patterns.yml or MY_THEME.ui_patterns.yml using the following format:

  blockquote:
    label: Blockquote
    description: Display a quote with attribution information.
    fields:
      quote:
        type: text
        label: Quote
        description: Quote text.
        preview: Life is like riding a bicycle. To keep your balance, you must keep moving.
      attribution:
        type: text
        label: Attribution
        description: Quote attribution.
        preview: Albert Einstein

After defining the pattern you have to provide a Twig template to render it, which in our case could look like this:

  <p>{{ quote }}</p>
  <footer>{{ attribution }}</footer>

Once you are done you can visit the pattern library page and check your new Blockquote pattern in action:

Many more options are available to make sure the pattern definition fits your use case (e.g. template overrides, etc.): make sure you check the documentation for the full list.
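Patterns can also be rendered programmatically via a render array. As a sketch, assuming the pattern render element exposed by the module, the Blockquote pattern defined above could be built like this:

```php
// Sketch: renders the Blockquote pattern defined above through the
// "pattern" render element exposed by the UI Patterns module.
$build = [
  '#type' => 'pattern',
  '#id' => 'blockquote',
  '#fields' => [
    'quote' => t('Life is like riding a bicycle. To keep your balance, you must keep moving.'),
    'attribution' => t('Albert Einstein'),
  ],
];
```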

Use your patterns everywhere

After exposing your patterns you are ready to use them anywhere thanks to the sub-modules bundled with UI Patterns.

Example: style links as call-to-action buttons

One of the most common situations is styling a group of links as call-to-action buttons. This can be easily achieved using UI Patterns.

Say we have defined the following Button pattern:

button:
  label: Button
  description: A simple button.
  fields:
    label:
      type: text
      label: Label
      description: The button label.
      preview: Submit
    url:
      type: text
      label: URL
      description: The button URL.
      preview: http://example.com
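As with the Blockquote example, the Button pattern needs a companion Twig template. A minimal sketch (the markup and CSS classes are illustrative) could be:

```twig
<a class="btn btn-primary" href="{{ url }}">{{ label }}</a>
```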

On the entity display settings page we access the link field settings by clicking on the gear icon:

Then, after selecting the Pattern field template and the Button pattern, we map the link field columns to the pattern's fields defined above:

Each value of our multi-valued link field will then be formatted using the Button pattern, as shown below:


The UI Patterns module aims at integrating your pattern library with the most used Drupal 8 rendering systems. It also makes it easy to use third-party tools such as PatternLab.

The project is currently under active maintenance, please file issues and/or support requests using this GitHub repository.

P.S. Special thanks to aleksip for getting the integration with PatternLab and the Component Libraries to work!

IMPORTANT UPDATE: If you are using version 8.x-1.0-beta2 make sure you read this change note for a safe upgrade to newer versions.

Sep 19 2016

Make the most out of your Behat tests by using custom contexts, dependency injection and much more.

This post is an excerpt from the topics covered by our DrupalCon Dublin training: Drupal 8 Development - Workflows and Tools.

At Nuvole we consider writing good tests a fundamental part of development and, when it comes to testing a complex site, there is nothing better than extensive behavioral tests using Behat. The benefits of such a choice are quite obvious:

  • Tests are very easy to write.
  • Behat scenarios serve as a solid means of communication between business and developers.

As a site grows in complexity, however, the default step definitions provided by the excellent Behat Drupal Extension might not be specific enough, and you will quickly find yourself adding custom steps to your FeatureContext or creating custom Behat contexts, as advocated by the official documentation.

This is all fine, except that your boilerplate test code might soon grow into a non-reusable, untested pile of code.

Enter Nuvole's Behat Drupal Extension.

Nuvole's Behat Drupal Extension

Nuvole's Behat Drupal Extension is built on the shoulders of the popular Behat Drupal Extension and it focuses on step re-usability and testability by allowing developers to:

  • Organize their code in services by providing a YAML service description file, pretty much as we are all used to doing nowadays with Drupal 8.
  • Override default Drupal Behat Extension services with their own.
  • Benefit from many ready-to-use contexts provided by the extension out of the box.

Installation and setup

Install Nuvole's Behat Drupal Extension with Composer by running:

$ composer require nuvoleweb/drupal-behat

Set up the extension by following the Quick start section on the original Behat Drupal Extension page; just use NuvoleWeb\Drupal\DrupalExtension instead of the native Drupal\DrupalExtension in your behat.yml, as shown below:

default:
  suites:
    default:
      contexts:
        - Drupal\DrupalExtension\Context\DrupalContext
        - NuvoleWeb\Drupal\DrupalExtension\Context\DrupalContext
  extensions:
    Behat\MinkExtension:
      goutte: ~
    # Use "NuvoleWeb\Drupal\DrupalExtension" instead of "Drupal\DrupalExtension".
    NuvoleWeb\Drupal\DrupalExtension:
      api_driver: "drupal"
      services: "tests/my_services.yml"
      text:
        node_submit_label: "Save and publish"

"Service container"-aware Contexts

All contexts extending \NuvoleWeb\Drupal\DrupalExtension\Context\RawDrupalContext and \NuvoleWeb\Drupal\DrupalExtension\Context\RawMinkContext are provided with direct access to the current Behat service container. Developers can also define their own services by adding a YAML description file to their project and setting the services: parameter to point to its current location (as shown above).

The service description file can describe both custom services and override already defined services. For example, given a tests/my_services.yml containing:

services:
  your.own.namespace.hello_world:
    class: Your\Own\Namespace\HelloWorldService

Then all contexts extending \NuvoleWeb\Drupal\DrupalExtension\Context\RawDrupalContext or \NuvoleWeb\Drupal\DrupalExtension\Context\RawMinkContext will be able to access that service by simply calling:

class TestContext extends RawDrupalContext {

  /**
   * Assert service.
   *
   * @Then I say hello
   */
  public function assertHelloWorld() {
    // Fetch the custom service defined in tests/my_services.yml from the
    // Behat service container. The sayHello() method is just an example.
    $this->getContainer()->get('your.own.namespace.hello_world')->sayHello();
  }

}

The your.own.namespace.hello_world service class itself can be easily tested using PHPUnit. Also, since Behat uses Symfony's Service Container, you can list the services your service depends on as arguments, so as to remove any hardcoded dependency, following Dependency Injection best practices.
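For instance, a PHPUnit test for the hypothetical HelloWorldService could be as simple as the following sketch (the sayHello() method and its return value are assumptions, not part of the extension):

```php
use PHPUnit\Framework\TestCase;
use Your\Own\Namespace\HelloWorldService;

class HelloWorldServiceTest extends TestCase {

  public function testSayHello() {
    // The service is plain PHP: no Drupal or Behat bootstrap is needed.
    $service = new HelloWorldService();
    $this->assertSame('Hello world!', $service->sayHello());
  }

}
```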

Override existing services

Say that, while working on your Drupal 7 project, you defined a step that publishes a node given its content type and title, and you want to use the exact same step on your Drupal 8 project; something like:

Given I publish the node of type "page" and title "My page title"

The problem here is that the actual API calls to load and save a node differ between Drupal 7 and Drupal 8.

The solution is to override the default driver core services by specifying your own classes in your tests/my_services.yml:

parameters:
  # Overrides Nuvole's Drupal Extension Drupal 7 core class.
  drupal.driver.cores.7.class: Your\Own\Namespace\Driver\Cores\Drupal7
  # Overrides Nuvole's Drupal Extension Drupal 8 core class.
  drupal.driver.cores.8.class: Your\Own\Namespace\Driver\Cores\Drupal8

services:
  your.own.namespace.hello_world:
    class: Your\Own\Namespace\HelloWorldService

You'll then delegate the core-specific business logic to the new core classes, allowing your custom step to run transparently on both Drupal 7 and Drupal 8. Such a step would look like:

class TestContext extends RawDrupalContext {

  /**
   * @Given I publish the node of type :type and title :title
   */
  public function iPublishTheNodeOfTypeAndTitle($type, $title) {
    $this->getCore()->publishNode($type, $title);
  }

}
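A corresponding Drupal 8 core class could then implement the actual business logic. The following is a sketch: the publishNode() method name comes from the step above, while the parent class namespace and the loading logic are illustrative assumptions.

```php
namespace Your\Own\Namespace\Driver\Cores;

// Assumption: extend the extension's Drupal 8 core class.
use NuvoleWeb\Drupal\DrupalExtension\Driver\Cores\Drupal8 as OriginalDrupal8;

class Drupal8 extends OriginalDrupal8 {

  /**
   * Publish a node given its content type and title (sketch).
   */
  public function publishNode($type, $title) {
    // Illustrative Drupal 8 API calls to load and publish the node.
    $nodes = \Drupal::entityTypeManager()->getStorage('node')
      ->loadByProperties(['type' => $type, 'title' => $title]);
    if ($node = reset($nodes)) {
      $node->setPublished(TRUE);
      $node->save();
    }
  }

}
```

The Drupal 7 counterpart would implement the same publishNode() signature using node_load_multiple() and node_save() instead.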

Ready to use contexts

The extension also provides some utility contexts that you can use right away in your tests. Below is a quick overview of what's currently available:

  • DrupalContext: the standard Drupal context. You want to use this one next to (and not instead of) Drupal\DrupalExtension\Context\DrupalContext.

  • ContentContext: perform operations on content.

  • CKEditorContext: interact with CKEditor components on your page.

  • ResponsiveContext: resize the browser according to a set of named devices, useful for testing responsive behaviors. Devices are specified in the context configuration, for example:

      mobile_portrait: 360x640
      mobile_landscape: 640x360
      tablet_portrait: 768x1024
      tablet_landscape: 1024x768
      laptop: 1280x800
      desktop: 2560x1440

  • PositionContext: check the position of elements on the page.

  • ChosenContext: interact with Chosen elements on the page.

We will share more steps in the future, enriching the current contexts as well as providing new ones, so keep an eye on the project repository!


At the moment only Drupal 8 is supported, but we will add Drupal 7 support as soon as possible (yes, it's as easy as providing the missing Drupal 7 driver core methods and adding tests).

Mar 24 2015

Write more complete Behat test scenarios for both Drupal 7 and Drupal 8.

One of the main goals of BDD (Behaviour Driven Development) is to be able to describe a system's behavior using a single notation, so that it is directly accessible to product owners and developers and testable using automatic conversion tools.

In the PHP world, Behat is the tool of choice. Behat allows you to write test scenarios using Gherkin step definitions and maps them to the corresponding PHP code that actually runs and tests the defined scenarios.

Thanks to the excellent Behat Drupal Extension, Drupal developers have been able to enjoy the benefits of Behaviour Driven Development for quite some time.

Essentially the project provides an integration between Drupal and Behat, allowing the usage of Drupal-specific Gherkin step definitions. For example, a scenario that tests node authorship would look like:

Scenario: Create nodes with specific authorship
  Given users:
  | name     | mail            | status |
  | Joe User | [email protected] | 1      |
  And "article" content:
  | title          | author   | body             |
  | Article by Joe | Joe User | PLACEHOLDER BODY |
  When I am logged in as a user with the "administrator" role
  And I am on the homepage
  And I follow "Article by Joe"
  Then I should see the link "Joe User"

Dealing with complex content types

The Gherkin scenario above is pretty straightforward and gets the job done for simple cases. In real-life situations, though, it's very common to have content types with a high number of fields, often of different types and possibly referencing other entities.

The following scenario might be a much more common situation for a Drupal developer:

Scenario: Reference site pages from within a "Post" node
  Given "page" content:
    | title      |
    | Page one   |
    | Page two   |
    | Page three |
  When I am viewing a "post" content:
    | title                | Post title         |
    | body                 | PLACEHOLDER BODY   |
    | field_post_reference | Page one, Page two |
  Then I should see "Page one"
  And I should see "Page two"

While it is always possible to implement project-specific step definitions, as shown in this Gist dealing with field collections and entity references, having to do that for every specific content type might be an unnecessary burden.

Introducing field-handling for the Behat Drupal Extension

Nuvole recently contributed a field-handling system that allows the scenario above to run out of the box, without having to implement any custom step definition, working in both Drupal 7 and Drupal 8. The idea behind it is to let a Drupal developer work with fields when writing Behat test scenarios, regardless of the entity type or of any field-specific implementation.

The code is currently available on the master branches of both the Behat Drupal Extension and the Drupal Driver projects. If you want to try it out, follow the instructions at "Stand-alone installation" and make sure to grab the right code by specifying the right package versions in your composer.json file:

  {
    "require": {
      "drupal/drupal-extension": "3.0.*@dev",
      "drupal/drupal-driver": "1.1.*@dev"
    }
  }
The field-handling system provides an integration with several widely used field types:

Date fields

Date field values can be included in a test scenario by using the following notation:

  • A single date field value can be expressed as 2015-02-08 17:45:00.
  • Start and end dates are separated by a dash (-), e.g. 2015-02-08 17:45:00 - 2015-02-08 19:45:00.
  • Multiple date field values are separated by a comma (,).

For example, the following Gherkin notation will create a node with 3 date fields:

When I am viewing a "post" content:
  | title       | Post title                                |
  | field_date1 | 2015-02-08 17:45:00                       |
  | field_date2 | 2015-02-08 17:45:00, 2015-02-09 17:45:00  |
  | field_date3 | 2015-02-08 17:45:00 - 2015-02-08 19:45:00 |

Entity reference fields

Entity reference field values can be expressed by simply specifying the referenced entity's label field (e.g. the node's title or the term's name). This approach keeps BDD's promise: describing the system behavior while abstracting away, as much as possible, any internal implementation.

For example, to reference a content item with title "Page one" we can simply write:

When I am viewing a "post" content:
  | title           | Post title |
  | field_reference | Page one   |

Or, in case of multiple values, titles will be separated by a comma:

When I am viewing a "post" content:
  | title           | Post title         |
  | field_reference | Page one, Page two |

Link fields

A link field in Drupal offers quite a wide range of options, such as an optional link title and internal or external URLs. We can use the following notation to work with links in our test scenarios:

When I am viewing a "post" content:
  | title       | Post title                                              |
  | field_link1 | http://nuvole.org                                       |
  | field_link2 | Link 1 - http://nuvole.org                              |
  | field_link3 | Link 1 - http://nuvole.org, Link 2 - http://example.com |

As you can see we always use the same pattern: a dash (-) to separate parts of the same field value and a comma (,) to separate multiple field values.

Text fields with "Select" widget

We can also refer to a select list value simply by its label, so as to have much more readable test scenarios. For example, given the following allowed values for a select field:

option1|Single room
option2|Twin room
option3|Double room

In our test scenario, we can simply write:

When I am viewing a "post" content:
  | title      | Post title               |
  | field_room | Single room, Double room |

Working with other entity types

Field handling works with other entity types too, such as users and taxonomy terms. We can easily have a scenario that creates a bunch of users with their respective fields:

Given users:
  | name    | mail                | language | field_name | field_surname | field_country  |
  | antonio | [email protected] | it       | Antonio    | De Marco      | Belgium        |
  | andrea  | [email protected]  | it       | Andrea     | Pescetti      | Italy          |
  | fabian  | [email protected]  | de       | Fabian     | Bircher       | Czech Republic |

Contributing to the project

At the moment field handling is still a work in progress and, while it supports both Drupal 7 and Drupal 8, it covers only a limited set of field types:

  • Simple text fields
  • Date fields
  • Entity reference fields
  • Link fields
  • List text fields
  • Taxonomy term reference fields

If you want to contribute to the project by providing additional field type handlers, you will need to implement this very simple, core-agnostic interface:

namespace Drupal\Driver\Fields;

/**
 * Interface FieldHandlerInterface.
 *
 * @package Drupal\Driver\Fields
 */
interface FieldHandlerInterface {

  /**
   * Expand raw field values in a format compatible with entity_save().
   *
   * @param array $values
   *   Raw field values array.
   *
   * @return array
   *   Expanded field values array.
   */
  public function expand($values);

}

If you need some inspiration, check the current handler implementations by inspecting the classes namespaced as \Drupal\Driver\Fields\Drupal7 and \Drupal\Driver\Fields\Drupal8.
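As an illustration, a minimal handler for a plain text field might look like the following sketch (the class is hypothetical, not one of the shipped handlers):

```php
namespace Drupal\Driver\Fields\Drupal8;

use Drupal\Driver\Fields\FieldHandlerInterface;

/**
 * Hypothetical handler expanding plain text values (sketch).
 */
class ExampleTextHandler implements FieldHandlerInterface {

  /**
   * {@inheritdoc}
   */
  public function expand($values) {
    $return = [];
    foreach ((array) $values as $value) {
      // Wrap each raw value under the field's main "value" property,
      // the format expected when saving the entity.
      $return[] = ['value' => $value];
    }
    return $return;
  }

}
```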

Jul 15 2014

Bringing "reusable features" to Drupal 8.

This is a preview of Nuvole's training at DrupalCon Amsterdam: An Effective Development Workflow in Drupal 8.

Configuration Management in Drupal 8 elegantly solves staging configuration between different environments, addressing an issue that still haunts even the most experienced Drupal 7 developers. In earlier posts we covered the new Configuration Management in Drupal 8, seeing how it compares to Drupal 7 and Features, and even investigated how to manually simulate Features in Drupal 8 last year. Recent developments and contrib modules can take us several steps closer.

Packaging configuration

Developers familiar with code-driven development practices need an equivalent in Drupal 8 + Drush 7 to what the Features module does in Drupal 7 with its features-update and features-revert Drush commands.

While Drupal 8 configuration staging capabilities are far more advanced than what Features could possibly provide, what the new Configuration Management system really lacks is the ability to package configuration.

Enter the Configuration development module

The Configuration development module, currently maintained by chx, serves two main purposes:

  • It automates the import of specified configuration files into the active storage.
  • It automates the export of specified configuration objects into files.

The module offers a simple, global UI where a Drupal developer can set which configuration is automatically exported and imported any time they hit the "Save" button on a configuration settings page.

In order to achieve more modular configuration packaging it would be enough to set a specific module's config/install directory as the actual export destination.

Nuvole contributed a patch (EDIT: now integrated) to make that possible: instead of firing an auto-export every time a "Save" button is clicked, the developer can specify in the module's info file which configuration needs to be written back to that module's install directory, and run a simple Drush command to do so.

Reusable “features” in Drupal 8

One of the main advantages of having a standardized way of dealing with configuration is that modules can now stage configuration at installation time. In a way, that's something very close to what Features allowed us to do in Drupal 7.

Say we have our news section up and running on the site we are currently working on, and we would like to package it into a custom module, together with some other custom code, and ship it to a new project. The patched Configuration development module will help us do just that. Here is how:

Step 1: Download, patch and enable Configuration development module

We need to download and enable the Configuration development module and apply the patch (EDIT: already integrated as of 8.x-1.0-alpha7) attached to this Drupal.org issue.

After rebuilding the cache, we will have the config-writeback Drush command available. Let's have a closer look at what it is meant to do:

$ drush help config-writeback

Write back configuration to a module's config/install directory. State which configuration settings you want to export in the module's info file by listing them under 'config_devel', as shown below:

  config_devel:
    - entity.view_display.node.article.default
    - entity.view_display.node.article.teaser
    - field.instance.node.article.body

Examples:
  drush config-writeback MODULE_NAME        Write back configuration to the specified module, based on .info file.

Arguments:
  module                                    Module machine name.

Aliases: cwb

Step 2: Find what configuration needs to be packaged

We now look for all configuration related to our site's news section. In Drupal 8 most of the site configuration is namespaced by its related component so, if we keep using consistent naming conventions, we can easily list all news-related configuration by simply running:

$ drush config-list | grep news


Step 3: Package configuration

To package all the settings above we will create a module called custom_news and, in its info file, specify all the settings we want to export, listing them under the config_devel: directive, as follows:

$ cat modules/custom_news/custom_news.info.yml

name: Custom News
type: module
description: 'Custom news module.'
package: Custom
core: 8.x
config_devel:
  - entity.form_display.node.news.default
  - entity.view_display.node.news.default
  - entity.view_display.node.news.teaser
  - field.instance.node.news.body
  - image.style.news_medium
  - menu.entity.node.news
  - node.type.news

After enabling the module we will run:

$ drush config-writeback custom_news

And we will have all our settings exported into the module’s install directory:

$ tree -L 3 modules/custom_news/

├── config
│   └── install
│       ├── entity.view_display.node.news.default.yml
│       ├── entity.view_display.node.news.teaser.yml
│       ├── field.instance.node.news.body.yml
│       ├── image.style.news_medium.yml
│       ├── menu.entity.node.news.yml
│       └── node.type.news.yml
└── custom_news.info.yml

The Drush command above takes care of clearing all sensitive UUID values, making sure that the module will cleanly stage the exported configuration once enabled on a new Drupal 8 site.

To get the news section on another site we will just copy the module to the new site's ./modules/ directory and enable it:

$ drush en custom_news

The following extensions will be enabled: custom_news
Do you really want to continue? (y/n): y
custom_news was enabled successfully.     

Final evaluation: Drupal 7 versus Drupal 8

One of the main differences between working in Drupal 7 and in Drupal 8 is represented by the new Configuration Management system.

While Features proposed a one-stop solution for both configuration staging and packaging, Drupal 8 CM does a better job at keeping them separate, allowing developers to take greater control over these two distinct and, at the same time, complementary aspects of a solid Drupal development workflow.

By using the method described above we can upgrade our comparison table between Drupal 7 and Drupal 8 introduced in one of our previous posts as follows:

Functionality                          | D7 Core | D7 Core + Features | D8 Core (current) | D8 Core (current) + Config Devel
---------------------------------------|---------|--------------------|-------------------|---------------------------------
Export full site config (no content)   | NO      | NO                 | YES               | YES
Export selected config items           | NO      | YES                | YES               | YES
Track config changes (full site)       | NO      | NO                 | YES               | YES
Track config changes (selected items)  | NO      | YES                | YES               | YES
Stage configuration                    | NO      | YES                | YES               | YES
Package configuration                  | NO      | YES                | NO                | YES
Reuse configuration in other projects  | NO      | YES                | NO                | YES
Collaborate on the same project        | NO      | YES                | NO                | NO

The last "NO" deserves a brief explanation: Configuration Management allows two developers to work simultaneously on different parts of the same project if they are very careful, but "merging" the work has to be done by version control (Git or similar), which doesn't know about YAML or Drupal.

Some open issues

Contributed modules seem to be the best way to enhance the core Configuration Management system, much like what happened with Drupal 7 and Features. There are still several issues that should be addressed for an optimal workflow, to match and improve on what we already have in Drupal 7:

  • Piping: the ability to relate configuration components based on both hard and logical dependencies; for example, exporting a content type should automatically also export its fields. Even if piping could at times be too rigid, it would still be useful to have in some configurable form.
  • Enhanced configuration diff: it would be useful to review what configuration is going to be installed before enabling a module, as is possible now when importing staged configuration into the active storage.
  • Granularity: it is still impossible to export part of a configuration file, so we still depend on the core conventions for grouping configuration into files, and we can't export a single permission for example.
  • Ownership: we can't know if another module (or "feature") is tracking a component we wish to track; this could be useful in the perspective of maintaining several "modular" features.
  • Updates: we can reuse configuration by simply enabling a module, but this holds only for the initial installation; after a module is enabled, we don't have a clean way to import changes (say, to "upgrade" to a newer version of the feature) outside the standard workflow foreseen in Configuration Management.

Jul 27 2012

At Nuvole we have always supported the idea that Open Atrium can deal with complex use cases. Modules like Spaces, PURL and Organic groups can push the limits of the platform far beyond simple-yet-powerful intranet software. We had already been experimenting with building public websites and simple distribution-like mini-sites with Open Atrium for quite some time, but now the new Alfa Puentes project gave us the opportunity to blend all those customizations together in one powerful platform.

These concepts will be covered in more detail in the Nuvole DrupalCon Munich Training.

The Alfa Puentes project's main aim is to enhance international cooperation between the European and Latin-American higher education environments by creating a community of teachers, rectors and other related stakeholders. The members of this community need to share data and information both online, using an intranet platform, and offline, by participating in a series of mid-size seminars and conferences spread across the two regions. Given the diversity of this community, multi-language support was also a strong requirement.

The platform is organized in three main parts: a public portal, a series of private self-managed intranet groups, and mini-sites for the organization of real-life events. Since it is developed on a single Open Atrium installation, users of the platform can have different roles in each component, e.g., a rector can start a private discussion group while a project partner may help with translations.

The portal is based on our mini-site feature set; it contains information about the project, like funding periods and partners involved. All content is available in three languages: English, Spanish and Portuguese. Site managers can easily assign content translation to partners or collaboratively work on a specific section of the website. Even though the portal is implemented as a specific Open Atrium group type, the site's look and feel does not resemble the default theme in any way: visitors perceive the portal as an independent site.

All portal content is available in three languages:


Even though it is built as a group, the portal contains highly customized sections:


In a cooperation project it is essential to provide a platform where users can easily engage in conversation and share relevant information, and nothing could be simpler with Open Atrium. Each partner can start a private discussion group and easily invite users to participate. To facilitate information sharing, each group can be equipped with our Atrium Folders feature, which provides a more user-friendly and familiar way of sharing documents and files than the built-in Notebook feature. The language of each group is set at creation time, chosen among English, Spanish and Portuguese.

Group owners can invite users to join a group by using powerful search tools:


Atrium Folders enhances the document sharing user experience:


The mini-site feature set also powers the creation and management of mini-sites for the organization of events. Each event site is an independent Open Atrium public group with a specific Spaces preset which, at site creation, enables features like static pages, a custom navigation menu pre-filled with default content, a news section and a spotlight area on the front page. Each mini-site must also provide a customizable per-event registration form which adds event-specific information (like participation in dinners, workshops, etc.) to the usual user profile data. To implement such functionality we integrated the Webform module with Open Atrium, ensuring a smooth user experience for managing this complex use case.

Managers can deploy and customize a fully fledged event website in minutes:


Users can integrate their profile with event-specific information:


The default Open Atrium user profile section has been customized to be the starting point for the user to find her way through the system. After logging in, the user is redirected to her profile page, from where she has direct access to update her personal information, reach the groups she is a member of and manage her event registrations.


Jun 05 2012

Drupal Global Training Day

Antonio De Marco, 5 Jun 2012 - 3 Comments

Nuvole organizes a free event for NGOs in Brussels

Nuvole is proud to join the Drupal Global Training Day on June 22nd.

We will give a free generic introduction to Drupal. This won't be our usual, highly technical training on streamlining Drupal development, which you may have seen at DrupalCon Chicago 2011 and DrupalCon Denver 2012 and which is also scheduled for DrupalCon Munich 2012. This time we will focus on Drupal as a platform, its community, what can be easily done with the available modules and what can be reached with a bit (or a lot!) of customization.

The target audience is mainly people who would like to assess Drupal for use in international not-for-profit organizations: Nuvole has a long history of successful projects with such organizations, and the examples we will show are all from this sector.

Topics will include:

  • A generic introduction to Drupal and its community
  • Why open-source is the best solution: low entry cost, availability of thousands of free modules, focus on customization
  • Drupal for distributed NGOs: Public websites with private working groups (Intranet) and file repositories
  • Drupal for event sites: minisites for conferences and events, easy to customize and replicate
  • Drupal for campaigns: high-impact sites for campaigns with social networks integration

The event is free to attend, and attendees are also welcome to stay for a buffet lunch. Registration is required. Please see the registration page for all details.

Feb 07 2012

An extract from the new teaching materials we are preparing for our DrupalCon Denver 2012 pre-conference training, Code-Driven Development: Use Features Effectively.

By now the advantages of a clean Code-Driven Development workflow are clear to the majority of Drupal developers; things like separation between configuration and content, packaging and deployment of new site functionalities or sharing changes in a distributed team don't look that scary anymore.

Still, not everything in Drupal can be clearly classified as either configuration or content: taxonomy terms are a case in point. Taxonomy terms are mostly used to categorize content, so they naturally end up in site configuration: a context that reacts to a given taxonomy term is a fairly common scenario. But taxonomy terms can also be generated by users (think of free tagging), which makes them fall into the "Content realm". Drupal, in fact, treats them as such, assigning each term a unique numeric identifier, and that's the problem.

Each component has its own name

One of the turning points in making configuration exportable to code was the introduction of a unique string identifier for each component: this way it's easy to share configuration and avoid conflicts. So what happens with taxonomy terms? If they appear in our context conditions then we'd better bundle them together: no matter where we deploy our feature, they will always have to have the same values. That's simply not possible: they have numeric IDs, which can differ from site to site, so forget about an easy life here.

Various attempts have been made to export numerically identified components, such as UUID Features Integration or the handy Default Content module, but they only solve part of the problem: we want other components to know the unique name of our terms.
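Pending a general solution, one pragmatic workaround is to resolve terms by name at runtime rather than storing tids in exported configuration. A minimal Drupal 7 sketch, where the helper name example_term_id() is ours and not part of any module:

```php
/**
 * Looks up a term ID by vocabulary machine name and term name,
 * so exported configuration never hard-codes a numeric tid.
 */
function example_term_id($vocabulary_name, $term_name) {
  $vocabulary = taxonomy_vocabulary_machine_name_load($vocabulary_name);
  foreach (taxonomy_get_term_by_name($term_name) as $term) {
    // Only accept matches from the requested vocabulary.
    if ($term->vid == $vocabulary->vid) {
      return $term->tid;
    }
  }
  return FALSE;
}
```

This keeps the numeric ID a purely local detail, at the price of treating the term name itself as the stable identifier.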

Hard and Soft configuration

At Nuvole we adopted the following terminology to classify the two kinds of settings that we deal with every day:

  • Hard configuration includes the settings under the distribution developer's control (e.g., Views or Contexts); it has a machine name, it is easy to export, it gives no headache. We store it in Features.
  • Soft configuration includes the settings that are meant to be overridden by the site administrator (e.g., the default theme, or the initial terms of a vocabulary); it often gets a unique numeric ID from Drupal, it is impossible to export safely, it is painful to handle. We store it in the Installation profile.

This distinction becomes fundamental when the configuration is altered or (if you are dealing with a distribution) when the underlying distribution is upgraded. In the case of Hard configuration, altering it results in an overridden feature, which is upgrade-unsafe. In the case of Soft configuration, altering it does not change the Features state, since the corresponding settings are stored in the Installation profile, and changes are upgrade-safe.

Not only taxonomy terms

The distinction between Hard and Soft configuration goes beyond how to conveniently export taxonomy terms: it is more a design decision, especially important when dealing with distributions. We consider everything that the site builder might be entitled to change as Soft configuration. The usual place to store Soft configuration is your hook_install(); this guarantees a safe upgrade path to the next version of the distribution. An example is the default theme: you may ship your distribution with a specific theme, but the site owner might want to change it, and subsequent updates shouldn't alter it.

Here is how our profile's hook_install() might look in a Drupal 7 distribution:


/**
 * Implements hook_install().
 */
function example_install() {
  $terms = array();
  $vocabulary = taxonomy_vocabulary_machine_name_load('category');

  $terms[] = 'Solution';
  $terms[] = 'Client';
  $terms[] = 'Use case';

  foreach ($terms as $name) {
    $term = new stdClass();
    $term->vid = $vocabulary->vid;
    $term->name = $name;
    taxonomy_term_save($term);
  }

  // Enable custom theme.
  variable_set('theme_default', 'example_theme');
}


Reference for site builders

If you are customizing a distribution and you need to override some of its default Hard configuration you might want to have a look at the Features Override module and at these two related articles from Phase2 Technology:

Nov 28 2011

A single Open Atrium installation to serve minisites from different domains with a common backend

You can use a single Open Atrium installation to create minisites, i.e., minimal websites with a simple structure, served at different domains, but sharing a centralized backend. Each minisite can be public or private and can have a specific theme and specific features. An anonymous visitor will only see a small, self-contained site, while site editors will be able to manage content on all minisites from the same backend.

This is an ideal situation, for example, when an Open Atrium installation is used as a private Intranet but you want to build small auxiliary sites for conferences or events, possibly taking advantage of the fact that editors and registered users are the same as on the main Intranet.

And, best of all, this can be done with a minimal amount of coding and respecting the Open Atrium architecture.

Creating and Customizing a Minisite

Our aim is to build a feature that will allow us to create and customize a minisite in just a few minutes. We want to start from the ordinary "Create Group" page:


And this is what we get immediately upon form submission:


The minisite ships with default content (pages linked from the left side menu) and the editor can just click and add content.

We can also hook our minisite into the built-in look-and-feel customization that Open Atrium provides for its groups and let the editor choose background and foreground colors:


After a quick content editing, the editor can achieve something like this:


Screenshots are taken from Alfa Puentes, an international project funded by the EU that will focus on exchanging best practices between Europe and Latin America about higher education and that will include several events and conferences, each with a dedicated minisite.

Let's now see how to build this feature in Open Atrium.

Creating a minisite group type (spaces preset)

To start, we simply define a new spaces preset type, named "minisite": this can be done through hook_spaces_presets() using the existing definitions for public and private spaces from atrium_groups.spaces.inc as a basis.

<?php
function atrium_minisite_spaces_presets() {
  $export = array();
  $spaces_presets = new stdClass;
  $spaces_presets->disabled = FALSE;
  $spaces_presets->api_version = 3;
  $spaces_presets->name = 'minisite';
  $spaces_presets->title = 'Minisite';
  $spaces_presets->description = 'Minisite space preset.';
  $spaces_presets->space_type = 'og';

The new space preset can then be exported into a feature, and creating a minisite becomes as easy as creating a new group.


Assigning features to a minisite

Just like a standard Open Atrium group has some features available by default, we want our minisites to use a certain set of features by default. In our case, we built two features, Pages and News, and we enable them for our minisites, which will thus have static pages and news available.

<?php
// ... (follows from above) ...
$spaces_presets->value = array(
  'variable' => array(
    'spaces_features' => array(
      'atrium_blog' => 1,
      'atrium_book' => 0,
      'atrium_calendar' => 1,
      'atrium_casetracker' => 0,
      'atrium_members' => 1,
      'atrium_pages' => 1,
      'atrium_shoutbox' => 0,
      'spaces_dashboard' => 1,
      'atrium_news' => 1,
      // ... (continue along the lines of atrium_groups.spaces.inc) ...

Making minisites self-contained: menus and blocks

We want our minisites to be as self-contained as possible. To achieve that, using some standard Drupal hooks, we create a new menu for each minisite and keep it updated as soon as the respective minisite group changes. Here is one of the implemented hooks:

function atrium_minisite_nodeapi(&$node, $op, $arg = 0) {
  if (isset($node->spaces_preset_og) && atrium_minisite_is_minisite_preset($node->spaces_preset_og)) {
    if (in_array($op, array('insert', 'update', 'delete'))) {
      $menu = array();
      $menu['title'] = $node->title;
      $menu['menu_name'] = atrium_minisite_get_menu_name($node);
      $menu['description'] = $node->og_description;
      atrium_minisite_menu_api($op, $menu);
      if ($op == 'insert') {
        module_invoke_all('atrium_minisite_default_content', $node, $menu);
      }
    }
  }
}
As you can see, our minisite feature exposes an atrium_minisite_default_content() hook, which allows other modules to provide default content once a new minisite is created (e.g. "About" and "Contact" pages, etc.). Shipping each new minisite with default content allows the editor to start filling up the site straight away, without having to recreate the same content structure again and again.
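For instance, a companion module could implement this hook to ship every new minisite with an "About" page. A hedged Drupal 6 sketch, where the module name mymodule and the exact node/menu fields are illustrative assumptions:

```php
/**
 * Implementation of hook_atrium_minisite_default_content().
 *
 * Creates a default "About" page inside the newly created minisite.
 */
function mymodule_atrium_minisite_default_content($group, $menu) {
  $node = new stdClass();
  $node->type = 'page';
  $node->title = t('About');
  $node->body = t('Describe your event here.');
  $node->uid = $group->uid;
  // Attach the page to the minisite group and to its navigation menu.
  $node->og_groups = array($group->nid => $group->nid);
  $node->menu = array(
    'menu_name' => $menu['menu_name'],
    'link_title' => $node->title,
  );
  node_save($node);
}
```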

To make our minisite menu group-aware, we implement hook_block() in atrium_minisite.module. The implementation below relies on a couple of helper functions defined elsewhere.

function atrium_minisite_block($op = 'list', $delta = NULL, $edit = NULL) {
  $blocks = array();
  if ($op == 'list') {
    $blocks['primary-navigation']['info'] = t('Minisite: Primary navigation');
    $blocks['primary-navigation']['cache'] = BLOCK_NO_CACHE;
    $blocks['secondary-navigation']['info'] = t('Minisite: Secondary navigation');
    $blocks['secondary-navigation']['cache'] = BLOCK_NO_CACHE;
    $blocks['full-navigation']['info'] = t('Minisite: Full navigation');
    $blocks['full-navigation']['cache'] = BLOCK_NO_CACHE;
  }
  if ($op == 'view') {
    if (atrium_minisite_is_minisite_space()) {
      $config = atrium_minisite_get_menu_block_config($delta);
      if ($delta == 'secondary-navigation' || $delta == 'full-navigation') {
        $config['level'] = 2;
        $config['expanded'] = 1;
        $config['depth'] = 0;
      }
      if ($delta == 'full-navigation') {
        $config['level'] = 1;
      }
      $blocks = menu_tree_build($config);
    }
  }
  return $blocks;
}
This block is then specified as a context reaction, so that site visitors will see a self-contained navigation when browsing the minisite.

Making minisites themeable

Each minisite can be assigned a different theme. To do so from a convenient interface, we implement hook_form_alter() in atrium_minisite.module to let the editor select a theme for the given minisite. Again, we omit some helper functions for brevity.

/**
 * Implementation of hook_form_alter().
 */
function atrium_minisite_form_alter(&$form, &$form_state, $form_id) {
  if (isset($form['#node']) && $form_id == $form['#node']->type . '_node_form') {
    $node = $form['#node'];
    if (og_is_group_type($node->type)) {
      module_load_include('inc', 'system', 'system.admin');
      // ... (theme selection widget, helpers omitted for brevity) ...
    }
    if (atrium_minisite_is_minisite_space()) {
      if ($node->type == 'page') {
        $space = spaces_get_space();
        $conf['menu_default_node_menu'] = atrium_minisite_get_menu_name($space->group);
        $menu_name = atrium_minisite_get_menu_name($space->group);
        $menu = array($menu_name => $space->group->title);
        $form['menu']['parent']['#options'] = menu_parent_options($menu, $item);
      }
    }
  }
}

We can now choose the best theme for our minisite directly on the creation page:


Serving minisites on different domains

Each Open Atrium group gets its own URL path component; in our case the minisite will be reachable at something like http://alfapuentes.org/conference/. With a quick Persistent URL customization we can set the URL rewriting to be per domain, allowing each minisite to have its own domain. This can be achieved by:

  1. Enabling the "Domain" URL modification type from admin/settings/purl/types;
  2. Assigning to "Group space" the new modifier type on admin/settings/purl;
  3. Adding a custom URL each time we create a minisite group.

By taking advantage of the Persistent URL domain rewriting we can really make the new minisite look completely independent from the underlying Open Atrium platform:


The techniques shown here are covered in detail in our trainings; if interested, contact us for more information.

May 25 2011

Open Atrium's default set of features covers most of the common needs of an average organization, except one: a file repository with the familiar look and feel of a folder tree. While deploying customized versions of Open Atrium for different kinds of organizations we noticed that the built-in Notebook feature is still cumbersome for intranet users storing and categorizing their documents (meeting reports, minutes, etc.): they are accustomed to storing files in directories and subdirectories, not as books and child pages.

On the other hand, the technology needed for a more familiar document repository is there: Drupal's core Book module can create hierarchical relations among nodes, and files can easily be attached to nodes, so it's enough to interpret book pages as folders and their attachments as files to get the same paradigm users are familiar with.

Enter Atrium Folders: A simple file repository application

The idea behind Atrium Folders is to have a very simple file repository application with no external dependencies that users can just download, enable and start playing with. To get started, download the code directly from Nuvole's GitHub, place it under sites/all/modules/features in your Open Atrium installation and enable the new feature. Atrium Folders is shipped as an ordinary Drupal feature so, after enabling it as a module, we need to enable it in the Open Atrium group.


We can now visit the new Folders section and start adding the first folder.


As already mentioned, folders are nodes: we just need to specify a name, press Save and we are good to go. Now, say goodbye to the ordinary Drupal node interface; from now on we will manage our repository from an intuitive "filesystem browser"-like interface:


Managing your repository

Use the top menu to manage the folder currently being displayed; it contains the following five items.

Basic information: change name and description of the current folder:


Subfolders: easily add a bunch of subfolders by entering their names, one per line:


Attach files to this folder: just the reinterpretation of the good old Drupal file upload form, it gives the possibility to upload as many files as we need to the current folder:


Notifications: specify who will get a notification of the current changes (again, we don't reinvent the wheel but we integrate with the default Open Atrium notifications system):


Upload multiple files: quickly upload a ZIP archive containing several files and folders: Atrium Folders will unzip it for you and place its content in the current folder (PHP must have ZIP support enabled in order for this functionality to be available)


Rearranging folders

The Reorder tab allows you to rearrange the structure of the repository by simple drag and drop. This handy interface, borrowed from Drupal's core Book module, is also very helpful when we have to rename several folders at once.


File toolbox

Each file has a toolbox that allows you to rename, move or delete it. The toolbox uses jQuery and AJAX calls extensively to guarantee smooth user interaction: files are modified and changes take immediate effect, with no need to reload the page.


Creating your own toolbox

Atrium Folders exposes hook_folder_toolbox() to enable third party modules to expose their own toolbox. As an example, here is the default atrium_folders_folder_toolbox() implementation:

/**
 * Implementation of hook_folder_toolbox().
 */
function atrium_folders_folder_toolbox() {
  return array(
    array(
      'op' => 'move',
      'type' => 'file',
      'title' => t('Move'),
      'description' => t('Move the file to another location.'),
      'form callback' => 'atrium_folders_move_file',
      'ajax callback' => 'atrium_folders_move_file_ajax_callback',
    ),
  );
}

Drag and Drop Upload

Atrium Folders ships with built-in integration for the Drag'n'Drop Uploads module, which makes it possible to simply drag and drop files onto the upload form to attach them to a node. If the Drag'n'Drop Uploads module is enabled, a drop zone will be visible in the "Attach files to this folder" panel.


Drush integration: import an existing structure

When an organization switches to Open Atrium, in most cases it needs to migrate a shared folder to the new intranet. Atrium Folders includes the folder-import Drush command, which imports a directory from the local filesystem into a given folder of a specific Open Atrium group. We just need to specify the full path of the directory we want to import and the Drupal path of the group we want it imported into:

$ drush folder-import /tmp/meeting secretariat
Created 13 folders and 168 files.

Type $ drush help folder-import for more.

Apr 12 2011

Drupal Solutions for Public Works Monitoring and Remote Collaboration

Antonio De Marco, 12 Apr 2011 - 4 Comments

Our presentations from Drupal Government Days Brussels

The Drupal Government Days held last week in Brussels were a good occasion to showcase two solutions by Nuvole, born as services for local and European institutions.

Public Works Monitoring in Drupal

Nuvole built a dedicated platform, based on the widely used Drupal system, to make it easy for municipalities to monitor public works in real time and optionally share selected information with citizens.

Please find the Slideshare embed below.

Remote Collaboration and Institutional Intranet

Nuvole created a solution for the European Commission, based on Drupal and Open Atrium, to allow several hundred experts to discuss the harmonization of higher education in Europe in a Virtual Community. Besides the Virtual Community, several websites are created for the periodic seminars, all based on a common template and single sign-on.

Please find the Slideshare embed below.

Feb 07 2011

Open Atrium, Beyond the Intranet

Antonio De Marco, 7 Feb 2011 - 13 Comments

A new way to look at Open Atrium - From the Brussels Drupal Dev Days

Open Atrium is designed to be a powerful Drupal-based Intranet solution, but the underlying technology allows much more: creating a public portal with a totally different graphic design and complementing it with a private Intranet; creating group presets other than the default "public" and "private" groups; customizing the profile fields in a clean and modular way; and several other possibilities.

Here is the Nuvole presentation from the Drupal Dev Days in Brussels (Slideshare embed below, PDF attached to this post).

If you are interested in discussing Open Atrium from a developer's point of view at DrupalCon Chicago next month, please join the BOF.

Dec 17 2010

Open Atrium comes, out of the box, with a rather complete set of features that allows you to bootstrap a functional intranet solution in a few minutes. Still, to be actually usable in a real-life context, an Open Atrium installation must be adapted to each organization's needs and structure.

In this article we will learn how to customize an Open Atrium installation in a code-driven fashion, without hacking its core, making sure that an upgrade to a newer version will not wipe away our valuable changes; we will cover:

  1. How to add custom user profile fields;
  2. How to override Atrium's core configuration;
  3. How to create custom group types.

For a smooth reading you should be familiar with the Features and Exportables concepts. If you are not, check out our Code Driven Development blog post series.

1. How to add custom user profile fields

One of the greatest changes introduced with the Features module 1.0 stable release is that, when exporting, you can decouple CCK field definitions from the node type they belong to; in other words, CCK fields and node types can be exported to different features.

As a direct consequence of this approach we can store our custom user profile fields in a feature (call it custom_profile) that extends Atrium's core atrium_profile. Let's have a look at its .info file:

core = "6.x"
dependencies[] = "features"
dependencies[] = "text"
description = "Custom profile feature."
features[content][] = "profile-field_profile_city"
features[content][] = "profile-field_profile_country"
name = "Custom Profile"
package = "Features"
project = "custom_profile"
version = "6.x-1.0"

This makes sure that, when Open Atrium is upgraded, we will still find our custom fields (namely "City" and "Country"), since they belong to the custom_profile feature.

2. How to override Atrium's core configuration

The Features module exposes several hooks that alter the default settings of exported components, like variables, permissions, contexts, spaces, etc. Correct use of these hooks makes it possible to safely override default component settings, being sure that they will not get lost after an Atrium core upgrade.

Let's say we want to change the system's default user picture: Open Atrium stores it into a variable, so all we need to do is to implement hook_strongarm_alter() and override its value:

/**
 * Implementation of hook_strongarm_alter().
 */
function custom_profile_strongarm_alter(&$items) {
  if (isset($items['user_picture_default'])) {
    $items['user_picture_default']->value = 'sites/all/themes/custom/user.png';
  }
}
After clearing the cache we will see our new user picture appearing everywhere on the site. Same goes for the default blocks we have on the dashboard of a newly created group: we can choose, for example, to replace the "Welcome" video with the "Latest discussion" block. Here is how to do that:

/**
 * Implementation of hook_spaces_presets_alter().
 */
function custom_group_spaces_presets_alter(&$items) {
  // Store a reference to our target block section.
  $blocks = &$items['atrium_groups_private']->value['context']['spaces_dashboard-custom-1:reaction:block']['blocks'];
  // Remove "Welcome" block.
  unset($blocks['atrium-welcome_member']);
  // Add "Latest discussions" block.
  $blocks['views-blog_listing-block_1'] = array(
    'module' => 'views',
    'delta' => 'blog_listing-block_1',
    'region' => 'content',
    'weight' => 1,
  );
}
After clearing the cache, all new groups (and those whose dashboard was not customized by a user) will inherit the new settings. If we now look at our features status, we'll see that several Atrium core features are marked as "Overridden":

$ drush features

Name                 Feature              Status    State       
Atrium               atrium               Enabled   Overridden  
Atrium Activity      atrium_activity      Enabled               
Atrium Blog          atrium_blog          Enabled     
Atrium Notebook      atrium_book          Enabled               
Atrium Calendar      atrium_calendar      Enabled               
Atrium Case Tracker  atrium_casetracker   Enabled               
Atrium Groups        atrium_groups        Enabled   Overridden  
Atrium Members       atrium_members       Enabled   Overridden  
Atrium News          atrium_news          Enabled               
Atrium Pages         atrium_pages         Enabled               
Atrium Profile       atrium_profile       Enabled  

This is a totally safe situation, since your configuration has been altered via code and is thus 100% upgrade-safe. To learn more about altering default hooks, have a look at Features' features.api.php file.

3. How to create custom group types

In the example above we learnt how to replace a block in the default "Private group" space preset. What if we need a completely new group type, with custom features and specific configuration, something that we cannot really map onto one of the two group types ("Public group" and "Private group") provided by Open Atrium? Easy: we just need to provide a custom space preset that describes the kind of group we need.

For example, say we want a "Portal group" to be used as the public website of our organization: it will have news and static pages and it must look quite different from the intranet. The Spaces module is powerful enough to let you turn each group into a completely different website, customizing settings, menus, theme, etc.

To do that, we start by cloning the "Controlled group" space preset into our new "Portal group" preset. We then export it to a new feature called atrium_portal, with an .info file similar to:

core = "6.x"
dependencies[] = "atrium_news"
dependencies[] = "atrium_pages"
dependencies[] = "context"
dependencies[] = "menu"
dependencies[] = "spaces"
features[context][] = "layout_portal"
features[ctools][] = "context:context:3"
features[ctools][] = "spaces:spaces:3"
features[menu_custom][] = "menu-portal"
features[menu_links][] = "menu-portal:calendar"
features[menu_links][] = "menu-portal:dashboard"
features[spaces_presets][] = "atrium_portal"
name = "atrium_portal"
package = "Features"

As you might have noticed, the file declares a dependency on two features that are not part of the Open Atrium core: atrium_news and atrium_pages, which respectively add news publishing and the usual static pages to our new portal.

Let's now have a closer look at the heart of this feature, the atrium_portal Spaces preset:

/**
 * Implementation of hook_spaces_presets().
 */
function atrium_portal_spaces_presets() {
  $export = array();
  $spaces_presets = new stdClass;
  $spaces_presets->disabled = FALSE;
  $spaces_presets->api_version = 3;
  $spaces_presets->name = 'atrium_portal';
  $spaces_presets->title = 'Portal group';
  $spaces_presets->description = 'Turn a group into a public portal. All users may view public content from this group. Users must request to join this group.';
  $spaces_presets->space_type = 'og';
  $spaces_presets->value = array(
    'variable' => array(
      'spaces_og_selective' => '1',
      'spaces_og_register' => 0,
      'spaces_og_directory' => 0,
      'spaces_og_private' => 0,
      'spaces_features' => array(
        'atrium_blog' => '0',
        'atrium_book' => '0',
        'atrium_calendar' => '1',
        'atrium_casetracker' => '0',
        'atrium_members' => '1',
        'atrium_news' => '1',
        'atrium_pages' => '1',
        'atrium_shoutbox' => '0',
        'spaces_dashboard' => '1',
      ),
      'spaces_setting_home' => 'dashboard',
      'site_frontpage' => 'dashboard',
      'menu_primary_links_source' => 'menu-portal',
      'menu_default_node_menu' => 'menu-portal',
      'designkit_color' => array(
        'background' => '#3399aa',
      ),
      'designkit_image' => array(
        'logo' => 0,
      ),
    ),
    'context' => array(
      'spaces_dashboard-custom-1:reaction:block' => array(
        'blocks' => array(
          'views-atrium_news-block_3' => array(
            'module' => 'views',
            'delta' => 'atrium_news-block_3',
            'region' => 'content',
            'weight' => 0,
          ),
          'views-calendar_listing-block_1' => array(
            'module' => 'views',
            'delta' => 'calendar_listing-block_1',
            'region' => 'right',
            'weight' => 0,
          ),
        ),
      ),
      'spaces_dashboard-custom-2:reaction:block' => array('blocks' => array()),
      'spaces_dashboard-custom-3:reaction:block' => array('blocks' => array()),
      'spaces_dashboard-custom-4:reaction:block' => array('blocks' => array()),
      'spaces_dashboard-custom-5:reaction:block' => array('blocks' => array()),
    ),
  );

  // Translatables
  // Included for use with string extractors like potx.
  t('Portal group');
  t('Turn a group into a public portal. All users may view public content from this group. Users must request to join this group.');

  $export['atrium_portal'] = $spaces_presets;
  return $export;
}

The first important customization is the set of features we enable for our new Portal preset. Since the set of features available to each group is stored in the spaces_features variable, the Spaces module can override its value when we are browsing a Portal group. Here is what we have enabled:

'spaces_features' => array(
  'atrium_blog' => '0',
  'atrium_book' => '0',
  'atrium_calendar' => '1',
  'atrium_casetracker' => '0',
  'atrium_members' => '0',
  'atrium_news' => '1',
  'atrium_pages' => '1',
  'atrium_shoutbox' => '0',
  'spaces_dashboard' => '1',
),
The same holds for all those settings that are stored in variables, like the default theme, the primary links menu, the default user picture, etc.
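For example, assuming the portal should ship with its own look, Drupal core's theme_default variable could be overridden in the same 'variable' section of the preset (the theme name below is illustrative):

```php
'variable' => array(
  // ...
  // Give Portal groups their own look by overriding the
  // site-wide default theme inside this space.
  'theme_default' => 'example_portal_theme',
  // ...
),
```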

With a space preset we can also override the default block settings of the Open Atrium dashboard. For example, for our Portal we want to show a news block in the content region and a calendar with the latest events in the right sidebar region. Here is how to do that:

'context' => array(
  'spaces_dashboard-custom-1:reaction:block' => array(
    'blocks' => array(
      'views-atrium_news-block_3' => array(
        'module' => 'views',
        'delta' => 'atrium_news-block_3',
        'region' => 'content',
        'weight' => 0,
      ),
      'views-calendar_listing-block_1' => array(
        'module' => 'views',
        'delta' => 'calendar_listing-block_1',
        'region' => 'right',
        'weight' => 0,
      ),
    ),
  ),
),

With this simple space preset we can actually turn an Open Atrium group into a real public portal for our organization.

A different way to look at Open Atrium

A stable code-driven customization, combined with the powerful Spaces module, can push Open Atrium far beyond its original private-intranet purpose. This great distribution can in fact be seen as a versatile platform able to answer the most diverse needs of an organization by providing, for example, groups that can be turned into full-featured public portals or used to organize events. With the right customization each group can also be heavily customized, letting administrators completely change its look and feel to serve the most diverse purposes.

If you are interested in learning more about the topic covered in this blog post you can check out our DrupalCon Chicago 2011 session proposal here.

Jul 11 2010

Code driven development session proposal at DrupalCon Copenhagen 2010

Antonio De Marco

11 Jul 2010



What code driven development is all about.

At Nuvole the adoption of code driven development has completely revolutionized our workflow, making it more maintainable, solid and scalable. Projects like Features and the Chaos tool suite are at the heart of it, even though much still depends on how you actually use them: good code conventions (see the Kit project), well defined feature boundaries and other good practices can really make the difference at the end of the day.

Therefore, Nuvole is proposing the session "Code Driven Development: a 100% DB-free development workflow" at DrupalCon Copenhagen 2010, with the aim of sharing good design decisions, code snippets and anything else related to a purely database-free development workflow. More specifically, the session will cover:

  • database driven vs. code driven approach in a nutshell
  • what code driven development is all about: Features, Strongarm, Drush, exportables...
  • coding conventions: make the world a better place
  • your friends hook_install() and hook_update_N()
  • the controller feature: one feature to rule them all
  • lots of tips & tricks to take back home

If you are interested in the topic, help this session get through the selection process by voting for it.

Jul 05 2010

The Features module is great for keeping configuration in code: with just a few clicks you can bundle your settings together and be sure that changes will be tracked and safely dumped to code at the next feature update.

However, you will often want to share other kinds of modifications with your team. Imagine you suddenly need to add a taxonomy vocabulary or to enable a couple of modules your feature will depend on. Since your team's workflow is completely database-free, you cannot pass those changes along by simply sharing your database. Still, you need to push them to the other developers, who have already enabled the feature you have been modifying in their development environments. How do you do that?

Make your workflow solid: meet hook_update_N()

Features are modules and modules can have their own upgrade path by implementing hook_update_N(). That's exactly what you need.

All the changes the Features module does not keep track of must be stored in sequential implementations of hook_update_N(), to be sure that the other developers will have them replicated in their databases by simply visiting update.php.

Have a look at the following example: we want to add a free-tagging taxonomy vocabulary to our feature. First of all we need to enable the Taxonomy module and add it as a dependency in the feature's .info file:

dependencies[] = "taxonomy"

Adding a dependency will not enable the new module on the other developers' local copies, since the feature has already been enabled and dependencies will not be rechecked. To be sure the module will be enabled we have to write an update function:

/**
 * Enabling Taxonomy module.
 */
function feature_example_update_6001() {
  $return = array();
  drupal_install_modules(array('taxonomy'));
  $return[] = array('success' => TRUE, 'query' => 'Enabling Taxonomy module.');
  return $return;
}

Now it's time to create our free-tagging vocabulary. Since taxonomy vocabularies are not yet supported by the Features module, we have to find another way to put this change in code. One way to do it is to create the vocabulary in an update function:

/**
 * Create "Tags" vocabulary.
 */
function feature_example_update_6002() {
  $return = array();
  $vocab = array(
    'name' => 'Tags',
    'multiple' => 0,
    'required' => 0,
    'hierarchy' => 0,
    'relations' => 0,
    'weight' => 0,
    'nodes' => array('story' => 1),
    'tags' => TRUE,
    'help' => t('Enter tags related to your post.'),
  );
  taxonomy_save_vocabulary($vocab);
  $return[] = array('success' => TRUE, 'query' => 'Create "Tags" vocabulary.');
  return $return;
}

We can now commit our changes. Once the other developers pull the latest version of our feature in their local environment all they need to do is to run a Drupal update by visiting update.php.

The comment on top of each update function plays an important role in the workflow, since both drush updatedb and update.php display it when updates are pending, giving valuable information to the other team members:

$ drush updatedb
The following updates are pending:

feature_example module              
6001 - Enabling Taxonomy module.
6002 - Create "Tags" vocabulary.

Do you wish to run all pending updates? (y/n):

Make your features database free: meet hook_install()

Storing your changes in hook_update_N() will allow your team to always be up-to-date with the latest development of your feature, making your development workflow solid and maintainable. But what happens if a new developer wants to hop in? Our rule of thumb is "No database sharing", hence the new developer needs to install our project from scratch and, still, we need to guarantee that they will get the complete, latest status of the project. To do that we need to copy some of the configuration we have been placing in update functions to hook_install():

/**
 * Implementation of hook_install().
 */
function feature_example_install() {
  $vocab = array(
    'name' => 'Tags',
    'multiple' => 0,
    'required' => 0,
    'hierarchy' => 0,
    'relations' => 0,
    'weight' => 0,
    'nodes' => array('story' => 1),
    'tags' => TRUE,
    'help' => t('Enter tags related to your post.'),
  );
  taxonomy_save_vocabulary($vocab);
}

Structural and development updates

As you might have noticed, hook_install() copies code from feature_example_update_6002() and not from feature_example_update_6001(). This is because the two updates have a different nature: 6002 is a structural update, meaning that it is something we must guarantee even when the feature is installed from scratch; 6001 is a development update, which only aims to upgrade an already working development copy.

When writing your upgrade paths, it's good practice to distinguish between these two kinds of updates and, in the case of a structural update, to make sure you copy the changes to the hook_install() of your feature.

Real life examples

Upgrade and install hooks are really powerful: you can put virtually anything in there. Below we list some real-life examples of upgrade paths we use in our projects at Nuvole.

Add a menu and menu items

/**
 * Implementation of hook_install().
 */
function master_site_install() {
  // Create a custom menu called "Manage Content".
  db_query("INSERT INTO {menu_custom} (menu_name, title, description)
            VALUES ('%s', '%s', '%s')",
    'menu-manage-content', // Menu machine name (illustrative).
    'Manage Content',
    'Manage your site content.');

  // Add "Home" menu item to the Primary Links menu.
  $item = array();
  $item['link_title'] = t('Home');
  $item['link_path'] = '<front>';
  $item['menu_name'] = 'primary-links';
  $item['weight'] = -10;
  menu_link_save($item);
}

Add OpenId to admin account

/**
 * Add Nuvole OpenID to admin account.
 */
function nuvole_site_update_6004() {
  $return = array();
  // Delete any other association of the OpenID account to avoid conflicts.
  $return[] = update_sql("DELETE FROM {authmap}
                          WHERE authname = 'http://nuvole.myopenid.com/'");
  // Bind the OpenID account to the admin user.
  $return[] = update_sql("INSERT INTO {authmap} (uid, authname, module)
                          VALUES (1, 'http://nuvole.myopenid.com/', 'openid')");
  return $return;
}

Upgrading data from OpenLayers 1.x to 2.x

/**
 * Upgrade OpenLayers 1.x to 2.x: WKT data in "content_type_opera".
 */
function publicopera_site_update_6007() {
  // OpenLayers 2.x stores WKT values as a geometry collection:
  // update the data accordingly.
  db_query("UPDATE {content_type_opera} SET field_opera_map_openlayers_wkt =
            CONCAT('GEOMETRYCOLLECTION(', field_opera_map_openlayers_wkt, ')')");
  return array(array(
    'success' => TRUE, 'query' => 'All WKT content updated.'));
}
May 28 2010

First Code Driven Development meet-up in Antwerp

Antonio De Marco

28 May 2010



Sharing tips and best practices.

Yesterday night the first "Code Driven Development" meet-up took place at the Krimson office in Antwerp (Belgium). Around 30 people attended the meeting, most of them to learn more about this new "hype". Nuvole shared its experience in the field by presenting "First Steps in Code Driven Development", both an introduction and a collection of tips and examples for a quick start.

For more read our introductory blog post on Code Driven Development.

Straight after the presentation, people took part in an Open Space about the topic.


(Picture by netsensei)

Nuvole chose the following two sessions:

  • S.O.S. Features: a beginner session about features and code driven development, where we had the chance to answer people's more specific questions about the workflow, showcasing real-life examples from Nuvole's projects.
  • Base Features: a session focused on feature re-usability, followed by a brainstorming about how features should be designed in order to be more extensible and re-usable.

Then time was up and I had a train to catch. Thanks everybody for a great, inspiring evening and, especially, to all the Krimson folks for putting this together.

May 24 2010

Building a website using Drupal is essentially about tuning and extending its core functionalities to meet the project's specific requirements. During its development phase we create content types with CCK, display them using Views and extend the basic functionalities by setting up contributed modules. All these actions are recorded into the database in the form of a very heterogeneous collection of settings, each specific to the module it belongs to.

When the development phase is completed, the database probably contains, besides all our valuable configuration, other Drupal "entities" such as content, users, etc. that we don't really want to carry along once the website goes live: a careful and time-consuming cleanup is then necessary.

With time our website naturally grows in size; it contains much more content and several users are active on it: soon it might be necessary to extend its functionality. How do we do that when both settings and content are valuable and nothing can be lost during development?

The immediate solution could be to work on a copy of the production database and restore it afterwards. This, obviously, means that activities on the production website have to be suspended until the new development is completed. Additionally, a pre-live clean-up will probably be necessary in this case too.

When a team of developers works on the same project the issues multiply: if each developer keeps and works on his or her own database dump, the risk of overriding each other's changes is high. Another option is to work all together on the same shared database, hoping that nobody will mess things up.

It is clear that relying on the database makes the development flow really error prone, not easily scalable and definitely difficult to maintain.

Settings in code: the key to Drupal heaven

Keeping track of changes, working concurrently on the same project and resolving conflicts are all issues that are usually addressed by the adoption of a version control system (VCS). Unfortunately, a VCS is not that useful when it comes to configuration stored in the database: you can surely version a dump, but change tracking and conflict resolution are practically impossible.

Everything changes if configuration is stored "in code", meaning that, instead of having settings spread across different tables, they are translated into understandable PHP code that Drupal modules can easily consume. Exportable entities are not something new in the Drupal universe: modules like Views and CCK have supported exporting settings for quite some time. What is missing is a way to streamline the export/import operations so as to, eventually, switch from a database-driven to a code-driven development.

All this is today possible thanks to the Features project. This project introduces the concept of a feature as a collection of settings and configuration that describes a very well defined functionality. In doing so it implements a very solid system for keeping track of changes that occur to settings "owned" by a particular feature, making it easy to dump them to code. That's basically it: the rest is all about wisely organizing our project in features, which keeps all our settings safely in code. At this point, by using our favorite VCS, changes become traceable and conflicts resolvable, exactly as happens with standard PHP code.

Introducing #codepower

Projects like Features, the Chaos tool suite or Strongarm are at the heart of the code driven revolution, even though a lot still depends on how we actually use this technology: good code conventions, well defined feature boundaries and other good practices can really make the difference at the end of the day.

What we really need is to share each other's best practices and develop together patterns for solid code driven development. For this reason Nuvole introduced, at Drupal VolCon in Antwerp, the #codepower Twitter tag. The idea is simple: we will tweet and tag with #codepower whatever we deem interesting to share with the community regarding code driven development. We ask you to do the same.

The Movement

Today we all have the chance to push Drupal even further by changing the way we work and develop applications with it. At Nuvole the adoption of code driven development has completely revolutionized our workflow, making it more maintainable, solid and scalable. For this reason we want to contribute our experience with code driven development back to the community with a series of blog posts, starting on the occasion of the first event on the matter: "Developer Session: Code Driven Development", on the 27th of May 2010 in Antwerp (Belgium), hosted by the great guys at Krimson.

Apr 13 2010

Keep things neat.

I bet every Drupal developer keeps a Drupal installation as a local sandbox where they try out code snippets, maintain modules or generate patches to be contributed back to the community. If not well maintained, this installation will, day after day, grow into a chaotic mess of content and settings. Having a separate Drupal installation for each project we work on would be a good solution here.

"Divide et impera" with Git

Git is one of the rising stars in the version control system world and the main candidate to replace the obsolete Drupal CVS infrastructure. After installing Git, setting up a repository is a piece of cake; in your project root, type:

$ git init

and there you go: you are now ready to commit, branch, merge or revert your sandbox code. Having your code under version control is generally a good practice: you can be sure that nothing will really get lost and, more importantly, you can branch your code according to what you are working on, to help keep things in order. To create a branch using Git, type:

$ git branch my_module

You can list all your branches by typing git branch; you will be working on the master branch by default:

$ git branch
  my_module
* master

To actually move the development to that branch type:

$ git checkout my_module

After applying your changes you can commit to the branch and move back to master to do something else. You can learn more about a normal Git workflow here; for more information about branching have a look at Basic Branching and Merging.
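If you want to try the branching flow end-to-end, the whole cycle can be sketched in a throwaway repository; the path, branch name and file name below are just for illustration:

```shell
# Create a throwaway sandbox repository (path and names are illustrative).
rm -rf /tmp/drupal-sandbox
mkdir -p /tmp/drupal-sandbox && cd /tmp/drupal-sandbox
git init -q
git symbolic-ref HEAD refs/heads/master   # make sure the default branch is "master"
git -c user.name="Dev" -c user.email="dev@example.com" \
  commit -q --allow-empty -m "Initial sandbox commit"

# Branch off for module work, commit there, then come back to master.
git branch my_module
git checkout -q my_module
echo "<?php // work on the module" > my_module.module
git add my_module.module
git -c user.name="Dev" -c user.email="dev@example.com" \
  commit -q -m "Working on my_module."
git checkout -q master
```

After the last checkout the my_module.module file disappears from the working tree: it only exists on the my_module branch.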

You can branch your sandbox per module, appending version information if you wish, and everything will be safely separated. On my local sandbox the situation looks like this:

$ git branch
* master
  node_widget-6.x-1.x-dev
  openlayers_geocoder-6.x-2.x-dev

The sandbox is divided per module and version: this division makes it easy to change the development focus quickly, just by checking out different branches. Let's say a new bug needs to be fixed in the 2.x branch of the OpenLayers Geocoder module; we type:

$ git checkout openlayers_geocoder-6.x-2.x-dev

A CVS checkout of the module is available under sites/all/modules/devel/openlayers_geocoder/ (CVS metadata are also committed to Git). After fixing the bug we run:

$ git commit -a -m "Fixing a bug."

and we are now ready to focus on something else. What we have shown so far is not going to work correctly, for a very simple reason: code is not (yet) king in Drupal; the database is. All the settings and content you use to develop and test your code are, obviously, stored in the database, so what we need to do is dump and commit the database as well:

$ git checkout master
$ drush sql-dump --result-file="sites/all/database/development.sql"
$ git add sites/all/database/development.sql
$ git commit -a -m "Adding dump."

This must be done in the master branch before branching, so that we always carry our dump along. After switching branch (e.g. git checkout my_module) we must remember to restore the dump in order to have our Drupal sandbox work correctly. In other words, the database restore must happen after every checkout.
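To make the manual steps harder to forget, they can be wrapped in a small shell helper. This is just a sketch: the function name is ours, it assumes Drush is available, and the dump path must match the one your post-checkout hook restores.

```shell
# Sketch: dump the database, commit the dump, then switch branch.
# The function name and dump path are illustrative; Drush must be installed.
dump_and_switch() {
  drush sql-dump --result-file="sites/all/database/development.sql" || return 1
  git add sites/all/database/development.sql
  git commit -m "Updating database dump." sites/all/database/development.sql
  git checkout "$1"
}
```

Called as dump_and_switch my_module, it refuses to switch branch if the dump fails.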

Automation with Drush and Git hooks

Git, like other version control systems, exposes a series of hooks. Hooks are shell scripts that are run when a specific Git event occurs. The Git hooks are located under the .git/hooks directory:


We are interested in post-checkout.sample: to activate a hook we just need to remove the .sample extension. In order to automate our dump restore, we paste the following code into post-checkout:


#!/bin/sh
# $3 is equal to 1 in case of a branch switch or after git clone, 0 otherwise.
if [ "$3" = "1" ]; then
  echo "Restoring database."
  drush sql-cli < sites/all/database/development.sql
  echo "Clearing cache."
  drush cc all
  echo "Done."
fi

This bash script restores the database and clears the cache. We are now able to switch from one branch to another carrying along all our content, settings, etc.:

$ git checkout node_widget-6.x-1.x-dev
Checking out files: 100% (933/933), done.
Switched to branch 'node_widget-6.x-1.x-dev'
Restoring database.
Clearing cache.
'all' cache was cleared

$ git checkout openlayers_geocoder-6.x-2.x-dev
Switched to branch 'openlayers_geocoder-6.x-2.x-dev'
Restoring database.
Clearing cache.
'all' cache was cleared

On the other hand, dumping and committing the database before switching branch is not automatic, since we want to control which changes need to stay: sometimes you will just check out a branch, make a quick test and then want to forget about it.

Getting to the speed of light: Git bash completion

If you attach code version information or other details to your branch names, you will end up with really long ones. If you don't want to type them in full all the time, get to the speed of light with Git bash completion.
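A minimal sketch for your ~/.bashrc, assuming the completion script ships with your Git package; the path below is illustrative and varies by distribution:

```shell
# Load Git's bash completion if present (path is system-dependent).
if [ -f /usr/share/bash-completion/completions/git ]; then
  . /usr/share/bash-completion/completions/git
fi
```

With completion loaded, a few characters and a Tab are enough to expand even the longest branch name.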

Apr 07 2010

The OpenLayers suite represents a breakthrough among Drupal mapping solutions. The module allows you to combine maps coming from different providers (Google Maps, Yahoo! and many others) and, by using services like CloudMade or MapBox, it is finally possible to have maps perfectly integrated into the website's look and feel. In order to input geospatial information using OpenLayers we have to enable the OpenLayers CCK module: its input widget provides a map where users can easily mark the desired location.

Unfortunately, this approach does not scale to more complex scenarios. Imagine a website where users need to geo-locate restaurants in different cities: the input process turns into a series of very tedious drag-and-zoom operations. Ideally, the only information users should need to provide is the address or the name of the location they are looking for.

Google Geocoding Web Service

The Google Geocoding Web Service V3 is a very powerful service that accepts full street addresses, location names and even well-known places (i.e. airports, train stations, etc.), returning a complete set of information about the location we are looking for. For instance, let's have a closer look at the web service response for Pizzeria da Vittorio, Rome.
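The request itself is a plain HTTP GET against the service's JSON endpoint. The exact URL below is our own reconstruction, shown only to illustrate the shape of a V3 request:

```shell
# Reconstruction of a Google Geocoding V3 request URL (illustration only).
BASE="http://maps.googleapis.com/maps/api/geocode/json"
QUERY="address=Pizzeria+da+Vittorio,+Rome&sensor=false"
echo "${BASE}?${QUERY}"
```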


The JSON response looks like this:

  "status": "OK",
  "results": [ {
    "types": [ "point_of_interest", "establishment" ],
    "formatted_address": "Pizzeria da Vittorio di Martino Enzo, Via Benedetto Croce, 123, 00142 Rome, Italy",
    "address_components": [ {
      "long_name": "Pizzeria da Vittorio di Martino Enzo",
      "short_name": "Pizzeria da Vittorio di Martino Enzo",
      "types": [ "point_of_interest", "establishment" ]
    }, {
      "long_name": "123",
      "short_name": "123",
      "types": [ "street_number" ]
    }, {
      "long_name": "Via Benedetto Croce",
      "short_name": "Via Benedetto Croce",
      "types": [ "route" ]
    }, {
      "long_name": "Rome",
      "short_name": "Rome",
      "types": [ "locality", "political" ]
    }, {
      "long_name": "Rome",
      "short_name": "RM",
      "types": [ "administrative_area_level_2", "political" ]
    }, {
      "long_name": "Lazio",
      "short_name": "Lazio",
      "types": [ "administrative_area_level_1", "political" ]
    }, {
      "long_name": "Italy",
      "short_name": "IT",
      "types": [ "country", "political" ]
    }, {
      "long_name": "00142",
      "short_name": "00142",
      "types": [ "postal_code" ]
    } ],
    "geometry": {
      "location": {
        "lat": 41.8428020,
        "lng": 12.4858480
      "location_type": "APPROXIMATE",
      "viewport": {
        "southwest": {
          "lat": 41.8344890,
          "lng": 12.4698406
        "northeast": {
          "lat": 41.8511139,
          "lng": 12.5018554
    "partial_match": true
  } ]

As you can see, the web service tells us pretty much everything we want to know about the location. It's time to use all this awesomeness to make the user's life shine again!

Meet OpenLayers Geocoder

OpenLayers Geocoder provides a new input widget for OpenLayers CCK fields that makes location spotting a fast and painless experience. All we need to do is to select the OpenLayers Geocoder input widget from within the CCK field setting page.


After enabling the OpenLayers Geocoder widget, adding a restaurant is all about providing its name or address: OpenLayers Geocoder will provide the user with a list of possible locations.


Get the most out of the web service response

We have only used the geospatial information from the web service response so far: latitude and longitude to center the map and the boundary box to nicely fit the desired location. Besides that, the response contains really valuable information about the location we were looking for, like postal code, administrative area, city, country, etc.

OpenLayers Geocoder makes it possible to automatically fill CCK text fields on the node submission form with data coming from the response object. This is possible thanks to the integration with the Token module: all address parts are exposed and ready to be used as replacement patterns. To enable the autofilling we have to visit the OpenLayers CCK field settings page and map which token is going to fill which text field.


When we look for a place, information like city, country, etc. will be automatically copied into the selected text fields.


This gives us great flexibility: we can display our nodes by city using Views or search through them using Faceted Search.

Future development: Reverse geocoding

Another very interesting feature of the Google Geocoding Web Service is the possibility to perform reverse geocoding. As the name suggests, this time the user spots a location on the map and the web service returns the closest address to the given point. OpenLayers Geocoder will support reverse geocoding in one of its next releases.

About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web

Evolving Web