Feb 01 2015

You have built an application with a taxonomy or options field that defines more values than are actually used after release. And these fields are used as exposed filters in a View. This means you end up with options in an exposed filter that yield no results when selected - not good UI behaviour, and confusing for the end user. This is where the Views Selective Filters module comes in handy: it limits the exposed filter options to only those present in the result set.

There are many possible use cases for the Views Selective Filters module, and in this post I will walk you through some basic usage.

First of all you need to install and enable the module. Unfortunately, there is a bug in Views that needs to be fixed before this module will work (Issue #2159347). The patch has already made it into Drupal 8 (thanks to david_garcia and claudiu_cristea for their effort - go buy them a beer!), but the D7 backport still needs to be committed.

Now we will be creating a view in which we will display a list of all the content in the site, and an exposed filter to filter by content type.

Go to /admin/structure/views and select "Create a new View":

Now give a name to your view and select "Continue and Edit":

Change the default display type to Table, and add a Content Type field that is excluded from display:

Go to the Add Filter option. After enabling Views Selective Filters, every filter in the system gets an additional version with the "(selective)" suffix. Look for the "Content Type (selective)" filter:

Now you must configure the filter:

  • Display Field (1): This is the field (previously added) that will be used to populate the exposed filter options. This field can be hidden or visible, and even rewritten. Only fields that are compatible with the selected filter will be shown here.
  • Sort (2): Pick ascending or descending alphabetical sort. When using an Options field, you will see an option to sort using the original Options definition order.
  • Limit (3): If you configure your view improperly and the field used to populate the filter's options returns too many results, you can crash the view. To prevent that, a default limit of 100 is set for the exposed options. For example, if you follow the steps in this post using the Node ID field and filter, and you have 2,000 nodes on your site, you would get 2,000 elements in the exposed filter (good luck avoiding an out-of-memory error...).

Save the filter and you are ready to go! You now have an exposed filter that will only show the content types that are being used. If one of your content types has no content, it will not appear in the exposed filter.

Final Words

This is a very simple - and lightweight - yet helpful module that you probably will need on any website if you are using views that expose taxonomy or options fields.

There is only one caveat with this module: the number of available filters effectively doubles. Other modules such as internationalization (i18n) have a similar effect, so for a single field you can end up with several filter versions (original, translated, format, selective). But this is only an issue on sites with a very large number of fields, and even then it just means a very long (and slow) list of filters when adding a new filter to a view - nothing to really worry about.

The Views Selective Filters module is maintained by the Drupal team at Sabentis, the authors of the www.DrupalOnWindows.com website. If you need professional services or partners, these guys handle more than 300,000 learners per year on their corporate e-learning platforms, and you can visit them at their Barcelona (Spain) or León (Mexico) offices.

Jan 31 2015

Here is a known fact - it's really easy to break the sites you are building. One wrong line of code, and a page is returning a 503 error.

Here is a known secret - (almost) nobody is doing QA. Since I'm not into arguing about this, I'm willing to soften it a bit to "most companies don't do proper QA".

The reasons are pretty clear - not enough time and not enough budget. This post isn't about the importance of QA - that point is clear to everybody - but rather offers realistic tips and tools that will let you start improving the quality of your projects, and may actually save you some time and money.

Let's start by agreeing that developers can never do proper QA on their own features. Not because they don't care, but because they are normally rushed into their next task.

Let's also agree that a code review (which for me is even more important than QA, but that's for another post) isn't QA. It may catch bugs, but QA is about checking the entire site or application.

Nothing can replace a human doing a manual check. The trick, however, is to make sure they do it only once, and let the machine repeat that test automatically over and over again.

How to start your QA

Just start.

Seriously, no divine sign will be given. Basically, there are two approaches:

  1. You pick one person on your team who has a good eye for detail, and the enthusiasm to make sure all your applications are properly tested, and ask them to be responsible for QA. It's recommended that this person be a developer, so they truly understand what they are checking and know all the weak points.
  2. You ask all the developers to start doing some (automatic) QA.

As always, at Gizra we advocate a balanced approach. We have a couple of people responsible for QA, and the rest of the developers know they have to write at least one test per day. Why not write a test for each pull request, you ask? The answer is simple - writing tests takes time, so it's about finding the balance. Start with a test a day, and as your team advances, the automatic tests will get created more regularly.

Our two rules of thumb for automatic tests coverage are:

  1. We try to reach a 15% coverage of all the features.
  2. When it comes to access we have a 100% test coverage. Your client will thank you for it.

Practical example

I'd like to demonstrate how easy it is to start QA. Let's say you are creating a new content type on your site, and you migrate content into it. You will probably validate the migrated content manually once. Now, with minimal effort, the same test can be repeated for you on every commit.

By definition tests shouldn't be "smart" or try to do super sophisticated stuff. They should just assert a certain behavior.

# blog_post.feature
Feature: Blog post
  In order to be able to view a blog post
  As an anonymous user
  We need to be able to have access to a blog post page

  Scenario Outline: Visit blog post page
    Given I am an anonymous user
    When  I visit "<url>"
    Then  I should see the text "<text>" under the main content
    And   I should see the author "<author>"

    Examples:
    | url             | text             | author   |
    | some-url/foo    | That Lorem Ipsum | Hélène   |
    | anotehr-one/bar | Some HTML        | Diderich |
    | and-a-third-one |                  | Celine   |

Behat's "Scenario Outline" test will iterate over each row of the Examples table, visit the URL, and assert that the text or HTML exists on the page or in a specific region.

The sentence Then I should see the text "<text>" under the main content gets converted into code which basically instructs Behat to look for certain HTML under the #main-content region.

/**
 * @Then I should see the text :text under the main content
 */
public function iShouldSeeTheTextUnderTheMainContent($text) {
  $this->assertElementContains('#main-content', $text);
}

Obviously we could have tested more, but assuming you had zero tests up until now, I would say it's enough to test a few items. Again, considering you didn't have anything before, having something is already a major improvement. The simple example above is, for me, proper QA!

So for the next content type, you will only need to copy the blog_post.feature file, and change it a bit to fit the new page's logic.

You know what the best part is? If you didn't have any QA in place up until today (like every other company around you), then just by having someone from your team go over your applications with a QA mindset rather than a developer mindset, and by writing a few simple automated tests, you will significantly improve your final product, not to mention the time saved on fixing regressions.

Quickly Setup Behat

Whenever we want to add Behat testing, we use the Hedley generator, as it scaffolds all the needed files and even sets up a .travis.yml. I've added some automatic tests asserting Gizra.com, which is built with Jekyll. You can grab it and try it for yourself.

One time setup:

git clone git@github.com:Gizra/Gizra.git
cd Gizra
cd behat
# By default the base URL is set to http://gizra.com
cp behat.local.yml.example behat.local.yml
# Assuming you have composer installed globally https://getcomposer.org/doc/00-intro.md#globally
composer install

Execute the tests: ./bin/behat

Not so hard, is it? Go ahead and start your QAing today!

Jan 31 2015

When I first wrote Ubercart's Cart module, we knew we were going to support both anonymous and authenticated shopping carts and checkout. The decision came at a time when there wasn't consensus around the impact of forced login on conversions, but we knew we wanted it to be optional if at all possible. Additionally, for authenticated users, we wanted to preserve items in their shopping carts so they would see the same items when logging in from multiple devices or across multiple sessions.

This resulted in a small conflict that we had to figure out how to deal with: users could have items in their authenticated shopping carts but browse the site anonymously, create a new shopping cart, and then log in. What should happen to the items in their authenticated carts vs. the items in their anonymous carts?

There are three basic resolutions: combine the shopping carts together so the user still has a single shopping cart, remove the items from the previous session and leave it up to the customer to find them again if desired, or retain the old shopping cart but ignore it until the customer has completed checkout for the current cart. In Ubercart, I chose to combine the items, but in Drupal Commerce I changed course to retain the old cart but, from the customer's point of view, treat that anonymously created cart as the current cart after login.

We got some push back for this decision, but ultimately I didn't change the default functionality of Drupal Commerce. We just made sure there was an appropriate hook (hook_commerce_cart_order_convert()) so developers could alter this behavior on a site-by-site basis as need be.

From the merchant's standpoint, the thinking behind combining carts goes that you don't want customers to forget they intended to purchase those products in the past. However, from the customer's standpoint, suddenly having additional items in the cart after logging in during the checkout process is quite jarring.

In fact, I've been bitten by this behavior when shopping online at Barnes & Noble. Weeks prior to placing an order, I had put a Wheel of Time novel in my shopping cart but eventually bought the book in store. When I came back to the site to purchase a gift for my wife, I used a login button on the checkout form to quickly reuse my previous addresses and payment details. Unbeknownst to me, the website combined my old shopping cart with my current one such that my "quick checkout" experience made me accidentally order a book I already owned! I then had to spend 30 minutes with customer service canceling the order and placing it afresh just for the book I actually wanted.

That experience confirmed in my mind we made the correct decision not to combine carts automatically. As eCommerce framework developers, we have no clue where a developer might like to integrate login during the checkout process. Best to let them decide if it's safe to do something with those previous cart items instead of silently making the decision for them.

That said, I believe we can improve the experience even further. Right now, Drupal Commerce retains the old shopping cart order, and after the customer completes checkout they'll see the previous shopping cart as their current cart. This can be confusing as well!

My ideal situation would likely be a user interface component on the shopping cart page where customers can see items they had added to their carts in previous sessions, giving them the option to add those products to their current carts. If they decide not to, I don't see any harm in then just deleting those historical carts and moving on.

There's always room for improvement. Smile

Photo credit: alphageek

Jan 31 2015

Sometimes, diving in to help with something in an open source project can leave you feeling stupid, lost and confused. Generally, you'll find you are not alone. Sharing the problem, and the solution when you find it, can help build your own understanding, and might help others too. So, just in case I'm not the only one feeling lost and confused about why the path / route / link issue in Drupal is so complex, I thought I'd share some of my confusion and a little ray of light that might help unravel this tangle of related terminology.

In the Drupalverse, we use IRC to connect with each other. So I popped into the channel and asked:

Can someone describe how drupal uses these terms? route, path, url, uri, link, menu item - or point me to a reference?

Angela Byron generously responded with a rough outline of definitions, which I've fleshed out a bit below with some references.


Route

"This URL goes to this PHP code, and can only be accessed by these kinds of people."
As far as I can tell, this is a relatively new concept for Drupal, with routing and controllers replacing the hook_menu system we had previously. Here are a couple of references that might be helpful if you want to build a deeper understanding.


URL

Uniform Resource Locator, e.g. "https://www.drupal.org/community". It's generally the address we use to find content on the web.


URI

Uniform Resource Identifier, often confused with URL because they are so similar. See the URI Wikipedia page for more information. I'm not sure if or how Drupal distinguishes between the use of URIs, URLs and URNs (Uniform Resource Names), but let's save that yak to shave for another day.

The Build a Module team made a video that describes the difference between a URL and a URI:
What the difference is between a URI and a URL (a Drupal how-to)


Path

The path is like a pathway to find content, e.g. admin/content. But because it can be an alias, it may not actually represent the location of a file on disk, which helps lead to some of the complexity under the hood in Drupal, and to the confusion about when to use http://example.com/blog/yakshaving, /blog/yakshaving, or node/5.
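To make the alias idea concrete, here is a toy sketch in Python (purely illustrative - this is not Drupal's implementation): the same content can be reached via its internal system path (node/5) or a human-friendly alias, and one is resolved to the other under the hood.

```python
# Toy alias table (illustrative only - Drupal keeps aliases in the database).
aliases = {
    "blog/yakshaving": "node/5",  # alias -> internal system path
}

def resolve(path):
    """Return the internal system path for an incoming request path."""
    # If no alias matches, the path is already a system path.
    return aliases.get(path, path)

print(resolve("blog/yakshaving"))  # -> node/5
print(resolve("admin/content"))    # -> admin/content (no alias defined)
```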


Link

This one seems pretty straightforward - it's the HTML markup (an <a> element) used to point to a URI or path.

Menu item

A link in a menu - which could be pointing to a route, path or URI.

Hope that helps you - it certainly helps me to lay it all out like this. And, just in case you're wondering how I fell down this rabbit hole, this relates to a series of critical issues holding up the release of Drupal 8. If you can help, please get involved, or buy a ticket in my chook raffle to help fund the Drupal 8 Accelerate initiative.

Jan 30 2015

The Drupal Association is thrilled to announce the addition of four new staff members. As part of our goal to increase Drupal adoption and provide the community with strong support and advocacy, the organization has been growing at a rapid rate over the past year. Now, we’re welcoming four new staff members into the fold. Please help us say hello to Elise, Lucia, Rachel, and Tim!

Elise Horvath, Operations Team, Operations Coordinator

Elise (EliseH1280) is joining the Operations team as an Operations Coordinator. She will manage key details of the Drupal 8 Accelerate program, manage the Drupal Store, assist Operations with any accounting needs, and assist the board of directors by managing meetings and schedules and taking meeting minutes. Prior to joining the Association, Elise worked in logistics and operations for scrum training services. When not working, Elise enjoys spending time with her fiance, watching movies, cooking and baking, riding her bike, and going to Disney World whenever she has the chance!

Lucia Weinmeister, Revenue Team, Supporter Fulfillment Coordinator

Lucia (lweinmeister) is the Association’s new Supporter Fulfillment Coordinator, and will be working with the revenue team to ensure that all our Supporting Partners, Hosting Supporters and Tech Supporters get the most out of their sponsorships. Lucia is one of three Austin, TX-based Association employees, and comes to the Association with a marketing and advertising background. Lucia was born and raised primarily in Mexico City, is fluent in Spanish, and enjoys reading, running, doing Crossfit, cooking, and chasing around her two sons, Bruce and Leon.

Rachel Rivera, Revenue Team, Junior Account Manager

Rachel (rayn1ta) grew up in the San Francisco area and spent four years living outside the US in Latin America, Asia, Africa and Europe. She has worked as a ski instructor, English teacher, and digital marketer. In addition to learning foreign languages, she enjoys yoga, hiking and scuba diving. As a Junior Account Manager with the Drupal Association's revenue team, Rachel will focus on identifying and satisfying the needs of awesome Drupal businesses.

Timothy Constien, Community Programs, DrupalCon Sponsor Fulfillment Coordinator

Tim (timconstien) is joining the Association’s Community Programs team as a DrupalCon Sponsor Fulfillment Coordinator. In this position, he will be ensuring that DrupalCon sponsors enjoy all their benefits and receive top-quality service before, during, and after the convention. Tim is a graduate of Oregon State University, and most recently worked to support the sales and marketing departments at a national radio group based in Portland. In his free time, Tim enjoys exploring: whether he is finding new pubs to shoot pool at, finding the new best food joint, exploring new tree runs to snowboard through, or road tripping to the next music festival, he is always on the go.

Please help us give a warm welcome to our four new staff members. It’s great to have you on board!

Jan 30 2015

We're always on the lookout for great sites built with Drupal Commerce, our truly flexible software that's changing the face of eCommerce one site at a time.

Pam Kerr is one of New Zealand's leading independent jewelry designers. Her company - Pam Kerr Designs - had a Shopify site that served retail customers well, but it didn't meet their growing B2B needs. With the help of Blue Fusion, a New Zealand based web design and development agency, they chose Drupal Commerce for its flexibility, power and customizable user interface.

For more information, check out the full write-up in the Drupal Commerce Showcase.

To see the Drupal Commerce sites we've spotlighted in previous weeks, view the other Spotlight Sites.

Jan 30 2015

Last Thursday - Jan 22nd - President Michael D. Higgins launched the European Year for Development at Dublin Castle, saying that "2015 is a seminal year for the future of human development". Dóchas, as national coordinator of the programme, said that they "intend to use the European Year to encourage people in Ireland to take action, and to think of themselves as change makers". Participate in the conversation at the #EYD2015 hashtag.

Live tweeting from The #EYD2015 event! Share your thoughts with us pic.twitter.com/8SFbLDtC1k

— EYD 2015 (@EYD2015) December 9, 2014

As Ireland's leading web development agency for the non-profit and charity sector, Annertech plans to play our part by continuing to make as much of our code and knowledge as possible freely available for others to use. At the same time we will continue to work with our great clients in the development sector, clients such as Oxfam Ireland and Trócaire, to ensure that they can best make use of their digital tools.

Examples of some of our contributed code for the NGO sector includes:

  • Commerce Donate - which encourages people to make a donation while making a purchase in a charity's online store
  • Webform Conditional Confirmation Messages - this lets our clients personalise confirmation/thank you pages on webforms so they can increase engagement with their users.
  • Commerce CiviCRM - a plugin that sends customer data to CiviCRM - the world's most popular open source CRM for NGOs and other non-profits
  • Commerce eCards - which creates eCards as shop products as an income generation stream for charities and others.

For knowledge sharing, we will continue to be regular and frequent attendees and speakers at open source conferences and events:

  • February, Mark and Anthony will be in attendance at DrupalCamp London, with Anthony in line to speak about "Creating WOW Factor with Drupal"
  • April, Mark and Gavin will attend Drupal Developer Days in France
  • May, we will sponsor/organise/speak at Drupal Open Days Ireland
  • August, a number of us will attend Front End United in Bristol
  • September, the whole team will travel to Barcelona for DrupalCon Europe
  • November, we will sponsor/organise/speak at Drupal Camp Ireland

Why the continued focus on open source? We believe a more open and transparent world will be a better and more secure world. We are happy to play our part to achieve this. We wish all participating people and organisations a very successful European Year for Development.

Want us to help you to help others?

Jan 30 2015

Previously: Ensuring security of funds and preserving anonymity when using Bitcoin for e-commerce

I quite often use Mollom to prevent spam submissions on contact and comment forms. It works pretty well, but some spam still gets through.

An alternative anti-spam technique is to require a Bitcoin dust transaction before an unprivileged user can POST a form. The value of such a transaction would only be about $0.001 USD. For a non-spammer this cost is fine, but for a spammer it is enough to make spamming totally uneconomical, as they need to send out millions of posts.
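A quick back-of-the-envelope check makes the asymmetry obvious, using the post's rough $0.001 figure (a sketch; the exact USD value of a dust transaction depends on the exchange rate):

```python
# Rough figure from the post: one dust transaction costs about $0.001 USD.
dust_usd = 0.001

# For a legitimate commenter, the cost is negligible:
print(f"10 posts cost ${dust_usd * 10:.2f}")            # $0.01

# For a spammer who needs to send out millions of posts, it is prohibitive:
print(f"1,000,000 posts cost ${dust_usd * 1_000_000:,.0f}")  # $1,000
```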

I created a Spam Filter sub-module in my Coin Tools project for Drupal. It can be used to require a Bitcoin payment on any form on a Drupal website.

Coin Tools already has a Bitcoin payments system. When the form is viewed, a new payment is created for the minimum amount possible. In the latest Bitcoin reference implementation the smallest standard output is 546 satoshis. However, many wallets still use the old value of 5460, so that is what is used.

The form's submit button is hidden with CSS (it still needs to be in the DOM for the form to function correctly) and a clickable QR code for the payment is put in its place. Coin Tools payments are BIP 70 compatible so a payment can either be satisfied by a direct POST from the wallet to the Drupal website, or the wallet can broadcast the transaction through the Bitcoin network (slightly slower).

Once Coin Tools has determined that the payment has been completed it will POST the form via JavaScript. If there are any validation errors the form will reload in the normal Drupal way. In this case, the submit button is no longer replaced by a QR code as it is recorded in the form state that the payment has been made.

When the form is submitted, the server also verifies that the payment has been completed.
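That double check (the client-side JavaScript submits only after payment, but the server re-verifies anyway) can be sketched like this. This is a hypothetical illustration, not Coin Tools code - PaymentStore and accept_form_submission are made-up names:

```python
class PaymentStore:
    """Minimal in-memory stand-in for the payment records the site keeps."""

    def __init__(self):
        self._completed = set()

    def mark_completed(self, payment_id):
        # Called once the Bitcoin payment is seen as complete.
        self._completed.add(payment_id)

    def is_completed(self, payment_id):
        return payment_id in self._completed


def accept_form_submission(payment_id, store):
    """Server-side gate: never trust the client-side JavaScript alone."""
    if not store.is_completed(payment_id):
        raise PermissionError("payment for this form has not completed")
    return "form accepted"


store = PaymentStore()
store.mark_completed("pay-123")
print(accept_form_submission("pay-123", store))  # form accepted
```

The point of the sketch is simply that the same completeness check runs twice: once to trigger the client-side submit, and once on the server before the submission is trusted.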

Here is a video of it in action.

Of course, this technique requires that the user has a small amount of bitcoin. For a website not targeting the Bitcoin community, it would not only stop spammers, it would stop everyone from posting. As Bitcoin usage increases, this technique will be able to become more commonplace.

Browser integration

It has been proposed before that web browsers should have Bitcoin SPV wallets built-in, e.g. for paywalls. If a payment is required an "HTTP 402 Payment Required" response would be generated. In that situation it would make sense for the browser to prompt the user before a payment is made. For the spam filter this could just happen automatically. The transaction could actually just be included as part of the POST to submit the form.

Burning Coins

Because the transactions are for such a small amount it may not be economic to spend the received funds as large miner fees would be required. It might be simpler to just generate a random Bitcoin address for each payment. This means that you don't have to have a wallet on the server and could just use Chain to check if the payment has completed.


Double spends

If a double-spend on a comment submission was detected after it had been accepted, the post could be deleted. For email submissions, they could be delayed a few seconds to be sure there is not a contradictory transaction floating around.

Even without implementing these protections, double-spending wouldn't make sense for a spammer.

Could a spammer double spend and avoid paying the dust amount? No - double spending is extremely expensive so it would be even worse value for money than just paying the dust amount.

Could a spammer simultaneously broadcast many transactions that spend the same outputs to many different forms and websites? In theory this might be possible and some of the forms would accept the POST before realising the transaction is a double spend. Spamming multiple forms on the same website simultaneously would be impossible because the website would be connected to just one Bitcoin node. If this did become an issue the fee required to POST could just be increased to make it uneconomic.

Greater amount?

Of course, it may be desirable to actually charge a larger fee for the purpose of generating revenue. The admin interface could be extended to allow a configurable amount.

Jan 30 2015

In this week's episode Addison Berry hosts Greg Anderson, one of the Drush maintainers, and Juampy Novillo Requena to discuss Drush. We start off by explaining why Drush exists and some cool things about it. One of the big hangups people have with Drush is installation, so we talk a bit about that, and how it is easier now with Composer. Speaking of Composer, we dive into what it actually is, why the Drush project has moved to using it, and how things look for using Composer with Drupal generally. Composer is part of the upcoming Drupal 8 and has quickly become a standard packaging and dependency tool throughout the PHP community. We also touch on the difference between Drush 6 and 7, talk about Juampy's new Drush book, Drush for Developers, and wrap up with Greg and Juampy's favorite Drush features.

Jan 30 2015


Before I pelt you with details of my first year as a sysadmin, I think I should give you a little information about myself. I’m Emlyn, 23 years young and based in the UK...the West Midlands to be a little more precise.

I graduated from Coventry University with a 2:1 in Games Technology with the hope of becoming a games programmer. However, after graduating, I noticed my portfolio fell far short of those of other students on my course, and, I'll be honest, that bummed me out.

That was when I was approached by my cousin, Jamie, who works as a Systems Manager for Code Enigma, with the prospect of training to become a Junior Sysadmin. I gave it some thought; I had always been interested in Linux systems and wanted to know more about them. In fact, one of my modules at university was Operating Systems Security, and I thoroughly enjoyed the assignment we were given, where we had to create a shell script that could display running processes, navigate a directory, and list online users, amongst a few other things. So, I accepted the offer of a three-month internship, which started in October 2013 and turned into a permanent position in January 2014, and I really haven't looked back since!


As with every new job in a new field, training is required. For me, this was two weeks in Cardiff with Jamie, trying to learn as much as possible before working from home by myself. It was daunting, I’ll be honest. I knew there was so much to learn in a very short space of time. Perhaps I put too much pressure on myself; no one expected me to become an expert in just two weeks!

On the very first day of my internship, I was given the task of 'building' a new server which would serve as an internal server. So, my first lesson was in Puppet and how to use it to provision a new server semi-manually. This blew my mind! A few simple-ish (well, now I think they're simple-ish) steps and Git pushes later, a new, usable server was up and running. Let me correct myself: my first lesson was in Puppet and Git - one interlinked with the other.

In a sense, I was thrown in at the deep end, given tickets to work on myself. Jamie pointed me in the right direction and gave me hints and tips that aided me in my work. In all honesty, his method of training me was brilliant, perfect. I’d try the work assigned to me first, and ask for help when I needed it.

Over the course of the two weeks, Jamie never showed any impatience or annoyance at what I was doing, even if I did something completely wrong or didn’t understand what had to be done. He taught me an important part of being a sysadmin: patience, especially patience towards clients. I had to understand that the client may not know as much as me, even though I’d only been in the industry for two weeks myself!

Over the course of my three-month internship, and in fact the following nine months, my training never really stopped. I was learning new things every single day, all the while fine-tuning my newly acquired skills. Another important lesson I learnt during my first year as a system administrator was that I will never know everything there is to know about being a sysadmin; there will always be something new to learn.

Working from home

Being given the chance to pursue a totally new career and develop new skills was fantastic and an opportunity I couldn’t turn down, but what put the icing on the cake was that I’d be working from the comfort of my own home. However, I realised that I was going to have to be very disciplined, even more so than I normally would. It dawned on me that my fellow colleagues would be trusting me to work efficiently and I did not want to let them down by being slack just because I was working from home.

Working from home has its advantages, as you can imagine. No suit and tie, and no long commute to work every morning. So there's no mad rush in the morning, and I don't have a busy office environment around me to distract me from my work. However, I am alone (as are my colleagues), which means not being able to walk up to them should I have a question or concern, and not being able to go for a drink or food after work. But these disadvantages, I feel, are outweighed by the advantages of working from home.

Running a distributed company can be quite challenging. My boss, Greg, is currently working on a series of blog posts in which he goes into detail about being Spread About. It's well worth a read.

What was I expecting?

I’ll be honest, I didn’t know what the job was going to be like! All I knew was that I was going to be learning a lot in a short space of time. From the email conversation I had with Jamie before starting, I had a feeling it was going to be all hands on deck. A lot of the time, I’ve been kept busy all day dealing with several different clients, all wanting different things done. Other days, it’s been a little less busy and I’ve been able to spend a bit more time on each task.

Surprises! Of all kinds of variety

I’d be lying if I said I hadn’t been surprised by anything. I think what surprised me the most in my first year, and what still baffles my mind today, is the sheer number of tools and software available to a system administrator. For example, we use Percona as our database of choice, but we could use MongoDB, or MariaDB. Then there are multiple database caching systems to choose from, numerous HTTP accelerators, web servers and continuous integration tools! My point is there isn’t a short supply of tools available; in hindsight, I’m not sure why that surprised me, but it did.

Top five things I’ve learnt

A year ago, I knew very little to nothing about being a system administrator. I could be very lazy and just list the top five things I’ve learnt in my job, but I’ll try and be a bit more descriptive. In no particular order of importance:

  1. Patience is actually a virtue! I’ve learnt that when dealing with a task, be it for a client or a personal task, to be patient with it. Getting frustrated and angry (perhaps covertly when dealing with a client) will only make matters worse and lengthen the time it takes to solve the problem. Clients who are not as technically knowledgeable appreciate patience; as I described above, Jamie was incredibly patient with me during my training, which helped me to understand the problem at hand more easily.

  2. Security is important. Very important. I deal with client data on a daily basis, so being proactive in keeping my workstation and computer secure at all times is important. I’ve learnt to be vigilant when transferring data to a client or a colleague; rather than send them a sensitive document over plain-text email, I’ll upload it to a server the recipient has access to so they can grab it from there themselves. I’d GPG-encrypt an email to them, but ever since the people who make the GPGTools plugin for the OS X Mail app started charging for the service (shakes fist), I’ve opted to use other means. We at Code Enigma like FOSS! Over the last year, Code Enigma have become ISO 27001:2013 certified, which meant I had to learn and understand how to handle data and documents properly.

  3. Workflow is also important. I learnt about the simple, yet very effective, workflow used by Code Enigma in the first couple of weeks of joining the company. It’s as straightforward as this: master -> stage -> production. Push changes to the master branch first, then merge them through to stage after some initial testing. Once the client is happy and has signed off the changes, push them through to production for deployment to the live site. Simples! However, before I joined CE, I didn’t know or think like this. When I ran an online game with my brother (not in Drupal), I’d make changes to the code locally, with no local setup running, and FTP (ew) the updated files directly to the server, which affected the live site. Horrendous. Now, when I start work on a personal project, I have an exact workflow I want to follow.

  4. Drupal...well, a small amount. A very small amount. Primarily, I’m a sysadmin, so a lot of my work involves setting up Drupal sites and debugging server issues relating to Drupal. I did a small amount of Drupal development at the start of my employment, so I learnt a little bit about it, which has stood me in good stead when it comes to dealing with simpler Drupal issues.

  5. Linux. Well, obviously not all of it, but I have learnt a lot in my first year. Obviously there are some things I’ve learnt that I need to improve on, but that’s a given with most operating systems, especially Linux. I knew a little bit about Linux before joining CE, but only rudimentary stuff from using Ubuntu as my main OS for a little while. During my first year, I picked up some really useful tips and tricks, learnt about different command-line tools such as Drush and how to use them, and learnt how to read system and access/error logs. That’s just to name a few Linuxy things I’ve learnt.

What I hope to do/learn in the next year

There’s so much out there to learn that appeals to me, I could probably write a separate blog post just about that! But the things I really want to learn, which I think will benefit both me as an individual and my role at Code Enigma, will be time-consuming and difficult: MySQL multi-master replication and DRBD, to assist our Australian sysadmin with server cluster issues, and Drupal 8. I’ve a head start (ish) on Drupal 8 in that I know a bit about PHP, but I’ve been made aware that the PHP I know and have programmed in is archaic and old-fashioned. It’s time to delve into object-oriented programming!

It would be great (for me at least) to get a text-based, role-playing game up and running in Drupal, be it 7 or 8. But seeing as I want to develop my D8 skills, it makes sense to develop the game using the latter of the two. As I previously mentioned, I used to run an online text-based game with my brother, which used a very poorly written engine now that I look back on it. I’ve not seen many, if any, games of the sort created using Drupal. Even if no one plays the game, it’ll still be an achievement in my eyes as I’ll have improved my Drupal knowledge.

MySQL multi-master replication will be an important concept to understand and know how to use because we have a handful of clients that have a high-availability setup with us, which includes at least two application servers and two database servers. If there’s a blip in the network, replication between the servers can be affected, which requires manual intervention to put right. If this happens, our Australian sysadmin is woken up through the use of his batsignal, but if I could solve any replication issues and save him being woken up in the early hours of the morning, then it’s a win-win for both of us; I’ll have bolstered my knowledge and Mig won’t be woken up abruptly.

What are my impressions after my first year?

My impressions after my first year as a system administrator for a Drupal agency are very positive; the other sysadmins in our team are incredibly knowledgeable and can adapt to every situation and issue thrown at them. This has given me inspiration to develop my skills and knowledge to reach their level and some day leave the same kind of impression on a future junior sysadmin.

My impressions of working for a Drupal agency are just as positive. My colleagues are all very clued up in their areas of expertise, from Drupal magic to content strategy to finance management. Everyone does their part for the company, be it providing bespoke tools for a website (such as the ones used for this very site) or by providing top level training to new, or old, Drupal agencies. It's an honour to be part of such a fantastic team!

Main image by Wendy Seltzer, released under the Creative Commons Attribution 2.0 Generic license.

Jan 29 2015
Jan 29

In my last blog post I explained what the Panels Suite is and does. I explained how Panels itself is a User Interface on top of hook_theme() and theme(). That technical explanation of Panels underlines what I think is its main conceptual virtue:

Panels encourages a mental model of pulling data into a specific design component

Panels pt 2 blog post image

At Palantir we're working with Design Components that are created in static prototypes. Design Components are the reusable pieces of front-end code that compose a design system. Design Components should not know about Drupal's internal implementation details. We're not alone in this thinking. (Inside the Drupal community, and outside of it).

The task of "theming a node" is now "print this node so that it renders as this design component." Unfortunately Drupal core does not have hook_design_component(). It has hook_theme(). Some of the entries in hook_theme() from core are essentially design components.

Entries like 'item_list' and 'table' are design components. They are conceptually based around their HTML rendering. They make sense independent of Drupal. To use them as a Drupal Developer you need to get your data organized before you call theme() (directly or otherwise).

On the other hand, much of the Drupal core usage of hook_theme() is not at all design component minded. 'node', 'user', 'comment' all have entries in hook_theme(). In these elements you don't have to organize your data before calling theme(). You can give theme() a node object, and after that template_preprocess_node() has to do a ton of work before hitting the template.

It's no coincidence that the design component-ish hook_theme() entries have minimal preprocessing or no preprocessing whatsoever. The design component-ish entries like 'item_list' know what the HTML will look like but have no idea what your data is other than you were able to get it into a list. The non-design component entries like node know exactly what the Drupal-meaning of the data is but know very little about the markup they will produce on most production sites.

Panels unites the two mindsets. It knows what the incoming data is (a node context, a user context, etc.) and it knows what design component it will print as (the layout plugins). If you put a debug statement inside of panels_theme() you will see the names of layouts and style plugins. These hook_theme() entries are of the design component-ish variety: they know what their markup will be. And the part of Panels most people pay attention to, the drag-and-drop interface, is where you control how the data of a node is going to prepare itself for the design component.

And here is where the admin UI of Panels might set up a confusing mental model.

How it looks in the Panels admin UI

Panels module - context >> layout >> content

But at execution time it is

Panels module - context >> content >> layout

Or the way I think of it

Drupal Data >> transforming Drupal data into printable variables >> design components for those variables to print in

Panels module - data (nodes)

The next time I get into a discussion about Panels at a meetup, DrupalCamp, or DrupalCon, I think I'll first ask, "Does Panels let you think about building websites the way you want to think about building websites?" I like to think about passing variables into encapsulated configuration associated with a specific design component. I prefer that mental model to the "show and hide based on globals" mental model of Core's Blocks or the "just call theme() on a node and figure out the overrides later" mental model encouraged by node--[content-type].tpl.php. As the Drupal community asks itself again how it wants to do rendering, let's also ask "how do we want to think about rendering?"

The rise of design component thinking in the wider Web development world is not turning back. Web Components and modern front end MVC frameworks encapsulate design components. They do not care about every single implementation detail and layer of a node object. They care about getting variables ready for printing and updating. Panels module may fall out of the picture once Web Components fully mature. Until then, Panels allows us to think in ways we will need to think for Web Components to work with Drupal.

Jan 29 2015
Jan 29

Shipping Requirements Have Changed

Heretofore, shipping has focused on exposing a simple yet effective API for connecting rating and shipping companies to the checkout process. Many stores eschew real-time rating and instead opt to offer flat-rate shipping or free shipping on orders. This entirely removes the need for rating. For those who do use real-time rating, the existing process offers a simple set of tools that are only adequate for simple shipping needs.

However, since the original release of the Shipping module, the ecosystem and community have changed. Functionality that could be centralized has been replicated in module after module, usually as a one-off change. 3rd Party Logistics providers, who store products and ship orders on behalf of businesses, are giving their customers the ability to manage and ship orders like their bigger competitors. It’s not uncommon anymore to ship certain products from one or more of these third party logistics providers. Being able to intelligently group products into fewer shipments from the cheapest or closest warehouse can have a huge impact on the bottom line. Shipping things like chemicals or bulldozers underscores the need for handling complex requirements, rate quoting, and itemized reporting. The existing Commerce Shipping module fails to address these things.

So let’s talk about Shipping 3.x.

Shipping 3.x Goals

Shipping 3.x has two main goals, summarized here:

  1. Reduce customization effort. Customizations currently involve rewriting parts of every carrier used for shipping.
  2. Reduce code duplication. Incorporate common functionality across current shipping modules.

In other words, the goal of Shipping 3.x is not to implement a wide range of functionality. No, primarily, this release attempts to make the developer experience better and make it possible to implement complex shipping requirements with less code and better “core” support for the types of requirements we see on a daily basis.

The release of Commerce Shipment was a huge step forward. It lays the groundwork for breaking up an order into actual shipments. Commerce Shipment, by making a shipment a fieldable entity, simplifies solutions to many common problems:

  1. Storing tracking numbers.
  2. Representing multiple shipments on a single order.
  3. Packing slips and pick lists.

It logically follows that Shipping 3.x would take advantage of Commerce Shipment and build upon it to create a more robust solution. At the core of that is the new rating engine, which the rest of this post will cover.

Commerce Shipping 3.x Overview

To summarize what Shipping 3.x looks like, here is a side-by-side comparison of Shipping 2.x on the left and 3.x on the right:

Current workflow

Configuration

  1. Manually configure shipping carriers and their methods in the backend.

Checkout

  1. Rate the entire order if Rules conditions allow it (how it rates is dependent on the rating engine).
  2. Show the user the rates provided by the rater(s).
  3. Once the user selects a rate, add a shipping line-item to the order.

While this works fine for small, single origin and destination configurations, businesses looking to differentiate themselves or better serve their customers need more options. Things like multi-warehouse fulfillment, shipping to multiple destinations, and unique packing requirements often require one-off customizations to the entire fulfillment workflow.

The new, proposed workflow

Configuration

  1. Optional configuration of the shipping workflow (ships backwards compatible).
  2. Manual configuration of raters and the shipping workflow.

Checkout

  1. A new (optional) pre-shipping pane that allows for user customization of the order. Think multiple shipping destinations or gift selection.
  2. Splitting: orders are split up based on their shipping origin and their shipping destination.
  3. Packing: orders are packed into boxes/shipments.
  4. Rating: individual shipments are rated with shippers.
  5. Show the user the rates provided and, if necessary, allow them to choose the shipping methods and rates they want.
  6. Add shipping information to the order.

Order Completion

  1. Provide post-order support by allowing shipping engines to expose labeling and tracking numbers directly back to the shipping module.

Nuts and Bolts of Shipping 3.x

The New Rating Engine

At the heart of Shipping 3 is a new, multi-step, pluggable rating system. The multi-step rating engine allows for strategic management of shipping: plugins only modify the steps in the rating engine that are required to achieve their goals. The new rating engine will have three specific hook points:

  1. Splitting
  2. Packing
  3. Rating


Splitting

During this step orders can be split based on both the source and destination. Here, a multi-warehouse module might reach out to an API to determine the best sources for a given order. The shipments created in this step will have all of their products split up. Likewise, a business which allows for multiple shipping destinations would be able to split up the orders based on the destination addresses populated in a checkout pane.

More examples:

  1. Single-origin products could have an address field added to them and a plugin used to split the order by looking for that field on the products within the order.
  2. Products fulfilled by a 3PL would be split apart from the other products in an order.

By default, only a single shipment will be created on all orders.


Packing

During packing, a shipment will be further split into one or more boxes. Here, a plugin could allocate products by configured or automatic manipulation. Shipping costs could also be added based on packing material, a cost could be assigned to each box, and information could be added to generate packing and pick lists. For businesses that ship in specific quantities or have exact shipping requirements, this hook will make it easy to simply split those products up into boxes.

To accomplish this, we’ll be exploring the use of Commerce Box to provide an interface and management system for box types as well as a generic packing algorithm. Ideally, each rating engine would provide its preferred boxes, but businesses could create their own box types.

As far as the actual packing is concerned, there are a couple of potential candidates, such as Commerce Packing and Packaging, that both have made starts at implementing packing algorithms. All that would be necessary is to implement the proper hooks and allow them to work on the shipments instead of the order itself.

By default, only a single box will be created on all shipments and all items will go in that box.


Rating

Finally, the carriers (FedEx, UPS, USPS, etc.) would need to be contacted for rates of any applicable shipping services. Or you can use a table rater. Each carrier will need to update to the latest version of the API to rate all packages and shipments in the order (as opposed to rating the order as a whole).


  1. Out of the box, we can make configuration a bit more straightforward by requiring an admin to enter a store-level origination address instead of making the owner enter the configuration for each carrier.
  2. Configuring the rating engine will likely be a simple UI at first to choose which plugins will be used, and like Shipping 2.x currently works, Rules will be utilized to enable and disable pieces based on conditions.

Customer Experience

  1. As mentioned earlier, a pre-splitting pane can be made available so that a store that wants to provide a custom destination system could do so. You could upload a CSV or have a click-and-drag system where orders are split up based on user preference.
  2. Additionally, once the shipments have been split, the shipping pane will be refactored to allow shipping experiences that give the user as much control as you’d like them to have. They could choose rates based on each package (e.g. Package 1 ships Ground and Package 2 ships Next-Day Air) or a single shipping method (Ground vs. 2-day).

Backwards Compatibility

Shipping 3.x needs to (at least initially) continue the trend of simplicity. All that should be required out of the box is to install Commerce Shipping (+ Shipment) and a 3.x-ready rating engine. By default, the splitting step will just create a single shipment. Admins will be responsible for creating an origination shipping profile (using the Shipping configuration area). This will serve as the source. The destination will be the shipping profile on the order. This shipment will consist only of physical items that are marked as being shippable. Packing will be “dumb” (throwing every item into a single, rater-defined box). Rating will add the cumulative weight, and then rate the single box.

Since there will be only one box and therefore only one shipment, the existing pane which shows rates should work exactly as expected. As will the existing shipping line item.

Shipping 3.x release timeframe

It’ll ship when it’s ready!

But seriously, we’re currently aiming for a beta release at DrupalCon LA. At a minimum, this will implement the new rating engine; the final release should be “drop in ready” for new sites.

How you can help

There are 3 ways you can help:

  1. Code: As code begins to be released and the API updated, carrier engines and other modules using Shipping 2.x will need to be updated. Check the issue queue as well.
  2. Test: Try it out, and provide feedback.
  3. Sponsor: If you think the changes in Shipping 3.x would help your organization, join us in building 3.x by sponsoring the work directly.
Jan 29 2015
Jan 29

Drupal 8 has been all about pushing the boundaries, so why should help content be any different?

With the release of Drupal 8, we will also ship with tools to complement hook_help() entries: if you, the developer, are providing a documentation page for your module, why not also provide an interactive step by step guide on how your module works as well?

The idea of Tour isn’t a new one; it has been maturing over the past two years. It all began after the release of Drupal 7 when we decided to move the help passage from the front page to the help page. This meant that users new to Drupal would not see this text, and would have to struggle through with no guidance.

In light of that issue, the following was suggested:

How about creating a “Welcome” message that pops up in an overlay with that same information that continues to appear until either the user checks a box on the overlay saying to dismiss it or the user creates a piece of content on the site?
- Vegantriathlete, August 10, 2011

With tour.module committed to Drupal 8 core, we now have context-sensitive guided tours for Drupal’s complex interfaces, and developers have a new way to communicate with the user. It doesn’t stop at core either: contrib modules can ship with tours to describe how users can take full advantage of their modules. Distributions can also ship with tours on how to get started. Imagine a tour in the Commerce distribution that took the user through setting up products: That would be amazing! It would enable users to discover the pages they are looking for and take the guesswork out of finding pages.

How do I Take Tours?

Tours can be initiated with a toggle that appears in the top right corner of a page. The toggle integrates with the new toolbar developed by the Spark team and is visible on those pages that contain tours. As of this writing, there is no standard way to find a listing of all tours, but it’s my belief that we, as a community, have reached a consensus that all tours will be listed on the help page – the same page where all help module content can be found as well. We are working very hard to get this committed to core (and it might already be committed as you read this article). For more information on this work, the issue can be found here: http://wdog.it/4/1/tour.

Creating a Tour

To be able to write tour content, you will first need to learn two terms. The first is tip: what we call each of the steps in the tour. Generally, a tip is a small item of content, like a string – no more than two sentences that describe the item it is contextually referencing. The second term is tip type, which references the communication medium of the tip. That could be anything from a basic “title and body” type to a “YouTube video” type. This flexibility is due to the power of Drupal 8 and its plugin system, which streamlines architecting a module and keeps it very extensible for others.

So now that you know the basic terminology, let’s create a tour with a single tip as part of the tour. Tours are defined in YAML format and are stored in the config directory within the appropriate module. We’ll create a file tour.tour.submission-form.yml in the directory yourmodule/config/ which contains the following:

id: submission-form
label: Registration page
langcode: en
routes:
  - route_name: user.register
tips:
  field-first-name:
    id: field-first-name
    plugin: text
    label: "First name"
    body: "This is where you enter your first name."
    weight: "1"
    attributes:
      data-id: field-first-name

In the example above we created a tour with a single tip called “Registration page”. When the user clicks the toggle to enable the tour, a tip will be placed over the #field-first-name element of the page and show the text “This is where you enter your first name”. The tour tip type of “text” will ship with Drupal 8.

But what if you want different tip types?

We extend!

Let’s create a YouTube tip type that will accept a video ID and render that clip inside of a tour tip. That can be done by placing the following code in the file lib/Drupal//Plugin/tour/tip/.php:

Jan 29 2015
Jan 29

We love the Bootstrap HTML framework in Drupal. That's why we built the front-end of the Drupal distribution OpenLucius with it. So we love it, but why is that?

There are alternatives to integrate into Drupal websites. Below we will give you a few reasons why we currently prefer the Bootstrap framework.

Why an HTML framework

First of all, why should you use an HTML framework? These possibilities also exist:

1) Write everything fully by hand:

Nowadays, responsiveness is required for almost every new website, and Bootstrap offers cross-browser compatibility for this. Building the required responsiveness from scratch every time would make no sense.

2) Ready-made Drupal themes

You can download free Drupal themes or buy them ready-made. This will quickly take you in the right direction, but ‘the devil is in the details’. The final details are usually difficult, but necessary to achieve your desired layout. The problems are usually caused by not knowing the code, and the code is often not designed to scale for your purposes. It becomes a kind of Rube Goldberg machine to you.

Why Bootstrap

So an HTML framework is our weapon of choice. Specifically Bootstrap; here are 5 reasons why:

#1) Good documentation

It has become a widely used framework in Drupal. The Drupal Bootstrap base theme currently has almost 300,000 downloads and 50,000 installations. It is not only used in the Drupal community; other popular CMSes, like WordPress, also make extensive use of it.
As a result of this broad adoption, a lot of documentation is available and most questions are already answered on forums like Stack Overflow.

#2) Good Drupal integration

Since we are a Drupal shop, scalability and flexibility of the integration are essential. And of course this is offered; the technique of the Drupal Bootstrap base theme is excellent. It even integrates with Bootswatch themes, so you can instantly choose from 14 ready-made templates.

We are gratefully using this in our Drupal distribution OpenLucius.

#3) Many ready-made free templates

Because it is used worldwide, there are many websites that offer paid and free Bootstrap HTML templates, for example:

#4) Many components (snippets) are already available

Websites usually consist of similar content: homepage, list pages, news items, blog, contact, drop down menu, slider with pictures etc. But also think of elements such as a profile page, a timeline, or a login screen.
There are many websites that offer such components (snippets) within the Bootstrap HTML framework. Some examples:

A timeline

A profile page

A useful dropdown selector with filter function

We have used these in OpenLucius:

Data tables

Data tables provide a performance optimisation compared to standard Drupal Views: they load all ‘tabular data’ at once and create pages using jQuery. This makes a difference in server requests when requesting each new page.

#5) Integrates with WYSIWYG

When you are working with content managers, you want them to see text formatted just as the visitor will see it. In other words: the text in the WYSIWYG editor must be consistent with the front end. With Bootstrap this is relatively simple.

Relevant Drupal modules

Get started with Bootstrap in Drupal; this will give you a kick start:

And many more. Not everything in this list is Bootstrap integration; it also contains modules that relate to ‘Drupal’s bootstrap process’. But that’s another chapter :)

Wrap up

Alright, that’s it. As always, don’t hesitate to contact me in case of questions or feedback!

-- Cheers, Joris

Jan 29 2015
Jan 29

Nothing keeps Drupal tourists from spreading the word! We are passionate about Drupal and IT, so we very much enjoy meeting like-minded people! Despite the cold winter weather, Ternopil welcomed us with warmth and friendliness. How was it? Our blog post will tell you.

We were getting ourselves ready for the ride for almost a month. Our brandy Drupal van wanted to make nice impression too, that’s why the journey hit off from the car wash :)

Drupal van car wash

Three hours on the road passed like a moment, and then we arrived in Ternopil! Our tourists gladly explored the sights of the city while there was still some time before the event kicked off.

So the DrupalTour began! There were quite a lot of visitors. Just have a look at all those clever faces:

DrupalTour visitors

We started with a talk by Andrii Sakhaniuk. He walked everyone through the sequence of actions that take place when you open a URL. The topic was cool and interesting; besides, Andrii did his best to make the talk understandable for visitors of any technical background.

Andrii Sakhaniuk DrupalTour

Our next star was Vasyl Plotnikov. He understands the web development process from the inside and knows many tricks that help solve multiple web development problems. This time he shared his experience of writing a PHP extension. And yes, Vasyl was coding right there!

Vasyl Plotnikov DrupalTour

Everyone knows that web security is crucial for any site. Max Orlovskii, the speaker from MagneticOne, delivered a talk on this issue. It was very interesting and useful! By the way, we want to thank the MagneticOne company for sponsoring DrupalTour. You guys are cool!

Max Orlovskii DrupalTour

And last but not least: Serhii Puchkovskii! He gave Drupal developers advice on how to save time. The answer is simple: Drush! More beer, less effort :)

Serhii Puchkovskii DrupalTour

So the official part of DrupalTour came to an end, and yet we got another surprise: an excursion through Ternopil, led by our hosts. The evening city was truly amazing with all those lights!

After that we headed back to Lutsk, tired but happy. Join us on our next ride! We’re heading to Khmelnitskiy!

Jan 29 2015
Jan 29

In An Introduction to Git Part 1, you learned what Git is and how to download it on your computer. In An Introduction to Git Part 2, you learned how to configure Git and create your first Git repository. In this section you will learn how to add and commit files to your new Git repository.

Git is one of the secrets from my 5 secrets to becoming a Drupal 7 Ninja ebook, and much of the following posts are from this ebook. To learn more about Git and the other secrets, please consider purchasing the ebook or signing up for the 5 secrets email newsletter which will give you more information about the 5 secrets.

Viewing Your Project Status

git status: View status information about your current Git repository.

The git status command is a command you will run early and often. It tells you the basics of what has changed with your Git repository. If you run the command now, you will see that there is nothing to commit yet. The command does tell you the branch that you are on (which we will cover later), as well as text telling you this is the “Initial commit”.

Git Status

Notice how the last line of the git status command tells you to create/copy files and use git add to track them. Git is full of these helpful hints that tell you what you need to do. This can be especially helpful when you don’t know what to do or it has been a while. I have spent way too much time searching the internet for answers when the answer was often in the output of the Git command I had previously run.

Ninja Lesson: Read the output of Git commands. It will save you time and headaches.
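As a quick sketch, the steps so far can be reproduced from the command line (the scratch directory path here is purely illustrative; any empty directory will do):

```shell
# Create a scratch directory and turn it into a brand-new Git repository
mkdir -p /tmp/git_test_demo
cd /tmp/git_test_demo
git init

# Ask Git to report on the repository; with no files yet, it tells you
# the current branch and that there is nothing to commit
git status
```

Reading that output carefully is exactly the habit the Ninja Lesson above is about.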

Open up a text or code editor and create a test file. Name your test file test.txt and keep the text really simple for now.

Example Text File

Below you can see what my current git_test directory looks like with the new test.txt file created.

Git Directory

Now re-run the git status command to see what has changed.

Git Status New File

Notice how the test.txt shows up under the “Untracked files” section. Also notice the line above the test.txt line that tells you how to add this file to include it in what will be committed.

You may have noticed the .git hidden folder inside the git_test folder. This hidden folder was created when we initialized the Git repository with the git init command. It is used to track everything about our Git repository. If you delete this folder, you are deleting your local Git repository.

Ninja Lesson: Do not delete the .git folder or you will delete your entire local Git repository.

Adding Files to your Git Repository

The next step in the process is to add the files to the Git staging area. The Git staging area is a middle ground between what has changed, and what has been committed to your Git repository. You can add files to this area and when you are ready, commit these files into one Git commit.

Git Command       What does it do?
git add [file]    Add a specific file to the Git staging area of your repository.
git add .         Add all new/modified files inside the current directory to the staging area of your repository.

We are going to run the git add test.txt command to add the test.txt file to our Git staging area. We will then run the git status command to see that our file is now ready to be committed.

Git Add

If we had multiple files to commit, or we did not want to type in the file name, we can use the git add . command. The . (period) indicates to Git to add all new/modified files in the current directory or any subdirectories in the current directory (it does this recursively so even files in multiple levels of subdirectories would get added). We will use this command in future sections to provide a better idea of how it works.
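A quick self-contained sketch of that recursive behavior (the directory and file names here are illustrative, not part of the tutorial's project):

```shell
# git add . stages everything below the current directory, however deep.
mkdir -p add_demo/sub/deep && cd add_demo
git init -q
echo "one" > a.txt
echo "two" > sub/deep/b.txt
git add .                # stages both files, including the nested one
git status --porcelain   # shows "A  a.txt" and "A  sub/deep/b.txt"
```

The `--porcelain` flag gives a compact, script-friendly version of git status, handy for quickly confirming what is staged.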

Committing Your Changes to your Git Repository

A Git commit is a way to finalize and log the changes that we have added to our Git staging area. This essentially creates a new revision of your project at this particular point in time.

Git Command                          What does it do?
git commit                           Commits all changes from the Git staging area, and launches a text editor to create a commit message. Save and close the text editor to complete the commit.
git commit -m "My commit message"    Commits all changes from the Git staging area with the corresponding commit message.

You can use the git commit command to commit all of your staged changes. You will then need to fill out a commit message after the text editor is opened. After you save and close the file, the commit will be finalized.

You can also use the git commit -m command along with an inline commit message to simplify the process into one command. I prefer using this method as it is simpler than having to use a separate text editor tool. I also show the git status command after the commit which lets us know that we have nothing new to commit (our working directory is clean).

Git Commit

The commit message is much more important than it originally seems. The commit message provides a way for you to describe what has changed in the project. This makes it easy for you or others to quickly look at a history of commits to see how the project has changed over time.

Ninja Lesson: Commit early and commit often.
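Putting the add and commit steps together, here is a condensed, self-contained recap of the flow above (the git config lines are only needed if you skipped the global identity setup from part 2):

```shell
# Stage and commit a file, then verify the result.
mkdir commit_demo && cd commit_demo
git init -q
git config user.email "you@example.com"   # local identity, in case none is set globally
git config user.name "Your Name"
echo "Hello Git" > test.txt
git add test.txt
git commit -m "Add test.txt with initial content"
git status           # reports a clean working directory
git log --oneline    # one line per commit: short hash plus message
```

git log --oneline is a quick way to see the history of commit messages you have been writing.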

Viewing What Has Changed

Now that we have our first Git commit under our belt, we will make a few more changes. Let’s add an additional line to our test.txt file.

Example text file with new line

We will also create a new subfolder inside our git_test folder. Let’s call this directory test_folder.

Git test_folder

Inside this new test_folder directory, we will create a new file called test2.txt.

Second test file

We can now view the status of our git repository with the git status command.

Git Status

Notice the command output of the git status command lets us know that the test.txt file has been modified. It also lets us know about an untracked directory called test_folder.

Git Command    What does it do?
git diff       Shows the changes between the last commit and the current working tree. This will only show changes in files that have been added to the repository.

Use the git diff command to see the specific changes of any files that have been modified.

Git Diff

Ninja Lesson: The git diff command only shows changes to files that are already being tracked by your Git repository.
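A self-contained sketch of git diff in action (directory and file names here are illustrative):

```shell
# Commit a file, change it, then inspect the difference.
mkdir diff_demo && cd diff_demo
git init -q
git config user.email "you@example.com"
git config user.name "Your Name"
echo "This is a test file" > test.txt
git add test.txt
git commit -q -m "First commit"
echo "A second line" >> test.txt   # modify the tracked file
git diff                           # added lines are prefixed with "+"
git diff --stat                    # one-line summary per changed file
```

The `--stat` variant is useful when many files have changed and you just want an overview before drilling into the full diff.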

We can use the previously mentioned git add test.txt command and a git add test_folder/test2.txt command to add the two files to the staging area, or we can use the git add . command to add both files for us automatically.

Git Add All

Ninja Lesson: The git add . command can be a real time saver, just be careful not to commit files you were not intending to commit.

You can now commit these changes. In this example, we will just use the git commit command without the -m to add the commit message.

Git Commit

Running this command will bring up a text editor to add a commit message. The text editor might vary depending on your system.

Git Commit editor

Now we simply add a commit message for this specific commit.

Git Commit Message in editor

We now save and close the text editor. This will complete the commit with your new commit message. This is just an alternative to adding the message directly within the commit command.

Git Commit Finished

Intro to Git Part 3 summary

You now know how to add and commit files to your Git repository. In An introduction to Git part 4 you will learn about Git branches and how they can be used within a Git repository. If you are looking for how this all relates to your Drupal projects, don't worry, we are getting there soon.

Jan 28 2015
Jan 28

While we at Mediacurrent have been trying to bring internal focus and organization to our contributions, the rest of the Drupal community was planning a “Sprint Weekend” for January 17th and 18th.

For those who've not experienced one of these - imagine a user group meeting where everyone works together on contrib tasks - writing patches, reviewing patches, improving documentation, etc. Now imagine tens of such user groups happening on the same weekend all around the world, and collaborating via IRC, Twitter and the Drupal issue queues. Originally conceived by Cathy "YesCT" Theys, a suggestion by Angie "webchick" Byron turned a small local code sprint into the first Sprint Weekend in March, 2013. This year marked the third such event, and they have become a regular part of the Drupal community, each one providing a nice boost in development for core and contrib alike.

A few of us did put in some time working through some things. Jeff Diecks (yes, our Senior VP of Professional Services!) proved my motto of "everyone has something they can contribute" and started us off by re-testing all of the available patches for Metatag and Panelizer. Jeff's work helped us find some issues that needed just a little bit of work to get them ready for review, so along with Rob McBryde’s previous work doing the same re-test shuffle on the Panels module, there were plenty of things to work on.

Over the weekend we had a few join in the patch reviewing and re-rolling fun. Matt Davis continued his previous work of making great progress going through the CTools and Panels issue queues, and volunteered to co-maintain a module. Michelle Cox also continued her progress on the CTools and Panels issue queues, occasionally tag-teaming Matt on reviews. Meanwhile, Mario Hernandez worked on some UI translations, while Derek DeRaps worked on an improvement for Bob Kepford’s WysiField module.

As for my own participation, my personal highlight was putting together new release candidates for both the D6 (v6.x-1.2-rc1) and D7 (v7.x-1.0-rc2) releases of Panels Everywhere; while not the stable releases I was hoping for, they're still great improvements and pretty solid points to work from. I also put a little work into some Panels, Metatag and Panelizer issues, bringing the latter close to that long-sought-after v7.x-3.2 release.

While our participation in the third Sprint Weekend was a little limited, we were still able to make some good progress on a few things. I'm hoping that next time we'll be able to do something slightly more official and get more involved, but I’m quite happy with our accomplishments.

If you’re interested to see what the community worked on over the weekend, just search the issue queues for the tag “SprintWeekend2015”.

Additional Resources

Introducing the Mediacurrent Contrib Committee | Mediacurrent Blog Post
The Power of Giving | Mediacurrent Blog Post

Jan 28 2015
Jan 28

Display Suite is one of the essential modules which I use on every project. It allows you to change the look and feel of entity bundles, i.e., content types, vocabularies, users and much more. Building custom layouts and adding fields is a breeze, but there's another feature you may not be aware of and that's custom field templates. Display Suite allows you to change the markup which is used to render individual fields.

The functionality to change field templates is off by default. To turn it on, you'll need to enable the "Display Suite Extras" sub-module and check the "Field Templates" checkbox on the Extras page.

In this tutorial, you'll learn how to enable "Field Templates" and how to use them.

Step 1: Enable Display Suite Extras

I'll assume you already have Display Suite configured and working. All you have to do is enable the "Display Suite Extras" sub-module. Go ahead and enable it.

If you've never used Display Suite and want to learn more about it, I recommend that you read "Configuring Layouts with Display Suite in Drupal 7"

Step 2: Enable Field Templates

Once you've enabled "Display Suite Extras", the only thing left to do is enable "Field Templates". Go to Structure, "Display Suite" and click on the Extras tab.

From within the "Field Templates" field-set, check "Enable Field Templates" and click on "Save configuration".

Fig 1.1

You can also select which template will be used as default from this page. Just leave it as "Drupal default" for now.

Fig 1.2

Using Field Templates

Now go to the manage display page of a content type that is controlled by Display Suite. I've setup the Article content type to use it so I'll go to Structure, "Display Suite" and I'll click on "Manage display" within the Article content type.

If you've used Display Suite in the past then you'll notice "Field template: Drupal default" in the settings summary.

Fig 1.3

This tells you which field template is in use. At this point it's "Field template: Drupal default".

Just click on the cogwheel and select a different template from the "Choose a Field Template" drop-down list.

Fig 1.4

Now that you know how to enable different templates let's look at each one in more detail.

Template 1: Drupal default

Fig 1.5

The "Drupal default" template is the most common and it's used by default. If you've done any Drupal styling it should be familiar. Depending on your feelings about Drupal's markup, you either like the flexibility or hate that there's too much markup.

Template 2: Full Reset

Fig 1.6

"Full Reset" does what it says. It removes all field markup so what you're left with is just the HTML from the field.

Template 3: Minimal

Fig 1.7

The "Minimal" template isn't as strict as "Full Reset". A single DIV with classes wraps the field value.

Fig 1.8

The template comes with a formatter option: "Hide label colon". This option lets you hide the colon displayed after the label, if required.

Template 4: Expert

Fig 1.9

The "Expert" template isn't really a template like the others. All it does is give you the ability to define what markup will be used via the "Formatter setting" form. If you're pedantic about markup and want full control over what's displayed then give it a go.

However, if you're looking for this level of flexibility then I'd recommend you write your own custom Display Suite field. It can be very time consuming filling out the "Formatter setting" form if you want to use the same markup across multiple fields. Then again, defining a custom Display Suite field requires some coding ability.


Field templates offer an extra level of flexibility. But if you do require heavily customised fields then I would still look at implementing a custom Display Suite field.


Q: I can't find the "Display Suite" link under "admin/structure".

Make sure the Display Suite UI sub-module is enabled.

Q: I can't see the "Choose a Field Template" drop-down?

Make sure you enable Display Suite Extras sub-module and check the "Enable Field Templates" checkbox.

Jan 28 2015
Jan 28

Yesterday, Phase2’s own Chris Bloom was featured on the Drupal Association’s podcast on how to hire great Drupal talent. It’s a pertinent conversation to have at the moment, when 92% of hiring managers surveyed by the Drupal Association reported that there is insufficient Drupal talent in the market to meet their needs.

Over the course of an hour, Chris, Randi King, and Mike Lamb shared many insights on not only attracting talent, but keeping it around. I highly recommend giving the podcast a listen once the recording is available on the Association’s website. In the meantime, as Phase2’s Talent Manager, I wanted to elaborate on some of our company’s methods for finding, hiring, and retaining the best of the best in Drupal and beyond.

Finding Talent: Emphasizing Relationships

Because we operate in an industry of web professionals, it may be a little surprising that the majority of Phase2’s recruiting happens organically, not digitally. True, online advertising plays a role, and we share open positions on our website, LinkedIn page, and Twitter. However, many of our new hires are discovered by employee referrals and face-to-face introductions at community events and job fairs. This is no coincidence: we respect our team members’ judgements and encourage them to bring new people into the fold – and we really like talking to new people!

The emphasis is really on building relationships, as opposed to checking off a list of desired skills. Organic connection is, therefore, an enormous help in determining whether a candidate will be a good fit at Phase2, whether the connection derives from an awesome conversation, internal introduction, or past collaboration with contractors. We initially look to have an open dialogue in order to gauge attitude, passion, and motivations – it is these intangibles that really get us interested in a candidate. A recommendation from one of our employees speaks volumes in this respect.

The Interview Process: Exploring Skill & Creativity

An interview is obviously an evaluation of a candidate, but it should be less a trial than an open discussion. As I mentioned earlier, it’s important to pay attention to the undefinable character traits that will help the candidate succeed at your company. At Phase2, this means being aligned with our six values: dedicated, collaborative, smart, adaptive, authentic, and fun. Even the best developer in the country might not be the right choice if he or she is not a cultural match.

To judge technical abilities, we take code contributions and technology tests into consideration, but another big portion is evaluating thought process and decision-making skills. We ask candidates to talk us through how they would go about tackling certain challenges, getting to the heart of their understanding of the technology and proper processes. This method also offers the advantage of revealing people’s true creativity. Most technologists have an inner flair for creating, and it is always exciting to figure out where their passion comes from, and the unique ways it plays out when solving technical problems.

Offering Flexibility

According to a Drupal Association survey, 44% of job seekers emphasize location as an important factor in accepting a new job, specifically not having to relocate. Accommodating these candidates means walking the fine line between attracting top talent and maintaining a healthy, engaged team. Requiring all employees to work from a physical office encourages bonding but may scare off truly talented people that consider working from home to be a deal-breaker. At the same time, managing a remote team presents a myriad of logistical challenges in day-to-day communications, in addition to the difficulties of fostering a close-knit team.

At Phase2, our strategy is to offer ultimate flexibility for our employees. Our four offices in DC, New York City, San Francisco, and Portland give our social butterflies the chance to bask in our rich office culture. At the same time, about 30% of our team work remotely across the country. Day-to-day collaboration is achieved through diligent digital communication, video meetings on Google Hangout, and a water-cooler-like chat system which allows us all to bond at a distance. Maintaining an inclusive organization requires a concerted effort (such as our annual all-company gathering at headquarters) but it is well worth it to offer our employees the flexibility to live and work where they prefer to.

Retaining Talent: Letting your People Blossom

Phase2 has been very successful in retaining talent, and a large part of that is offering employees the chance to work on interesting and important projects. In a poll conducted with the podcast’s attendees, 70% believed that the most important factor in keeping staff happy and engaged was interesting work – much higher than compensation (10%), or even culture (20%).

Beyond interesting work, we at Phase2 believe career development is crucial to letting our people blossom. Encouraging long-term growth is key to ensuring your team feels appreciated and valued – it is basically an indication that your company is invested in their future. We manage this by instituting weekly check-ins with managers to discuss progress and goals. In addition, we’ve established well-mapped career trajectories. We feel that it is important to provide concrete steps for individuals to move forward in their own careers, pursuing specialties they themselves have shown an interest in.

How does your team find, hire, and retain top talent? We’d love to hear your thoughts in the comments!

Jan 28 2015
Jan 28

Stéphane Corlosquet, Sachni Herath, Kevin Oleary, and Kay VanValkenberg join Mike, Ted, and Ryan for a look into Drupal 8's impressive integration with Schema.org. The RDF UI module is really the star of the show, it promises to provide a super-easy way to create a content type based on an existing schema. We also talk about Dries' 2014 Drupal retrospective, Twig syntax vs. tokens, and Mike's bad internet connection causes hijinx. Picks of the week include a font for demos, a lightweight alternative to a popular Drupal module, and Views changes in D8.


DrupalEasy News

Three Stories


Picks of the Week

Upcoming Events

Follow us on Twitter

Intro Music

Drupal Way by Marcia Buckingham (acmaintainer) (vocals, bass and mandolin) and Charlie Poplees (guitar). The lyrics by Marcia Buckingham, music by Kate Wolfe.


Subscribe to our podcast on iTunes or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or bandwidth suggestions for Mike. If you'd rather just send us an email, please use our contact page.

Jan 28 2015
Jan 28

In the first board meeting of 2015, we hit the pause button and looked back on 2014. With all the numbers in and so many projects completed, we wanted to evaluate our success (and our misses) with the board and with you. We feel really good about what we accomplished with the rest of the community. To me, it's doubly impressive because the Association spent so much of last year growing like crazy. We started the year with just about 13 staff and ended the year with 27. We're still small, but doubling your staff is never an easy endeavor. So to go through that kind of change, and to also get so much other good stuff done, seems pretty remarkable to me. As always, you can check out the notes, the materials, and the recording, or peruse my summary of the meeting here.

Operational update

I think I can safely say that the theme of 2014 was “Let’s see what we learn from this!” We started the year with a Leadership Plan that outlined some important goals and strategies. We also defined key metrics we would track to help us understand if we were making progress on those goals. This was the first time the organization had this kind of framework to not only get a lot of stuff done, but to understand if that stuff was fulfilling its purpose.

The plan helped us identify lots of things to experiment with, and throughout the year we learned a lot about our plan itself. Metrics didn’t always point to the outcomes we thought they did. Some goals that we set were impossible to meet because of outside influences. But having the plan - that was important. It forced us to think about our work before, during, and after every project. So where did all our experiments take us? A lot of places. Here is a short, incomplete, and grossly over-simplified list of what we accomplished in 2014:

  • We set the proper frame. We developed a vision statement, revamped the mission statement, and created a values statement for the Association.
  • We rebranded, developing new logos for the Association and our programs that reflect our maturity as an organization.
  • We diversified our revenue, by a lot. By introducing new programs and services we were able to make a dent in the ratio of Con-related revenue to non-Con revenue. This is important for the financial health of the Association, but also because if Cons are our primary source of revenue, we can’t innovate and evolve them with as much courage for fear of undermining our total revenue.
  • Speaking of DrupalCons, we held two really big ones. Lots of things went right - they are well run, with great speakers and great community. We also collected a lot of data about the Cons and identified lots of places to work on for 2015 and beyond. (We promise we heard you about the food in Amsterdam!)
  • The marketing team is creating lots of technical marketing and other branded content that is starting to get great traction in the field. Resources like “Managing Media in Drupal” allow us to showcase the best that Drupal has to offer, regardless of version.
  • The launch of Drupal Jobs was a big milestone for us. We had not launched a product before, and were thrilled to get something out there that the community has repeatedly asked for. It’s still new, and we’re still learning, but we are overall very excited about the steady growth that we have seen.
  • Testbots is an area I have heard about on a weekly basis since I started at the Association. In 2014 we were able to forge a great partnership with the testbed volunteers. The Association is now managing the ongoing operation of the existing testbed infrastructure while the volunteers get to work on the next generation. We’ve seen massive improvements in performance as a result - wait times have dropped from almost 120 minutes to about 20 minutes on average. During the recent Global Sprint Weekend, we went from our usual 4 AWS instances to 20!
  • Drupal.org profiles have also seen a tremendous change in 2014. Again, thanks to the work of some amazing volunteers, we were able to introduce small targeted changes frequently, beginning with profile pictures. The work is not done and there are more changes to come, but profiles are becoming better and better online resumes and community connectors for the community.
  • We managed to beat our projected deficit spend for the year, which sets us up well for 2015.

I would like to point out that I am extremely proud of the Association staff who endured a lot of growing pains while churning out really good, quality work. In addition to being awesome at what they do, they are hilarious and smart. I owe them a huge debt of gratitude. HOWEVER, all of the bullet points above represent a significant contribution from the volunteers in the community as well. We don’t do our work alone, and we are so grateful to the hundreds of you who have prototyped, tested, coded, documented, trained, mentored, and made puns. Your leadership in the community is noticed and appreciated. Our greatest hope is that we are making your Drupal life a little better.

Marketing Team 2015 Update

The marketing team built a very solid base in 2014 and is prepared to declare 2015 the year of content. Here are a few key initiatives that you can expect this year:

  • More branded content, better presentation. We’re going to turn Drupal.org into the best site out there to discover all you can do with Drupal. We’re currently developing a content strategy that will help us discover all the great content that already exists, but gets lost in the one million+ nodes on the site. Then we can combine that with the great technical content we are also crafting to create more resource centers covering everything from media to search in Drupal.
  • A Drupal.org blog. We are in the middle of a content strategy process led by staff with the Content Working Group and Forum One Communications. It’s clear that we need a better channel to reach the folks who want Drupal news, but who aren’t ready to drink from the firehose that is Drupal Planet. The blog will allow us to serve those folks, and we hope we can use it to highlight the best writing about Drupal that is already being produced.
  • Drupal newsletter. In 2008, we stopped sending a regular Drupal newsletter to the tens of thousands of subscribers on Drupal.org. We’re bringing that back in 2015, with a model similar to that of the blog - the best community content. This newsletter will differ from the Association newsletter in that all the content will be focused on Drupal itself.
  • A challenge will be localization - translating content for our global audience. With the release of Drupal 8 nearing, and its emphasis on localization, we want to meet this need. We’ll be working on strategies to make translation happen on key content.

Of course, there is more to the update than this summary, so I encourage you to check out the presentation.

And then we ran out of time

We were also scheduled to vote on a slate of candidates for the newly formed Licensing Working Group. Unfortunately, we ran out of time. The Executive Committee of the board will be discussing next week to see if we can vote electronically on this topic.

Thanks for a great 2014. Here’s to an even better 2015

Again, thank you for the support, the work, the encouragement, the ideas, and even the complaints. All of it makes us better as an organization, and we hope that when we’re better, Drupal is better.

Flickr Photo: DianaConnolly101

Jan 28 2015
Jan 28

When I worked in Reality TV Post Production I always had a troubleshooting guide. Most of the time I was working the night shift, and trust me, no matter how many times you've done something, sometimes your brain just breaks at 2am.

In case you haven't used one before, a troubleshooting guide is basically just a list of the steps you take when things break. It helps you avoid that moment of exasperation three hours later when you realize how simple the solution really was.

They used to be much more common before you could Google everything. While that's a testament to no longer needing 40 lbs of software manuals on the shelf, it also means that we sometimes overlook the simple.

So, in the spirit of working with our organic brains, here are the 7 things to troubleshoot when your theming changes aren’t showing up in Drupal 7.

  1. Saved the files?

  2. Cleared the cache? Started the preprocessor?

  3. On the right URL?

  4. Clicked on appearance?

  5. Checked the logs?

  6. All the files are using the right theme name?

  7. Syntax errors?

Grab a full prettified cheatsheet with expanded tips on how to do each of the 7 things by joining the mailing list!

In addition to the cheatsheet, you'll get updates (think discounts) on my upcoming Custom Framework in Drupal book, and a short Refresh Your Drupal Design 101 course! In case you missed it, get all of that awesomeness here.

What would you add to this list? Ping me @sarah__p on twitter.

Image Irish Hands by Alejandro Escamilla

Jan 28 2015
Jan 28

Docker has quickly become the favorite virtualization tool for many people including myself. A few months ago we were discussing various technical goals across our project and things started to come together pointing to a basic docker framework to facilitate our development processes. This basically sums up our wish list:

Our Goals

  • Faster developer sandbox set up to get started on projects sooner.
  • Consistent software stack across developers, testing infrastructure, and production.
  • Ideally, a basic tool set that would work for both our new projects and our maintenance sites.
  • Start using the cool new trendy docker.

Container configuration with fig

One challenge is to have portable configuration for the building, starting, and stopping of a project's containers. The fig project provides an elegant solution and the configuration is in YAML files, which we in the Drupal community should be getting used to now with Drupal 8. The fig.yml defines your containers, ports, mount point, and how they link together. Maintaining a fig.yml file in our project repositories allows us to do things like add an Apache Solr container with ease.
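As a rough sketch of what such a file can look like (the service names, image tag, ports, and paths below are illustrative, not this project's actual configuration):

```yaml
# fig.yml sketch: a web container linked to a mysql container.
web:
  build: .docker/web      # Dockerfile directory for apache/php
  ports:
    - "8080:80"           # host port 8080 -> container port 80
  links:
    - db                  # makes the db container reachable as "db"
  volumes:
    - .:/var/www          # mount the project root into the container
db:
  image: mysql:5.5        # stock image from Docker Hub
  environment:
    MYSQL_ROOT_PASSWORD: root
```

With a file like this in the repository, `fig up` builds and starts both containers, and adding something like an Apache Solr container is just a few more lines.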

I was working on a collection of bash scripts and docker files and a fig.yml for one project and at some point it became stable enough to extract for general use. I brought these files together and made them available on github.

Introducing Bowline

Following the nautical and shipping metaphor, I chose the name bowline because it's a simple and basic knot with multiple uses. The idea is that Bowline ties it all together. Plus it reminds me of my sailing days when I could tie a bowline in less than 3 seconds, which is slightly faster than it takes to start the docker containers.

Code and instructions found at the git repo: https://github.com/davenuman/bowline

I have now had success with Bowline on both new projects and existing Drupal 6 and 7 projects. Just last week I also tried it out with Drupal 8, and I'm happy to report that it works just fine on Bowline as well.

Dockerfile flexibility

Out of the box, Bowline ships with two containers. One is for mysql 5.5, which is simply the default image from Docker Hub. The second is the web container, providing apache, php 5.4, and related software. The web container is defined within the .docker/web-5.4 directory, and the Dockerfile and supporting config files are based on the awesome work of the new Drupal testbot project.

Automation, running tests

Imagine your developers getting their local sandboxes up and running in a matter of minutes. This is now possible, facilitated by a few simple bash scripts. Bowline provides a template document intended for instructing your team on how to get set up: https://github.com/davenuman/bowline/blob/master/sandbox.md

Basically, they run build sync-db to get a copy of the database, build sync-files to get the site's uploaded files, then build import which does all the work of building the docker containers and importing the database. There is also a backup script which will save a snapshot of your database named after your current git branch which is handy for switching to another task while preserving your work. The run command is intended for running your automated tests. It assumes behat but you can modify it to run whatever testing software you use. The nice thing is that our developers are all running local behat tests on the exact same software stack as each other and as the test server. We have a Jenkins server with docker and have jobs configured to execute the build and run commands just like we do on our own machines.

Slaying File Permission Dragons

Anyone who has worked with a LAMP stack has bumped into file permission issues with uploaded files. Add a docker container to the mix, mounting your project files and serving them up as the apache user within the container, and there are lots of ways to mess things up. This dragon gave us some grief early on when starting to use docker in this way. We won the day by setting the apache user to run with the same uid as the docker host user. This way each developer has ownership of their own file uploads on their system. Here's the simple bash code that makes it possible:
# Set the apache user and group to match the host user that owns /var/www.
OWNER=$(stat -c '%u' /var/www)
GROUP=$(stat -c '%g' /var/www)
usermod -o -u "$OWNER" www-data
groupmod -o -g "$GROUP" www-data


Room for improvement

One tricky thing we found with docker containers is using drush in a complete way, particularly drush site aliases. For now we have "crush", a temporary workaround, but not too bad of a workaround actually. Crush is a simple bash function that calls drush as a command on the docker web container. We use crush to clear caches, manage features and such, and it is working well. However, it's not ideal, and I'd like to add an SSH server to the stack to allow for proper drush site alias usage.
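A minimal sketch of what such a wrapper can look like, assuming the web container is reached with docker exec. The container name and docroot below are placeholders for illustration, not Bowline's actual values:

```shell
# crush: run drush inside the web container instead of on the host.
# NOTE: the container name (bowline_web) and the docroot (/var/www)
# are assumptions for this sketch; adjust them to your own setup.
crush() {
  docker exec -i bowline_web drush --root=/var/www "$@"
}

# Example usage (requires the container to be running):
#   crush cc all
#   crush features-revert-all
```

Because the function just forwards its arguments, any drush command works through it, at the cost of losing site alias support until a proper SSH setup exists.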

There's always room for improvement. I'd like to find an elegant way to incorporate more developer tools such as Sass, Compass and debugging tools. Every project is different, but it would be nice for Bowline to have some basic Behat smoke tests built in. These things will hopefully be added to the Bowline project as we use it on our Drupal projects. And yes, pull requests are welcome.

Jan 28 2015
Jan 28

We just completed our third Drupal 8 project: SGG - Schweizer Gemeinnützige Gesellschaft. After relaunching our own website and helping out with Drupal.com, we are now excited to launch our first client website 100% on the next major release of our favorite open source CMS.

After we had built the community site Intergeneration and the voting platform CHymne using Drupal 7, we chose Drupal 8 for the relaunch of the corporate website of SGG. The compact feature set of the site allowed us to apply the strengths of Drupal 8 as of today, and so we created the association's new website relying entirely on Drupal 8 core functionality.

Building the new SGG website was a team effort; continue reading for the findings of each of us while we were creating the new site on the latest beta release of Drupal 8.

Boris was involved with site building and these are his thoughts:

In terms of site building, Drupal 8 is awesome. After a short time you are able to build almost everything out of the box. Sometimes you have to think around the corner to get your result. And sometimes you get stuck because of some nasty bugs.

But with a great backend developer at hand (Alex), we could also solve some issues during the creation of our project:

  • Building content types is much more powerful with Drupal 8 core: you can define view modes and form modes, and many field types, like e-mail and entity reference, have been integrated
  • Full translation capabilities: thanks to #d8mi you don't need a dozen i18n modules; Drupal 8 core ships with configuration, content & interface translation
  • WYSIWYG editing functionality is part of core and just works
  • Views in core: we can create dynamic listings
  • Enhanced block system: we can even reference blocks using entity reference now
Views in core

Alex did backend development for the SGG relaunch and this is his feedback on the project:

  1. A lot of things in D8 are plugins and the new plugin system is really cool: to extend a plugin, all you need is to extend its class and place your new class in the proper namespace.
  2. Even if core still uses some PHP magic, everything is documented well with PHPDoc, and the IDEs help a lot to write code.
  3. Hint: while Drupal 8 is still in beta, use core dev releases only for core development. I tried dev releases twice, and in both cases a lot of functionality was broken. If you need D8 in production, use beta releases; they are much more stable.
  4. Every piece of PHP code is covered with tests. That's super cool: if you want to understand how a thing works or what its purpose is, read its tests and you will learn a lot.
  5. Sad: there is no core stuff for testing JavaScript code. There are some contrib modules for that, but, anyway, it's not in the core, so JavaScript tests are not mandatory.
  6. Sad: beta-to-beta updates are not ready yet. That makes core updates really hard. The good thing is that it's target number one.
  7. The translation system is really good. All translation cases are handled right in the core.

Overall, I have really really good feelings about D8. Previously we said "Drupal way" about many coding things. Now it's the "right way"! Drupal core now uses bleeding edge technologies, and that makes work really interesting.

Blocks are plugins in Drupal 8

Kathryn did front-end development, this is what she would like to share:

Building a custom theme for Drupal 8 is almost an entirely different process than building one for Drupal 7. I think this is especially true for Amazee Labs since historically, we are "Panels people." Because the Panels module isn’t ready for Drupal 8, we're forced to make heavy use of template files. 

In my experience with Drupal 8 (and on this project in particular), working with Twig templates is much more concise and straightforward to code than a D7 .tpl file. As a developer with only basic PHP skills, the Twig syntax is easier to grasp. 

For the SGG theme, there are over ten custom Twig templates, most of which extend another.

Even though the design of the SGG theme appears simple, there were many instances where the content display required use of the Twig {{ dump() }} function to drill into variables. 

One thing I found frustrating during this process was sifting through the output of the dump’s results. Krumo formatting in D7 is so nice and tidy, while the D8 output is a jumbled mess, even after wrapping it in a <pre> tag.

To work around this, Kint is your new best friend. You’ll need to download the 8.x version of the Devel module and enable it along with Devel Kint. Include {{ kint() }} in your template file and voilà - nested arrays that won’t make your head spin around.

I could go on, but the gist of it is: mastering Twig continues to be my number one priority for Drupal 8. The SGG Drupal 8 project was no exception.

A sneak peek into Drupal 8 Twig templates

And this is the result: http://sgg-ssup.ch

Creating web sites with Drupal 8 is possible today. You certainly have to be aware of constraints regarding not-yet-upgraded modules and account for some core bugs along the way. On the other hand, working with Drupal 8 already feels right: best practices from back-end to front-end development have been incorporated, and the site building experience is really solid.

Step by step we will approach bigger client projects with Drupal 8. Interested in a future proof website? - let's start a project together.

Jan 28 2015
Jan 28

"OMG! You've got a responsive website!"

Once something to brag about, this is now old news. Of course it's responsive - that's standard practice now, right? Unless your site is not responsive, in which case, you are probably losing visitors and money. And now that Google has started adding "mobile friendly" tags to its search results, you can be sure it'll soon be an SEO bonus to have a responsive website.

The various resources coming up on my searches can't seem to agree on exactly what percentage of internet users are mobile internet users, but they do all agree on this: mobile is big, mobile is growing and most users don't use one device exclusively. This means that the same people are visiting the same sites on different screen sizes and expect a very good user experience each time.

Think about a typical day.

Your alarm sounds and you pick up your phone to turn it off. Seeing as you have it in your hand, you have a look to see what's happened in the Twitterverse overnight. At breakfast, you lazily cruise some of your favourite sites on your tablet. On the bus, you're back on your phone, reserving cinema tickets for tonight and finishing off some of the tasty articles you scanned over breakfast. Now in work, you've got your big desktop monitor, doing some research. Tired of flicking between browser tabs, you arrange your windows with three browser windows side-by-side. Home-time comes and you're back on your phone, in a queue, whilst waiting to be served some fries before the cinema. You hit home just in time to stick the phone on charge before its tired battery dies altogether.

Imagine the annoyance if you hit a site that does not take account of your device, that does not adapt to your view. You're busy: you've probably got other things to do. There's usually another site that does what you want. If it's responsive, you'll probably end up doing your business there.

This scenario is not too far-fetched. Modern life is connected. People are on the go whilst they are on the go, and two things they really dislike are having to think whilst doing something else, and having to wait whilst waiting. In much of our typical day, there's more than one thing going on: breakfast and surfing, Twitter and queueing for fries, travelling and booking tickets. The attention of the user in each case is split. They are distracted, and they are not happy if they have to struggle to use their sausage fingers to navigate tiny icons and then stare at a blank screen whilst the page fails to load.

Be nice to your users: you want them to come back and to use your service. In order for that to happen they have to feel good about doing it. A poor mobile experience is not going to make that happen.

People often worry that responsive sites are more expensive. They don't have to be, provided responsiveness is considered from the design stage. A design can facilitate it or hinder it, and that is going to impact both the budget and the bottom line.

Most of my work lies in supporting existing sites. I have actually found it easier to maintain responsive sites than fixed width sites, because of this: when adding new features, typically a responsive site is built in such a way that the new feature will just work. This is because both the designer and the developer have understood how each element interacts with its neighbours and have thought about how that interaction will play out over different screen widths. Fixed width or partially adaptive sites inevitably cause layout problems, as they facilitate lazy CSS. You don't have to be a CSS ninja to fix a width, but you really have to think when creating a fluid grid with breakpoints.

Who needs a responsive site? 

Everyone. Users need to use them because it makes surfing on a mobile device suck less. Site owners need to own them because it makes users happy and increases conversions. If you are thinking of getting a site built, make sure it's responsive. If your agency is not providing responsiveness, ask yourself: are these guys cowboys? If you plough ahead and get a non-responsive site, it will lose you money and you will still inevitably come to ask the dreaded question: "Quick one for you - can you make my site responsive?".

Have a look at the analytics for your own site. All those users you thought would never move away from IE6? They're now using iPads.

Mobile is real. But what is it? 

At its simplest, a responsive site is one that changes depending on the screen width of the viewing device. This means that you don't have to pinch-zoom to read the main content on a page, and you don't have to struggle to deal with fly-out menus or wonder how on earth you're going to navigate the site that relies on 'hovering' over the main navigation to activate the sub-menus.

You can go further down the rabbit hole and talk about adapting the content depending on device or location, or talk about serving different images dependent upon device or connection speed, but the basic responsive site is just an adjustment of presentation to better suit the viewing tool.

If you're still not convinced, have a look at this presentation from Luke Wroblewski at Drupalcon Denver. It is well worth kicking back and enjoying this excellent talk on all things mobile and web.

So, only one question remains:

Do you want a responsive site?

Yes, I want a responsive site!

Jan 28 2015
Jan 28


We’ve been experimenting with monthly team sprints at ThinkShout over the last year with varied levels of structure and outcomes. This month, we decided to take a step back, reevaluate our goals, and reimagine our sprint process. And, we moved it to a Thursday. A bow-tie Thursday.

Previously, these sprints were loosely structured around a topic or technology, such as Twig in Drupal 8. Suffice it to say, they were a lot of fun and very exploratory, but they weren’t the most engaging for everyone on the team. This time around, we decided to collaborate on a single initiative - in this instance, a product - that would benefit from the skills and perspectives of everyone in the company. Consequently, we decided to rally around RedHen Raiser, our new peer-to-peer fundraising distribution for Drupal.

Introducing RedHen Raiser


RedHen Raiser is designed for building peer-to-peer fundraising websites, like the sites you see for marathons and walks, where a fundraising campaign is made up of myriad individual and team pages, and can be customized by the participants for fundraising amongst their respective communities, while remaining connected to the larger campaign.

As the name suggests, RedHen Raiser is built on top of RedHen CRM, including the RedHen Donation and RedHen Campaign modules, and it’s chock full of awesome:

  • Easy Campaign creation so site visitors can join right away by creating their own Team or Individual fundraisers.

  • A beautiful, consistent fundraising experience that is based on inherited display values from the larger Campaign.

  • Goal progress widgets including thermometers, leaderboards, etc.

  • Mini-blogs for Campaigns and Fundraisers via Update content type.

  • Ability to create and maintain different pages for different fundraisers with a single account.

  • Automated start and end dates.

  • Commerce-readiness - just add your payment method and go!

  • Single-page donation forms via RedHen Donation.

  • Built using established modules with simple UI (Views, RedHen, Context, etc) for easy customization.

It’s ThinkShout’s latest offering in a suite of nonprofit engagement building blocks that we’ve been developing, and was initially developed for the Capital Area Food Bank of Washington, DC. RedHen Raiser competes feature for feature with top software as a service (SaaS) peer-to-peer fundraising platforms, such as TeamRaiser, CauseVox and Razoo.

As a result of our work with this client, we were able to release a very rudimentary version of RedHen Raiser on Drupal.org that would provide a basic starting point to other developers interested in building a peer-to-peer fundraising tool. The product is also a huge win for CAFB of DC, simply because they were able to reap a huge dividend on their initial investment by getting these improvements for free.

Involving the Full Team in One Sprint


As an open source product, RedHen Raiser presented us with some interesting opportunities to engage more than just our engineers in the sprint process, and it certainly needed a lot of love on a lot of fronts. Leveraging the different interests and expertise of our 18-person company, we split into five teams:

  • Dev Ops - this team focused on deployment infrastructure, build processes, and automated testing;

  • Bug Fix & Feature Dev - team members spent the sprint day working on the development backlog;

  • UX - the User Experience team worked ahead of the feature development team to identify and sketch out new features and enhancements;

  • QA - the Quality Assurance team was made up of our project managers acting as "product owners;"

  • Community Engagement - this team, consisting of our sales, marketing, and operations staff, was tasked with documenting the sprint and sharing our contributions with the wider Drupal and nonprofit technology communities.

It’s worth noting that the quality assurance team and the community engagement team came together for the first half of the sprint for an in-depth training on the Drupal contributed modules and components underlying RedHen Raiser. Ironically, we often get so busy building these sorts of tools for our clients that we don’t stop to educate our own "non-developer" team members on how stuff works. By taking this time to dive into the nitty gritty with our project managers, marketing and operations folks, we create better advocates for these solutions and help ensure that everyone in the company feels like a contributor to our success.

Planning for the Sprint


As ThinkShout has grown, the need for sprint planning has grown with it. Back when we first started these sprints, we could fit our entire team around a single table (covered in pizza boxes and beer) and call out development tickets we each needed help with.

Now, with a team of 18 working together from 11am to 5pm, these sprints take a bit more planning - to say nothing of balancing the opportunity cost of investing a collective 108 hours of non-client work into a single week. To keep things running smoothly, we’ve taken a more project-planning-esque approach to our sprint days:

  • Scheduling in advance: The date and time of the sprint is scheduled a month in advance. We used to just stick with the last Friday of the month, but found that this sometimes excluded certain team members on deadlines or vacation. Now, we coordinate a bit more tightly to help ensure participation of as many team members as possible.

  • Laser focus: the focus of the sprint is announced to the team three weeks in advance. This gives the team time to think about stuff they want to work on, and add to the feature backlog in the weeks coming up to the sprint.

  • Pre-sprint planning meetings: The department leads meet a week before the sprint to form teams and structure the sprint agenda, and prioritize the development/feature backlog two days in advance of the sprint.

  • Pre-sprint presentations: The week before the sprint, we do a short, company-wide presentation on the sprint topic at our weekly staff lunch. This helps energize the team and sparks knowledge sharing in the lead up to the sprint day.

  • Formally "opening" and "closing" the sprint day: As our sprint commences, we kick things off with a quick, all-staff scrum. More importantly, we pull the team back together at the end of the day for each sprint team to present (and celebrate!) what they’ve completed.

Outcomes of Our RedHen Raiser Sprint


So what does it all mean? This new approach to our team sprints resulted in just shy of 100 commits on RedHen Raiser and the underlying modules that power the distribution. We published a new release of RedHen Raiser, RedHen Donation and the RedHen Campaign modules - as well as a release of our base RedHen CRM suite.

One of the biggest wins to come out of the sprint is automated tests powered by Behat. Tests are triggered with every commit to GitHub and run on Travis CI. At this point, test coverage is a bit limited, but the foundation has been laid for complete test coverage of RedHen Raiser, a critical factor when organizations are evaluating which software to use.

To top it off, we cleaned up a few RedHen project pages on Drupal.org and began working on a RedHen-specific QA testing plan. We also reached out to the RedHen open source community to let them know what we were up to and how folks can continue to get involved. Most of all, we are proud to say that this effort is a huge contribution to the nonprofit tech community, in that it provides major improvements to a powerful tool that can be leveraged for free - and has the documentation to support it!

All in all, the ThinkShout team came together in a big way, and accomplished much more than we could have if we had remained siloed in our approach. We had a lot of fun, drank some beer, ate some good food, and got to collaborate as a whole team on something really cool. We’re really looking forward to the next one!

Jan 27 2015
Jan 27

In An introduction to Git Part 1, you learned a little about what Git and Version control is. You also installed Git on your computer. You are now ready to configure Git and set up your first Git repository. Whether you are using Git for the first time on a Drupal project, or for the 100th time, creating your Git repository always follows the same simple steps.

Git is one of the secrets from my 5 secrets to becoming a Drupal 7 Ninja ebook, and much of what follows is taken from it. To learn more about Git and the other secrets, please consider purchasing the ebook or signing up for the 5 secrets email newsletter, which will give you more information about the 5 secrets.

Configuring Git from the command line

The first step is to open up your command line. You may want to create an empty test folder somewhere on your computer so you can test a few basic Git commands. I will start with a folder on my desktop called git_test. You will want to make sure you are inside that folder on your command line. If this is your first time using Git, you will likely need to configure your basic Git settings.

  • git config --global user.name [name]: configures the username for Git to use for the currently logged-in user.
  • git config --global user.email [email_address]: configures the email address for Git to use for the currently logged-in user.

You will first want to configure your name.

Git Configuration: Username

Then you will want to configure your email address.

Git Configuration: Email

You should now have your basic Git configuration set up. You are now ready to create your first Git repository.

Creating Your First Git Repository

  • git init: creates a Git repository in the current directory.
  • git init [folder]: creates a new directory and a Git repository inside it.

Creating your first Git repository is incredibly simple. Just run the git init command on the command line from within your project folder.

Git Init

If you have not created the git_test folder yet, you can create the empty directory and the Git repository in one step.

Git Init Folder

Either of the above commands will create a new empty Git repository for you to start working with. It does not get much simpler than that. One simple command to Git you started.
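Put together, the configuration and init steps from this post can be sketched as one shell session. This assumes Git is installed; the name and email are example values, and the work happens in a throwaway directory so your real settings are untouched:

```shell
# Work in a throwaway directory so nothing on your machine is touched.
cd "$(mktemp -d)"

# Create the directory and the Git repository in one step.
git init git_test
cd git_test

# Configure identity. The post uses --global; here we set it locally
# (per-repository) so this example does not alter your global settings.
git config user.name "Jane Developer"
git config user.email "jane@example.com"
```

After this, the git_test folder contains a hidden .git directory, which is where Git stores everything about the repository.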

Intro to Git Part 2 Summary

You now know how to configure Git from the command line and create your first Git repository. In part 3 of An introduction to Git, you will learn how to add files to your new Git repository.

Jan 27 2015
Jan 27

Welcome UX Design experts. We want to hear your latest thoughts, ideas, techniques and analysis on the latest state of the web. If you’ve got a burning idea you want to share about how to do things better, we want to hear about it.

We’re looking for talks on:

  • UX Process and Techniques
  • Strategy and Planning
  • Content Strategy
  • Visual Design
  • Usability
  • Integrated & Holistic Marketing methodologies

As the web matures, so does the field of User Experience. It’s more integrated than ever. Everything affects the user experience, from how a design makes a user feel to how fast a page loads. Each attribute ignites real emotional responses, be they good or bad.

For 2015, we’re seeking talks that bridge the complexities of the field of UX. For instance, how do we create better digital experiences with third-party integrations? How do businesses effectively manage their digital presence across teams? What is the right balance between structure and flexibility in how we build our CMS solutions?

Tell us what you’re passionate about. Share your ideas today by submitting a talk for DrupalCon LA.

Nica Lorber
User Experience Design Chair
DrupalCon Los Angeles

Jan 27 2015
Jan 27


A few months ago I was asked to help manage our company blog. As a busy Drupal developer, I had little time to post all of our new blog content on every social media site, so I decided to automate this so I could spend more time on client projects.

My first step was to look around and see what was currently available. Although there were some options, I found them to be a bit heavy for my application.

One of the things I was looking for was the ability to choose which social networks our content would be posted to. For example, I wanted to be able to post one page to Facebook but not Twitter, or post another to all social networks.

So I decided to build my own solution and here is how I did it (or just download the code from github).

Required Drupal Modules:

  • Taxonomy
  • Views
  • Features (if you're downloading the code)

Add New Taxonomy Vocabulary

The first step is to create a new taxonomy vocabulary called “Social Networks.” This will allow us to associate a social media site with our content. Next, we will want to add a few terms, like Facebook, Twitter, Google Plus and LinkedIn.

Now that we have our taxonomy set up, we need to add a new field to our blog content type. To do this, add a new term reference field called “Post To Social Networks.” Although you can use whatever widget you think would work best, I went with the check boxes/radio buttons option. Make sure to select the “Social Networks” vocabulary and the “unlimited value” option.

You will probably want to make sure this new field is not displayed on the actual post, so go to the manage display page and set this field to “hidden”.

Next, we will need to create some dummy content to work with or edit some existing posts. Once we have some content tagged with our social media sites, we then turn to views and create a new feed.

Configure Views and Social Feeds

Next, we will need to add a new relationship to our feed (we will need this for our argument). Add the “Taxonomy terms on node” relationship and select the vocabulary that we created in the first step, leaving the other settings at their defaults.

Now for the fun part: we need to add a contextual filter. Add the “Taxonomy term: Name” filter and make sure to use the term relationship that we just set up. Under “When the filter value is NOT in the URL”, select “Provide default value” and choose “Taxonomy term ID from URL”.

Now that we have our contextual filter set up, we need to set the feed path. I decided to use social-feed/%/rss.xml. The % is the argument, so it will be replaced by the term name (facebook, google, etc.).
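To make the argument concrete, here is a quick sketch of the per-network feed URLs this path produces. The domain and the term names are placeholders, not real addresses:

```shell
# Each term name replaces the % argument in the feed path,
# giving one RSS feed per social network term.
base="http://yourwebsite.com/social-feed"
urls=$(for network in facebook twitter google linkedin; do
  echo "${base}/${network}/rss.xml"
done)
echo "$urls"
```

Each of these URLs is what you will later hand to a service like Hootsuite, one feed per network.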

Send Feed To Social Networks

So now that we have our vocabulary and RSS feeds set up and working well, we need to be able to send these feeds to various social networks. We could make our own API calls to each social network, but that would be a bit too complicated for this tutorial. Instead, let's use a service that's already built.

Some options to choose from include If This Then That (IFTTT), Buffer, and Hootsuite. My decision was easy: our company already had a Hootsuite account, so I logged in and went to work. This part is fairly easy; go to settings, select “RSS/Atom” and add a new feed.

Now add the feed URL that we configured earlier using views (http://yourwebsite.com/social-feed/facebook/rss.xml), select the network to send the feed to (Facebook, Google Plus) and you're done. You will want to repeat this step for each social media site.

Now, if you have everything set up properly, when you add a new blog post it will be posted on all your social networks. Finally, you can breathe easy and thank me for all the time I just saved you and your clients!

Jan 27 2015
Jan 27

As a front-end developer working with Drupal, I often need to create custom Ctools layout plugins. Ctools is an essential suite of tools for Drupal developers and the basis of many popular modules, including Views, Panels, Context and Display Suite. Its layout plugins provide a flexible alternative to Drupal’s core page region system.

Creating a custom Ctools layout plugin for Drupal is actually quite easy. You define your layout in a .inc file, create the HTML structure in a template (*.tpl.php), style it in a stylesheet (*.css or *.scss) and provide a thumbnail image so the site administrator has an idea of what it looks like. Each of these files follows similar but slightly different naming conventions. This can be tedious when you need to create a number of custom layouts for a project, as we often do at Aten.

My previous workflow looked like this.

  1. Find another layout plugin – either an existing custom layout or one from the Panels module.
  2. Copy that folder into the plugins folder in my module or theme.
  3. Rename the folder to match my plugin.
  4. Rename all the files to match my plugin’s name, using the appropriate naming convention and file extensions.
  5. Edit the .inc file to point to the appropriate files and change the array of regions to match the new layout.
  6. Edit the template file to match the newly created regions.
  7. Write the CSS to style the layout.
  8. Create a new thumbnail image.

A lot of these steps consist of copy, paste, find and replace – the kind of stuff that’s better suited for computers. That's where Yeoman comes in. Yeoman is a scaffolding tool. It's most often used to quickly create boilerplate code that can be customized to your needs.

I recently published a Yeoman generator that automatically creates a ctools layout plugin based on a few simple settings. Now my workflow looks like this:

  1. Create a directory for my plugin.
  2. Type yo ctools-layout.
  3. Answer questions about my layout.
  4. Add any extra markup to the template file.
  5. Write the CSS to style the layout.
  6. Create a new thumbnail image.

The new workflow eliminates the tedious tasks and allows me to focus on the code that makes a given layout unique: markup, style and a thumbnail.

Here’s how to use it.

Install Yeoman. Note: Yeoman is a Node.js tool, so you need to install Node.js in order to use it.

To install Yeoman type the following into your terminal:

npm install yo -g

The -g flag installs this package globally. Doing so allows you to run the yo command outside of an existing node project. A common Yeoman use case is scaffolding out a brand new project, after all.

Note: Depending on your environment, you may need to run these commands with sudo.

Now you have the general Yeoman application. But to be useful, you need some generators.

npm install generator-ctools-layout -g

The ctools layout generator assumes you already have a custom module or theme that you are adding a layout to. For this example, we'll assume the custom layout is in our theme. From your theme's root directory, create a new directory for your layout plugin, change to your new directory and run the ctools-layout generator.

mkdir -p plugins/layout/landing_page
cd plugins/layout/landing_page
yo ctools-layout

Yeoman will prompt you with a few questions about your layout, such as its name, the module or theme it lives in, and the regions you want to include. After answering the questions, Yeoman creates all the files needed for a working layout.

The ctools layout generator makes no assumptions about what your layout looks like or the actual CSS styles and markup needed to make it functional. That's your world! It's completely up to you. It simply takes your list of regions and adds them to the layout's .inc file and to the template file in which you can add the appropriate markup.
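To make that concrete, here is a rough sketch of what a generated .inc file might contain (file names and region names are illustrative; the generator fills in the values from your answers to its prompts, following the standard ctools layout plugin structure):

```php
<?php
// plugins/layout/landing_page/landing_page.inc (illustrative sketch).
// A ctools layout plugin is just a $plugin definition array; ctools
// discovers it via the plugins directory declared by your module/theme.
$plugin = array(
  'title' => t('Landing Page'),
  'category' => t('Custom layouts'),
  'icon' => 'landing_page.png',   // the thumbnail shown in the Panels UI
  'theme' => 'landing_page',      // maps to landing-page.tpl.php
  'css' => 'landing_page.css',
  'regions' => array(
    'header' => t('Header'),
    'content' => t('Content'),
    'footer' => t('Footer'),
  ),
);
```

Each key in `regions` becomes a variable available in the template file, which is where you add your markup.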

Similarly, the generator will add a placeholder .png thumbnail of your layout. The generator has no idea what you have in mind for your layout, so you'll want to create your own thumbnail and save it over the placeholder. The thumbnail is important as it gives the end user a good idea of what the layout looks like. I've shared a Photoshop template that I use for creating layout thumbnails.

I hope you find this useful. If you run into any problems or have feedback, please create an issue on GitHub.

P.S. In case Yeoman isn’t your thing, there is a Drush plugin that has similar functionality and works for more than just layouts.

Jan 27 2015
Jan 27

I just updated all 16 prior posts in this series with up-to-date screenshots and text over the weekend, and here is the new post!

In the introduction to content and configuration translation piece we discussed what is considered content in Drupal 8. This is a somewhat misleading term because custom blocks, custom menu items and even user entities (user profiles) are considered content in terms of their implementation.

Content is always stored as entities. The main difference between configuration and content entities is that configuration is usually created on the backend (think views, vocabularies, etc.), while content is usually created on the frontend (think free-tagging taxonomy terms, comments, blog posts). This is not a black-and-white distinction, but it helps to think in these categories. The other key differentiator is that content entities usually get configurable fields: you can add new fields to user profiles, taxonomy terms or comments. Again there are exceptions; custom menu items, for example, cannot have configurable fields in core. Finally, there are even content entities that are never stored: in Drupal 8, contact form submissions are content entities that live only until they are sent via email. For this tidbit we are concerned with content entities that are stored and multilingual.

In the fifth tidbit, we covered language assignment for content entities. That showed the Content language page under Regional and language configuration, which lists all the content entity types and lets you configure language defaults and language selector visibility for the bundles of each. That is very useful on multilingual sites where you don't translate posts, like a multilingual blog where you sometimes post in one language and sometimes in another.

If you also need content translation support, all you need to do is to enable the Content translation module and have multiple languages configured. The same screen can be used to configure content translatability that you used to configure content language defaults. Under that condition, the menu item changes from Content language to Content language and translation. Bingo!

This screen now lets you toggle translatability as well, on an entity type and bundle (subtype) level. So you can configure nodes per content type, taxonomy terms per vocabulary, custom blocks per type, etc. Configuring a bundle to be translatable then opens up a whole set of configuration on the field level. Built-in (base) fields are supported, so you can translate the title of nodes and the name of taxonomy terms, for example. "Translating" publishing metadata like author, creation date and change date lets you keep accountability on translations. Publication status tracking per language lets you implement workflows for translations, so you can keep some languages published while others are not (yet). Promotion and stickiness per language let you keep different metadata per language variant. You can of course uncheck the ones you do not intend to keep different per language.

Going further down on the field list, you'll notice that image fields even support translation on a sub-field level. That means that by default they offer to translate alt text and titles but keep the image itself the same across translations. This makes sense for product pictures for example. If you also need to have separate files per language, you can configure that too.

Finally, the article type also has a taxonomy tags reference field, which stores all related taxonomy terms. By making this field translatable, you can keep a different list of related taxonomy terms per language (Case A). It is also possible that you only want to translate the terms themselves, in which case you should uncheck this box and set the tags vocabulary terms to be translatable on the same page (Case B). That would mean you keep the same tags for all languages but translate the terms themselves. You can do both at once also (Case A+B) but that may be more confusing than useful.

If you have built multilingual sites with Drupal 7 that had content translation, you may notice this model is a refined version of the Drupal 7 Entity translation module (without requiring awkward companion modules like Title), rather than of the Drupal 7 core Content translation module. The core module in Drupal 7 keeps separate copies of nodes and relates them together in a translation set. In Drupal 8 this feature is not available; instead you configure translatability on a field (and in some cases subfield) basis. However, if you configure all fields to be fully translatable, you can essentially reproduce the Drupal 7 behavior. Where the Drupal 7 core solution used different entity identifiers, Drupal 8 keeps the same entity identifier with different language variants.
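As a concrete sketch of that last point (this assumes a Drupal 8 environment with the Content translation module enabled and `$node` being a loaded, translatable node; it is not standalone code):

```php
<?php
// Translations are language variants of one entity, not separate entities.
if (!$node->hasTranslation('fr')) {
  $fr = $node->addTranslation('fr');
  $fr->setTitle('Titre en français');
  $node->save();
}
$fr = $node->getTranslation('fr');
// Same entity identifier, different language variant:
// $fr->id() is identical to $node->id(), while $fr->language()->getId()
// is 'fr'.
```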

The biggest advantage of the Drupal 8 solution is that you can configure translation for exactly as much as needed and no more. With one unified solution for translating only some fields or all fields, (contributed) modules only need to deal with one system instead of two. With translations stored under one entity, modules that don't want or need to deal with multilingual scenarios can still treat the entity as one; there is no need for special translation set support. That is far more flexible than Drupal 7, and best of all it applies to all kinds of content, not just nodes. It is future-proof for modules you enable, like Rules and Commerce.

That's it for the basics. Next, we'll cover the user interfaces, permissions and basic workflow supported in core.

Issues to work on

  • Unfortunately language-aware entity types that cannot be translated (like Aggregator feed or File) will not show up in the master list for configuration. That is an oversight. Help at https://www.drupal.org/node/2397729
  • It is also possible to configure translatability for entity bundles on their own edit screen and then their fields on the field edit screens respectively. This is a lot more tedious and error prone compared to the huge overview screen we covered in this article. It may lead to incorrect settings combinations, see https://drupal.org/node/1893596. It also leads to bugs such as https://drupal.org/node/1894596.
  • Miro Dietiker outlined some concerns with the in-entity translation system, some of which have since been resolved. The document is by far not up to date but would be interesting for anyone looking at possible technological challenges with the Drupal 8 approach in more complex environments: http://techblog.md-systems.ch/tutorial-howto/2012-06-drupal-8-multilingu...
Jan 27 2015
Jan 27
Tags: Consulting, dco, dcsp

DrupalEasy hearts AcquiaU

Having just completed presenting the Drupal career training portion of AcquiaU, we are anticipating great experiences for all ten students as they begin their eight weeks of rotations across three different business groups within Acquia. The past two months have been a whirlwind of teaching, learning and team building, which provided great insight into a forward-thinking approach to building Drupal talent, made possible by the commitment of Acquia.

We are pleased to have contributed to the new AcquiaU with the customization of our Drupal Career Online curriculum. I’d like to share some great lessons learned, as well as introduce the ten people who were lucky enough (luck favors the prepared) to be selected for this amazing program.

What is AcquiaU?

AcquiaU is the career track trifecta for newbie (and soon-to-be newbie) Drupalers that are selected to participate. The 10 participants each get a paying job, (awesome) training, and an experience-based opportunity all wrapped up in a nurturing micro-community. For Acquia, the in-house talent incubation program is a bold grow-your-own approach to the ever increasing demand for Drupal talent. Acquia - like most other large Drupal organizations - realizes it must look outside the community for new talent.

AcquiaU is designed to take people with the potential to be great Drupal site-builders, developers, and themers and train them into positions where they can start contributing to both Acquia and the overall Drupal community. Note the use of the word "potential" - this is a marked difference from many Drupal organizations that focus their searches on experienced Drupalists. Acquia's focus on potential comes from the top, and is one of the primary reasons AcquiaU exists.

More than 75 people applied for the 10 AcquiaU positions available. The fortunate ten are now paid employees of Acquia, tasked with learning everything they can about Drupal and Acquia's products, services, and culture. The program is divided into classroom Drupal training, which includes working in two teams to complete a major project, and on-the-job training, in which participants go through three rotations within various Acquia business areas. In 2014, Acquia approached DrupalEasy about providing and delivering the Drupal Career Online curriculum for the classroom portion of AcquiaU.

How is DrupalEasy involved?

We were contracted to help with not only the content and delivery of the AcquiaU Drupal curriculum, but also to assist with student selection and ongoing evaluation of student performance. Starting in October, we worked with Amy Parker, the Director of AcquiaU, in evaluating student applications, student interviews, curriculum planning, and overall planning for the classroom portion. Then, for the first three weeks of December, 2014 and first three weeks of January, 2015, I was on-site at Acquia's headquarters in Burlington, Massachusetts to provide the training.

How did we compress the 12-week Drupal Career Online curriculum into 8 weeks?

Our normal schedule for delivering our long-form Drupal curriculum has traditionally been 10-12 weeks. We had run our course five times in the years prior to AcquiaU, so compressing it into 6 weeks of classroom training (plus 2 holiday weeks) was going to be tricky. Granted, our 12-week course normally only meets a minimum of three half-days per week, but our students often spend 15-25 hours per week outside of class working on assignments and projects. We got lucky that the traditional holiday travel season fell right in the middle of the training. We were able to cover the first half of the curriculum in the first three weeks of December; then, over the holiday, the students were tasked with several online assignments as well as a major milestone for their team projects. These two weeks provided a welcome break from the high-intensity classroom training, as well as a chance to go back and review curriculum that had previously been covered. Following the holiday break, the second half of the curriculum was delivered.

The students

Over the course of about six weeks prior to the start of class on December 1, I participated in the interviews and selection of the ten students. We selected participants based on a number of factors, focusing on each candidate's potential rather than their experience with Drupal. The lucky ten have diverse backgrounds:

  • Steve Bresnick - instructional designer, English teacher, and web site developer.
  • Jaleel Carter - web site developer and webmaster.
  • Thomas Charging Hawk - currently pursuing M.S. in Media Management, web site administrator, and Acquia Support intern.
  • John Cunningham - computer technician and avionics technician with the United States Marine Corps.
  • Kerry DeVito - product assistant, freelance writer and web designer.
  • Matt Dooley - senior designer, web developer.
  • Elizabeth Mackie - communications, outreach web developer.
  • Colin Packenham - industrial designer.
  • Carl Watson - Drupal Association intern.
  • Doris Wong - Acquia UX intern, courseware developer.

Lessons learned

The most difficult part of the classroom training was probably its relentless nature. Every day the students were presented with new concepts - from a Drupal standpoint, as well as Acquia products and services. The standard Drupal Career Online 12-week schedule provides a lot more breathing room for students to digest, review, and explore concepts (something I refer to as "soak time"). From the start we built in as many "lab hours" as feasible for the AcquiaU students to focus on completing assignments, reviewing curriculum, or working on team projects; but it always seemed like there were never enough hours in the day.

Other than the repeated requests for time machines, students also asked for periodic, more formalized feedback. Typically during the Drupal Career Online program, I speak with students individually two or three times during the 12 weeks to talk with them about their progress, expectations, and areas of need. These conversations provide the student with some feedback, but also provide me confirmation that each student is progressing as well as I think they are. During the six on-site classroom weeks of AcquiaU, I wrote weekly evaluations for each student, but this information was not initially shared with the students. Based on student feedback, these evaluations will be provided to students, and in the future, I'll be providing weekly written feedback for all students of Drupal Career Online.

Over the past few years of writing, delivering, and refining the Drupal Career Online curriculum, I think we've found a pretty good balance of lecture, classroom exercises, demos, learning materials, and homework. Our curriculum has become more concise and focused, and builds upon previous lessons in a meaningful way. As we started putting together the Acquia products and services curriculum, we quickly found that 1-1.5 hour presentations on various topics were just about the right length. Any more than that and the topics often dove too deep into the weeds; any less and the students weren't getting much more than an overview. We also found that the order and timing of the Acquia content was more important than we originally thought.

Thank you, Acquia!

It was a great opportunity and experience to be able to participate in AcquiaU. Amy Parker and the rest of the learning services team could not have been more welcoming and supportive of our contributions to the program. AcquiaU is a unique program - where else can you go and get paid to learn Drupal? Additional sessions of AcquiaU are planned for 2015; check the (soon-to-be-relaunched) u.acquia.com web site for details. If you're not interested in a full-time training program (or moving to the Boston area), be sure to check out the next session of Drupal Career Online!


Jan 27 2015
Jan 27

I decided to write this article after reading The Decline of Drupal, or How to Fix Drupal 8 by Mike Schinkel. I am not a Drupal person at all, but what is discussed there is quite close to discussions I have had with my friends in the Plone community, and I am quite sure it is not restricted to those two CMSes.

The question here is how to preserve popularity (or maybe just approachability) when we decide to restrict hackability.

And just to make clear what "hackable" means, I will just quote Jennifer Lea Lampton (already quoted in Mike's article):

Back in the day, Drupal used to be hackable. And by "hackable" I mean that any semi-technical yahoo (that's me, btw) who needed a website could get it up and running, and then poke around in the code to see how it all worked. The code was fairly uncomplicated, though often somewhat messy, and that was fine. At the end of the day, it did what you needed.

A CMS story

In the beginning was the Hack

When a CMS is still young, it is light and fresh, not heavily structured, and hence it can be hacked in many ways. Hacking is probably even the official way to use it. And that is precisely what is fun and attractive about it. That is what might turn it into a success.

The first versions of Plone were so easy to hack that we had something we named TTW, standing for Through-The-Web: a fantastic feature of Zope (the Plone application server) that allowed you to write scripts, templates or even classes directly from the web interface. It was great because it allowed a very large audience to be productive with Plone.

I guess Drupal also had very attractive hacking possibilities, like Drupal hooks.

To hack or not to hack

But soon, some developers point out that such hacking is probably not a good idea. They say hacking might seem smart, efficient and productive, but that it is in fact the very opposite: it makes maintenance more difficult, it makes upgrades or migrations sometimes totally impossible, and it does not conform to programming standards and best practices.

And they are actually right. But saying so (and being right) is obviously not enough to make people stop hacking.

Nevertheless a gap appears, and new versions are not as hack-friendly as they used to be:

- "Hey, I cannot do that anymore!",

- "Right, but if you were doing it the right way, you wouldn't have to do it this way",

- "Mmmmokay...".

You shall not hack

A CMS is continuously evolving, its users expect it to be able to provide the cool new features invented on the web last week, and its developers want it to integrate the bright new frameworks invented on GitHub last month.

At some point, it is quite clear that offering hackability restricts its capacity to evolve, and endangers the system. To survive all the needed changes, it must rely on a strong and strict architecture.

And a version is released where hackability is banished (that's what Drupal 8 is about, right? I guess we did it with Plone 3, even if it is not that clear).

I want my hack back

This is probably a wise technical decision, and the core developers are very proud of it. The system is clean now, and we can confidently face our future challenges.

But it is also a very, very unpopular decision. A big part of the developers were using the hack way because they could not afford to invest the time to learn the straight way; some of them do not even understand why the so-called "straight way" is any better than their usual way.

By trying to make our CMS better, and hence more attractive, we made it unpopular. That is really unfortunate.

And if some of those disappointed people consider moving to another solution, ideally a brand new CMS still in its early happy-hacking age, I have bad news for them: there will be no new CMS (see my post Why CMS will not die).

Hacking is not a bug, it is a feature

The problem here is a severe misunderstanding.

Of course, from the development point of view, hackability can be considered a flaw and a danger, but that is not the proper point of view here. We must consider it from the usage point of view, and regarding usage, hackability is a very valid use case.

It is even a major use case, and our CMS must preserve it, or it will be endangered.

Nevertheless, I agree that banishing hackability from the core is a good decision. So how do we manage that?

That is simple: we just produce a clean/straight/unhackable core, and we implement hackability on top of it as a feature.

Implementing hackability means offering tools to deeply change the behaviour or the appearance of our system without messing with its underlying architecture.

That is what Plone proposes with the Diazo theming tool: theming required too much Plone knowledge and was unapproachable for non-Plone integrators, so we provide a theming proxy which dynamically injects a static design into any Plone page using a simple set of rules, entirely controlled from a nice web UI.

That is also my objective with Plomino: letting people easily create a custom application that works in their Plone site without learning complex frameworks.

But there are of course many other fields to cover.

It is not easy, because building a tool able to provide as much flexibility as code hacks is a complex challenge, but that is the only way to keep our CMS valid (and to keep it fun too), and hence to keep our audience.

Jan 27 2015
Jan 27

We worked hard for a few days and developed our own module called Private message with node.js.

Private message with node.js is a module based on the node.js platform that allows users to send and receive text messages instantly, without a page reload. It is built on top of the Private message and Node.js integration modules. What are the advantages of this functionality?

Probably the most important feature of such tools is speed. Messages that arrive immediately save both time and nerves. With our module, private messaging becomes more convenient, more full-featured, and faster. Node.js does not overload the server, which means the exchange of private messages is substantially accelerated.

Versatility is another advantage of the Private message with node.js module. After all, it is built on the powerful Private message module, a full instant messaging system for Drupal, which opens up opportunities to expand and improve this functionality (just consider its flexible API for developers).

Our module has a built-in API, which means that modifying, adding or extending anything is now much easier. The module can be adapted to almost any website; it fits social networking sites as well as portals where fast, easy communication between users needs to be set up.

We did not break the basics: the Private message API can still be used.
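As an illustration (a sketch assuming Drupal 7 with the Privatemsg module enabled; it is not standalone code, and the exact signature may vary between module versions):

```php
<?php
// Send a private message programmatically via the Privatemsg API;
// the node.js layer then pushes it to the recipient without a reload.
$recipient = user_load_by_name('alice');
$result = privatemsg_new_thread(array($recipient), t('Hello'), t('How are you?'));
if (!empty($result['success'])) {
  // The message was stored; real-time delivery is handled by the
  // Node.js integration.
}
```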

We have enumerated the main advantages of Private message with node.js. Next, let’s consider the useful features of our module:

  • Users can receive and send messages without reloading the page.
  • Users receive notifications about new messages on every page, without reloading it.
  • Having received a message notification, a user can choose either the dialogue page or a mini-chat for the conversation.
  • Users can have several mini-chats open at a time.
  • Notification sound settings are available. Note that this requires some additional tools (namely the Libraries module and the audiojs library).
  • A typing indicator is available: users can see whether the other person is writing a reply.

You can also try it out on a demo site here: http://example.internetdevels.com/pmnodejs/

Our Drupal development team does its best for your convenience! Enjoy communication!

Jan 27 2015
Jan 27

Let’s face it, we have a problem. It doesn't matter how powerful what you build is if no one wants to use it. How can the modern CMS hope to compete with simple drag and drop, one-click authoring services that can produce the majority of websites people want to build? Do you really think the answer to this can still be found on Drupal’s edit form or with the panels ecosystem? Will open source CMSs fade from popularity because we continue to ask human beings to design content with a WYSIWYG? Well, we aren't waiting to find out...

CKEditor in core is not going to be enough to make authorship shine in D8 and beyond. We need to focus on the thing that everyone seems to work on last, and that's the authoring experience. Members of the Penn State Drupal community are organizing a sprint at DrupalCamp NJ which will focus on a radical overhaul of the "page" authoring experience in Drupal. If you have ideas, you should seriously consider joining us in this endeavor.

Our starting point can be seen in this InvisionApp prototype. We are searching for that sweet spot between simplicity and flexibility, and we think we might be close.


  • Create an authoring experience that rivals the “web-site-tonight” platforms

  • Build against JSON endpoints from a nearly static HTML interface; making it "headless"

  • Create a flexible yet simple data model via Drupal entities

  • As part of the push towards "headless" development, we want to be able to support Drupal 7, 8, Backdrop and theoretically anything that would supply the IA and RESTful endpoints

  • Not be locked into any specific modules for production (though we will support RESTWS first)

  • Build upon the ideas of Panels and Omega (3.x) while sticking to responsive design principles in line with Zurb Foundation, Bootstrap and other popular frameworks

It is a moonshot born out of a three-plus-hour, wall-wide whiteboard jam session between me, Michael Collins (@_mike_collins), and Michael Potter (@hey__mp). We are spearheading the development of a prototype built in either AngularJS or ReactJS, talking to custom D7 "element" entities exposed via RESTWS endpoints.

This is the kick off development sprint and we hope to see you there with your feedback, ideas and code :)

Jan 26 2015
Jan 26

It’s hard enough trying to find cool websites in general, let alone cool websites made using Drupal. I’ve managed to find 5 that I’d like to highlight below:

Anthelios by La Roche-Posay

A standalone product page disguised as an educational site about UV protection by La Roche-Posay. It’s great seeing widely different uses for Drupal. It’s mostly a single-page design using a lot of creative overlays and animations as you scroll (and be prepared to scroll a lot to get through all of it). The art style is cartoony, yet inviting, and goes very well with the content.


LiveAreaLabs

I’ve always been a sucker for clean, minimalistic site design. LiveAreaLabs, a design firm based out of New York / Seattle, has made just that. Clean lines, liberal use of white space, smooth page and section transitions, and one of the more unique “hamburger-style” menus I’ve seen used on a responsive website. Everything about this site is just cool, even down to the way the logo adapts in color as you scroll across different background colors.


Yet another clean, minimal site design, by some creative folks in Poland. Their portfolio layout is great: they let their large visuals do all the communicating, with just a list of services provided for each project. I find this a lot more powerful than a bunch of words and paragraphs. They’ve also got a cool animated-dot loading gif for page transitions. One minor gripe: I wish the gif looped more seamlessly.

Tyler School of Art

Here’s a website that has a lot of content but formats it well with interesting shapes, colors and textures. The site is fully responsive and carries its design language well across all resolutions. I had no trouble navigating the massive site and never felt lost, and I really dug how they treated headings and typography. Kudos to the designers of this site.

82nd & Fifth

A spin-off from the Metropolitan Museum of Art, a showcase of 100 works of art curated over the course of a year. There are weekly episodes of highlighted art slideshows and intuitive uses for pinch & zoom for viewing the art up close. The site is also available as an app for iOS in 12 different languages.

Jan 26 2015
Jan 26

The constant struggle between content editors and web developers:

  • Content editors want to embed rich media, callouts, and references to related content anywhere within their WYSIWYG.  They love the Body field and want more buttons added to the WYSIWYG.
  • Web developers want to add additional fields for media, attachments, related content to support rich content relationships and access control.  Web developers hate the Body field and wish they didn’t need a WYSIWYG.

In the latest 2.30 version of Open Atrium, we attempt to help both content editors and web developers with a new approach to building rich story content using the Paragraphs module.  Rather than having a single WYSIWYG body, content editors can build their story using multiple paragraphs of different types and layouts. The order of these paragraphs can be rearranged via drag and drop to create long-form content.

Site builders can add their own custom paragraph types to their sites, but Open Atrium comes with four powerful paragraph types “out of the box”:

(1) Text Paragraphs


The simplest paragraph type is the “Text” paragraph.  It works just like the normal Body field, with its own WYSIWYG editor, but an additional Layout field is added to control how the text is rendered.  There are options for multiple columns of wrapping text within the paragraph (something nearly impossible to do with the normal Body field), as well as options for left- or right-floating “callouts” of text.

(2) Media Gallery


The “Media Gallery” paragraph handles all of the images and videos you want to add to your story.  It can replace the normal Attachments field previously used to add media to a document.  Each Media paragraph can contain one or more images, videos, or files.  The Layout field controls how that media is displayed, providing options for left or right floating regions, or a grid-like gallery of media.  Videos can be embedded as preview images or full video players.

When floating media to the left or right, the text from other paragraphs will flow around it, just as if the media had been embedded into the WYSIWYG.  To move the images to a different part of the story, just drag the media paragraph to a new position in the story.

In Open Atrium, images embedded directly into the Body WYSIWYG field become public, bypassing the normal OA access control rules.  However, anything added to a Media paragraph works more like the Attachment field and properly inherits the access permissions of the story document being created.  Thus, the Media paragraph provides a way to embed media within your story while retaining proper privacy permissions.

(3) Snippets


The “Snippet” paragraph type allows you to embed text from any other content on your site.  You can specify whether the Summary, Body, or full Node is embedded, and also control the Layout the same as with Text paragraphs.  You can display the title of the referenced content, hide it, or override it with your own text.

One of the best features of Snippets is the ability to lock which revision you want to display.  For example, imagine you want to embed a standard operating procedure (SOP) within your story document.  You create a Snippet paragraph that points to the SOP.  However, if the related SOP node is updated in the future, you don’t want your old document to change.  For compliance purposes it still needs to contain the original SOP text.  By “locking” the snippet to the old revision, the old document will continue to display the original SOP even if the SOP is updated later.  If you “unlock” the snippet, then it will display the latest version of the related SOP.

Open Atrium access controls are also respected when displaying snippets.  If you reference content that the user doesn’t have permission to view, that snippet will be removed from the displayed text.  Users still only see the content they are allowed.  This provides a very powerful way to create rich documents that contain different snippets of reusable content for different user roles and permissions.  Similar to adding additional fields with Field Permissions, but much more flexible and easy to use.

(4) Related Content


The “Related Content” paragraph type is similar to Snippets, but displays the Summary or Full rendered node of the related content.  Like the Media paragraph, the Related Content can contain one or more references to other content on the site.  The Layout provides options for displaying the content as a table of files, or a list of node summaries (teasers), or as full node content.  When full node content is used, any paragraphs used in the related content will also being displayed (paragraph “inception”!).  In addition, any special fields from the full related node can be shown.  For example, a Related Event will show the map of the event location.  A Related Discussion will show all of the discussion replies and even provide the Reply Form, allowing you to reply to a related discussion directly from the story document itself!

Related Content is also a bi-directional link.  When you view the related content node, a sidebar widget called “Referenced From” will show all of the stories that reference the node being viewed.

A Real World Example

To pull all of this together with a real-world example, imagine that you are scheduling a Meeting.  You want to create an Agenda for that meeting and allow your team to discuss and edit the agenda before the meeting.  In Open Atrium you can now do this all from a single document:

  1. Create the Event for the Meeting, adding your team to the Notifications
  2. Add a Related Content paragraph for the meeting Agenda document
  3. Add a Related Content paragraph for the agenda Discussion

Open Atrium is smart about where this related content is created.  If you already have a section for documents, the Agenda will be created within that section.  If you already have a section for discussions, the related discussion will be placed there.  You can change these locations if you wish, but the default behavior reflects the most common information architecture.

When your team members receive the email notification about the meeting and click the link, they will be taken to your Event and will see the agenda document and discussion as if they were a normal part of the event body.  They can view the agenda content directly and can post replies directly into the discussion reply field.  They don’t need to go to separate places on the site to see the document or discussion.  If you *do* view the document or discussion nodes directly, such as from a search results page, you’ll see a link back to the meeting event in the References From list in the sidebar.


Not only do the Paragraph features help content editors build rich stories quickly and easily, they allow web developers to create related documents, linked content, better search results, better data structures.  It’s still not a magical unicorn wysiwig of content editor’s dreams, but it’s a significant step for Open Atrium and Drupal. It opens a whole new world of collaboration where all related content can be viewed together.

Looking for more information about Open Atrium? Sign up to receive Open Atrium newsletters and updates!

Jan 26 2015
Jan 26

The constant struggle between content editors and web developers:

  • Content editors want to embed rich media, callouts, and references to related content anywhere within their WYSIWYG.  They love the Body field and want more buttons added to the WYSIWYG.
  • Web developers want to add additional fields for media, attachments, related content to support rich content relationships and access control.  Web developers hate the Body field and wish they didn’t need a WYSIWYG.

In the latest 2.30 version of Open Atrium, we attempt to help both content editors and web developers with a new approach to building rich story content using the Paragraphs module.  Rather than having a single WYSIWYG body, content editors can build their story using multiple paragraphs of different types and layouts. The order of these paragraphs can be rearranged via drag and drop to create long-form content.

Site builders can add their own custom paragraph types to their sites, but Open Atrium comes with four powerful paragraph types “out of the box”:

(1) Text Paragraphs


The simplest paragraph type is the “Text” paragraph.  It works just like the normal Body field with its own WYSIWYG editor, but an additional Layout field is added to control how the text is rendered.  There are options for multiple columns of wrapping text within the paragraph (something nearly impossible to do with the normal Body field), as well as options for left or right floating “callouts” of text.

(2) Media Gallery


The “Media Gallery” paragraph handles all of the images and videos you want to add to your story.  It can replace the normal Attachments field previously used to add media to a document.  Each Media paragraph can contain one or more images, videos, or files.  The Layout field controls how that media is displayed, providing options for left or right floating regions, or a grid-like gallery of media.  Videos can be embedded as preview images or full video players.

When floating media to the left or right, the text from other paragraphs will flow around it, just as if the media had been embedded into the WYSIWYG.  To move the images to a different part of the story, just drag the media paragraph to a new position in the story.

In Open Atrium, images directly embedded into the Body WYSIWYG field become Public, bypassing the normal OA access control rules.  However, anything added to a Media paragraph works more like the Attachment field and properly inherits the access permission of the story document being created.  Thus, the Media paragraph provides a way to embed Media within your story while retaining proper privacy permissions.

(3) Snippets


The “Snippet” paragraph type allows you to embed text from any other content on your site.  You can specify whether the Summary, Body, or full Node is embedded and also control the Layout the same as with Text paragraphs.  You can also either display the Title of the referenced content or hide the title, or override the title with your own text.

One of the best features of Snippets is the ability to lock which revision you want to display.  For example, imagine you want to embed a standard operating procedure (SOP) within your story document.  You create a Snippet paragraph that points to the SOP.  However, if the related SOP node is updated in the future, you don’t want your old document to change.  For compliance purposes it still needs to contain the original SOP text.  By “locking” the snippet to the old revision, the old document will continue to display the original SOP even if the SOP is updated later.  If you “unlock” the snippet, then it will display the latest version of the related SOP.

Open Atrium access controls are also respected when displaying snippets.  If you reference content that the user doesn’t have permission to view, that snippet will be removed from the displayed text.  Users still only see the content they are allowed.  This provides a very powerful way to create rich documents that contain different snippets of reusable content for different user roles and permissions.  Similar to adding additional fields with Field Permissions, but much more flexible and easy to use.

(4) Related Content


The “Related Content” paragraph type is similar to Snippets, but displays the Summary or Full rendered node of the related content.  Like the Media paragraph, the Related Content can contain one or more references to other content on the site.  The Layout provides options for displaying the content as a table of files, a list of node summaries (teasers), or full node content.  When full node content is used, any paragraphs used in the related content will also be displayed (paragraph “inception”!).  In addition, any special fields from the full related node can be shown.  For example, a Related Event will show the map of the event location.  A Related Discussion will show all of the discussion replies and even provide the Reply Form, allowing you to reply to a related discussion directly from the story document itself!

Related Content is also a bi-directional link.  When you view the related content node, a sidebar widget called “Referenced From” will show all of the stories that reference the node being viewed.

A Real World Example

To pull all of this together with a real-world example, imagine that you are scheduling a Meeting.  You want to create an Agenda for that meeting and allow your team to discuss and edit the agenda before the meeting.  In Open Atrium you can now do this all from a single document:

  1. Create the Event for the Meeting, adding your team to the Notifications
  2. Add a Related Content paragraph for the meeting Agenda document
  3. Add a Related Content paragraph for the agenda Discussion

Open Atrium is smart about where this related content is created.  If you already have a section for documents, the Agenda will be created within that section.  If you already have a section for discussions, the related discussion will be placed there.  You can change these locations if you wish, but the default behavior reflects the most common information architecture.

When your team members receive the email notification about the meeting and click the link, they will be taken to your Event and will see the agenda document and discussion as if they were a normal part of the event body.  They can view the agenda content directly and can post replies directly into the discussion reply field.  They don’t need to go to separate places on the site to see the document or discussion.  If you *do* view the document or discussion nodes directly, such as from a search results page, you’ll see a link back to the meeting event in the “Referenced From” list in the sidebar.


Not only do the Paragraph features help content editors build rich stories quickly and easily, they also allow web developers to create related documents, linked content, better search results, and better data structures.  It’s still not the magical unicorn WYSIWYG of content editors’ dreams, but it’s a significant step for Open Atrium and Drupal. It opens a whole new world of collaboration where all related content can be viewed together.

Looking for more information about Open Atrium? Sign up to receive Open Atrium newsletters and updates! Don’t miss our winter release webinar on Wednesday, January 28th, at 11am EST!

Jan 26 2015
Jan 26

This is part two of a series on configuration management challenges in Drupal 8. Part 1 looked at challenges for small sites and distributions.

What is the state of support for distributions in Drupal 8?

Trying to gauge the state of anything in Drupal 8 has its inherent pitfalls. The software itself is still changing rapidly, and efforts in the contributed extensions space have barely begun. That said, various initiatives are in process.

For background on configuration management in Drupal 8, see the documentation on managing configuration and the configuration API. Drupal 8 configuration is divided between simple configuration, used for one-off items like module settings, and configuration entities, which store items a site may have any number of, such as content types and views. The Drupal 8 handbook pages on configuration are useful but not fully up to date.
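As a rough sketch of that distinction, both kinds of configuration are stored as YAML. A simple configuration object like the site information settings lives in a single file with a fixed set of keys, while each configuration entity (for example, each content type) gets its own file. The snippets below follow Drupal 8 core naming conventions but are simplified; real exported files carry additional keys such as uuid and langcode.

```yaml
# system.site.yml — "simple" configuration: one object, fixed keys
name: 'My Site'
mail: admin@example.com
page:
  front: /node

# node.type.article.yml — a configuration entity: one file per item;
# a site may have any number of node types, each with its own file
type: article
name: Article
description: 'Use articles for time-sensitive content.'
```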

Two recent blog post series provide further background and technical details.

The challenges

Distributions in Drupal can be divided into two main types:

  • Starter-kit distributions like Bear are designed to get you started in building a site that you then take in your own direction.
  • Full-featured distributions like Open Atrium or Open Outreach are designed to fill a use case and support upgrades.

This distinction is important in light of the Drupal 8 assumption that sites, not modules, own configuration. Starter-kit distros will work fine with this assumption, but for full-featured distros it presents major challenges; see part 1 of this series.

Configuration management in Drupal 8 is built primarily around the single-site staging or deployment problem rather than the requirements of distributions. Back in 2012 a discussion tried to assess what was needed to make Drupal 8 distribution-friendly, but it didn't get far.

Two types of tools look to be needed to fill the gaps.

  • Developer tools. Managing configuration in distributions will require exporting it into feature-like modules. Because any extension (module, theme, installation profile) can include configuration, most of the needs of distribution authors are a subset of what any extension developer will need. For example, the built-in Drupal 8 configuration export functionality is designed only for use on a single site.
  • Site tools. Since Drupal core's single-site configuration management model conflicts with the requirements of updatable distributions, specialized modules will be needed to provide distribution-based sites with the ability to receive configuration updates.
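To illustrate what “exporting configuration into feature-like modules” means in practice: under Drupal 8 core conventions, any module can ship default configuration in its config/install directory, and those YAML files are imported when the module is installed. The module and file names below are illustrative, not from any real project; note that after installation the site owns that configuration, which is exactly the update problem described above.

```yaml
# my_feature.info.yml — a minimal module that carries configuration.
# Any YAML files placed in my_feature/config/install/
# (e.g. node.type.article.yml, views.view.article_listing.yml)
# are imported once, when the module is installed.
name: 'My Feature'
type: module
core: 8.x
```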

Emerging solutions

  • Features 8.x
    Some of the first efforts to provide distribution-related functionality came in the form of sketches towards a Drupal 8 version of the Features module. The sandbox module contains a small collection of methods that can be called from the Drupal command line utility Drush for editing and reverting configuration modules.
  • Configuration Development
    Configuration Development provides automated import and export of configuration between the active configuration storage and exported modules.
  • Configuration Revert
    The sandbox Configuration Revert project provides a set of reports that allows you to see the differences between the configuration items provided by the current versions of your installed modules, themes, and install profile, and the configuration on your site. From these reports, you can also import new configuration provided by updates, and revert your site configuration to the provided values.
  • Configuration Packager
    Configuration Packager enables the packaging of site configuration into modules, like Features for Drupal 7. Rather than producing manually authored individual features, Configuration Packager analyzes the site configuration and automatically divides it up into configuration modules based on configured preferences.

Remaining work and coordinating efforts

See the drupal.org issue META: Required functionality for Drupal 8 distributions for an initial inventory of the work outstanding to prepare for Drupal 8 distributions.

As usual, the main challenges are probably not so much technical as strategic and organizational. To prepare the way for Drupal 8 distributions, we need to coordinate to understand barriers, explore solutions, and pool efforts.

Part of this work will be developing shared, generic tool sets. Already, there's a lot of work in modules like Features 8.x and Configuration Packager that isn't specific to features or packages of configuration and would better be merged into a more generic solution; see the issues #2383959, #2405015, and #2407609. Configuration Development is the most likely candidate (#2388253), though there are some outstanding issues.

Interested in helping? Please comment on and help flesh out the meta issue or the issues and projects referenced there.

Packaging configuration

My own efforts have been focused recently on taking a fresh approach to packaging configuration in Drupal 8 in the Configuration Packager module. In my next post in this series, I'll introduce that project.

Jan 26 2015
Jan 26

The Kickstarter campaign was not funded, but that does not mean that it was not successful! We are still moving ahead. I've just published my first course on Udemy and would like to get pilot members to provide feedback so that I can make sure the course ends up being world class.

Here is a coupon code to access the course for free: https://www.udemy.com/getting-started-with-drupal-for-total-beginners/?c...

The course introduction provides more details about the planned direction for the training. So, I won't repeat it all here. Suffice it to say that I am still planning to follow the Ridiculously Open Online Self Training Site philosophy.

Udemy requires that all coupons have a quantity specified. I have set the code to allow 250 redemptions. I'll update this post if the coupons "sell out."