Mar 14 2017

There's a lot to say about Drupal Configuration Management. Many contrib modules have emerged to address the shortcomings in core and I agree that most of them are clearly solving a need. I even have colleagues claiming "there's a CM-related article every week on Drupal Planet!". Here's one more :-)

Still, I'm trying to work with what core has to offer unless forced to do otherwise. And you can already do a ton. Seriously. What I think is crucial with CM is the ability to 'productize' or 'featurize' a site's configuration. Think building an app from scratch through automation. Think SaaS. Put differently, it's all about being able to build a specific feature (e.g. content type, form/view mode, etc.) and ship it to any other D8 instance.

Yes, the idea here is not to solve the dev to stage to prod deployment issues but to primarily spin up a new D8 dev instance and configure it like a full-featured application. And core does that very well out of the box.

Building the app

Back in 2014, when I started learning D8 and built a PoC of a REST-based D8 endpoint, I had to wipe my test site almost daily and create it from scratch again because core was constantly changing. Then I realized CM was perfect for this use case. Back then I had to work around UUID issues, and the "Allow a site to be installed from existing configuration" core issue demonstrates our headache isn't over just yet. But the concept was essentially the same as it is today:

  • Spin up a new D8 instance
  • Enable all required contrib/custom modules/themes
  • Export your site's configuration
  • Version-control your CM sync directory
  • Add all CM files under version control
  • Build a simple feature (e.g. content type)
  • Export your site's configuration
  • Copy the new/modified files for later use (thanks git diff)
  • Add all new/modified CM files under version control
  • Rinse & repeat

With this simple workflow, you'll incrementally build a list of files to reuse when creating a new D8 instance from scratch. Oh, and why would we even bother creating a module for that? This works great as it is, provided you're extra careful (TL;DR: use git) about every change you make.
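The "copy the new/modified files" step can be prototyped with plain coreutils. A minimal sketch, with invented directory names (sync-before, sync-after, feature-backup) standing in for two exports of your real sync directory:

```shell
#!/bin/sh
# Sketch only: collect config files that are new or changed between two
# exported snapshots, so the diff can be replayed on a fresh instance.
mkdir -p sync-before sync-after feature-backup
printf 'a: 1\n' > sync-before/core.extension.yml
printf 'a: 2\n' > sync-after/core.extension.yml           # modified since last export
printf 'type: client\n' > sync-after/node.type.client.yml # new since last export

# Emit one filename per line: files only in sync-after, plus files that differ
diff -rq sync-before sync-after | sed -n \
  -e 's|^Only in sync-after: ||p' \
  -e 's|^Files .* and sync-after/\(.*\) differ$|\1|p' |
while read -r f; do
  cp "sync-after/$f" feature-backup/
done
ls feature-backup
```

In practice you'd diff two git revisions of the sync directory rather than two copies, but the idea is the same: only the files your feature touched get carried forward.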

Spinning up a new app

To test setting up your app, the workflow then becomes:

  • Spin up a new D8 instance
  • Enable all required contrib/custom modules/themes
  • Export your site's configuration
  • Version-control your CM directory
  • Add all CM files under version control
  • Copy your previously backed up configuration files to the sync directory
  • Import your new configuration

Looking back at how life was before Drupal 8, you'll likely agree this is already much better. Here's an example of building an app from scratch. All of this could obviously be scripted.

$ cd /path/to/sync/dir
$ for i in module1 module2 module3 module4 module5 ; do drush @site.env en -y $i ; done
$ drush @site.env cex -y
$ git init
$ git add --all && git commit -m "Initial configuration"
$ cp /path/to/configuration/backup/*.yml .
$ git status
$ drush @site.env cim -y
$ git add --all && git commit -m "New configuration"

Now, here's a real-life example, trimmed down to the bit we're interested in: building the app from scratch through config-import.

$ drush @d8.local cim -y
    Config                                           Operation                                                                                             
    field.storage.node.field_inline_client              create 
    field.storage.node.field_email                      create 
    field.storage.node.field_address                    create 
    node.type.client                                    create 
    field.field.node.client.field_email                 create 
    field.field.node.client.field_address               create 
    core.base_field_override.node.client.title          create 
    node.type.contract                                  create 
    field.field.node.contract.field_inline_client       create 
    core.base_field_override.node.contract.title        create 
    core.base_field_override.node.contract.promote      create 
    field.storage.paragraph.field_unit                  create 
    field.storage.paragraph.field_reference             create 
    field.storage.paragraph.field_quantite              create 
    field.storage.paragraph.field_price                 create 
    field.storage.node.field_service                    create 
    field.field.node.contract.field_service             create 
    core.entity_form_display.node.contract.default      create 
    paragraphs.paragraphs_type.service                  create 
    field.field.paragraph.service.field_unit            create 
    field.field.paragraph.service.field_reference       create 
    field.field.paragraph.service.field_quantite        create 
    field.field.paragraph.service.field_price           create 
    field.storage.node.field_telephone                  create 
    field.field.node.client.field_telephone             create 
    core.entity_form_display.node.client.default        create 
    field.storage.paragraph.field_description           create 
    field.field.paragraph.service.field_description     create 
    core.entity_view_display.paragraph.service.default  create 
    core.entity_form_display.paragraph.service.default  create 
    core.entity_view_display.node.contract.teaser       create 
    core.entity_view_display.node.contract.default      create 
    core.entity_view_display.node.client.default        create 
    user.role.editor                                    create 
    system.action.user_remove_role_action.editor        create 
    system.action.user_add_role_action.editor           create 
    auto_entitylabel.settings                           create
Import the listed configuration changes? (y/n): y  
 [notice] Synchronized configuration: create field.storage.node.field_inline_client.
 [notice] Synchronized configuration: create field.storage.node.field_email.
 [notice] Synchronized configuration: create field.storage.node.field_address.
 [notice] Synchronized configuration: create node.type.client.
 [notice] Synchronized configuration: create field.field.node.client.field_email.
 [notice] Synchronized configuration: create field.field.node.client.field_address.
 [notice] Synchronized configuration: create core.base_field_override.node.client.title.
 [notice] Synchronized configuration: create node.type.contract.
 [notice] Synchronized configuration: create field.field.node.contract.field_inline_client.
 [notice] Synchronized configuration: create core.base_field_override.node.contract.title.
 [notice] Synchronized configuration: create core.base_field_override.node.contract.promote.
 [notice] Synchronized configuration: create field.storage.paragraph.field_unit.
 [notice] Synchronized configuration: create field.storage.paragraph.field_reference.
 [notice] Synchronized configuration: create field.storage.paragraph.field_quantite.
 [notice] Synchronized configuration: create field.storage.paragraph.field_price.
 [notice] Synchronized configuration: create field.storage.node.field_service.
 [notice] Synchronized configuration: create field.field.node.contract.field_service.
 [notice] Synchronized configuration: create core.entity_form_display.node.contract.default.
 [notice] Synchronized configuration: create paragraphs.paragraphs_type.service.
 [notice] Synchronized configuration: create field.field.paragraph.service.field_unit.
 [notice] Synchronized configuration: create field.field.paragraph.service.field_reference.
 [notice] Synchronized configuration: create field.field.paragraph.service.field_quantite.
 [notice] Synchronized configuration: create field.field.paragraph.service.field_price.
 [notice] Synchronized configuration: create field.storage.node.field_telephone.
 [notice] Synchronized configuration: create field.field.node.client.field_telephone.
 [notice] Synchronized configuration: create core.entity_form_display.node.client.default.
 [notice] Synchronized configuration: create field.storage.paragraph.field_description.
 [notice] Synchronized configuration: create field.field.paragraph.service.field_description.
 [notice] Synchronized configuration: create core.entity_view_display.paragraph.service.default.
 [notice] Synchronized configuration: create core.entity_form_display.paragraph.service.default.
 [notice] Synchronized configuration: create core.entity_view_display.node.contract.teaser.
 [notice] Synchronized configuration: create core.entity_view_display.node.contract.default.
 [notice] Synchronized configuration: create core.entity_view_display.node.client.default.
 [notice] Synchronized configuration: create user.role.editor.
 [notice] Synchronized configuration: create system.action.user_remove_role_action.editor.
 [notice] Synchronized configuration: create system.action.user_add_role_action.editor.
 [notice] Synchronized configuration: create auto_entitylabel.settings.
 [notice] Finalizing configuration synchronization.
 [success] The configuration was imported successfully.

If the import was successful, reload your site and watch everything show up like magic: site configuration, content types, custom fields, view/form modes, module configuration, views, etc.

Sure you could argue that doing so is very prone to errors, but remember that a) it's primarily for development needs and b) you need to version-control all of this to be able to go back to the last working commit and revert if necessary.
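The "go back to the last working commit" part is plain git. A minimal sketch, with an invented demo directory and file contents standing in for a real sync directory:

```shell
#!/bin/sh
# Sketch: recover from a bad local change to the sync directory by
# restoring the last committed state. Contents are invented for the demo.
mkdir -p demo-sync
git -C demo-sync init -q
git -C demo-sync config user.email demo@example.com
git -C demo-sync config user.name demo
echo 'name: My app' > demo-sync/system.site.yml
git -C demo-sync add -A
git -C demo-sync commit -qm "Last working configuration"

echo 'name: broken' > demo-sync/system.site.yml   # a change we regret
git -C demo-sync checkout -- .                    # discard it, back to the last commit
cat demo-sync/system.site.yml                     # prints: name: My app
```

A `drush cim -y` after the checkout would then re-apply the known-good configuration to the site.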

Wrapping up

Two other use cases I very much like are:

  • When I want to demonstrate an issue, I can simply share some files for someone else to import and quickly reproduce the issue with little effort and no room for configuration mismatch.
  • When building a new feature (e.g. a Flag and a View), I can do so in dev, then export only the files I need, and import in stage or prod when I'm ready.

Building an app will obviously take much more than that, but as I hear more and more frustration about how Configuration Management was designed, I thought I'd set the record straight on how it solves my biggest problems when developing an app.

Mar 11 2017

Drupal 8

I carried out an empathy-mapping exercise at Drupal Camp London 2017 to capture the community's perspective towards Drupal 8.

The community perspective from Drupal Camp London towards Drupal 8:

Drupal 8 Empathy mapping the community's perspective 2017


I would encourage you to download the template and use it to capture community perspectives at your own camps and meetups. The template can be downloaded here, and is best printed as an A1 poster.

Additionally, please use the hashtag #D8Empathy to broadcast your findings, so that we can compare maps across camps and improve our understanding of the community and Drupal 8's impact on it.

……

Lastly, if you got value from what I have shared, please consider giving back by contributing towards Peace Through Prosperity; you can follow, broadcast or donate.

Peace Through Prosperity improves the local/domestic environment for peace by nurturing prosperity in conflict affected geographies. We work to alleviate poverty, prevent radicalisation by empowering micro-entrepreneurs from marginalised communities. 

Peace Through Prosperity is innovating social transformation design and delivery using Agile frameworks to create and deliver low cost, high impact social development programs in ‘at risk’ communities.

Mar 09 2017

DrupalCamp London 2017 was the best camp I ever attended. “You come for the code, you stay for the community” is the Drupal motto, and this camp lived up to it with great keynotes, talks and an atmosphere which really encouraged collaboration. It also gave me the chance to share my thoughts on configuration deployment in Drupal 8 in a dedicated session.

Configuration Deployment in Drupal 8

This session was an attempt to demystify and counter the idea that deploying configuration in D8 is a nightmare. I made a comparison of tools which help you in the deployment process and ran an exercise on how to improve it.

Here are the slides:

There’s already lots of literature about CMI, and the number of contrib modules and drush extensions which aim to improve configuration management grows every month.

But as Acquia’s JAM said in his keynote, we should stop focusing on the ‘What’: ‘What we do’; ‘What this contrib module does’. We should instead focus on the ‘Why’, by getting out of the developer mindset and thinking about the user experience.

The main problem is that CMI imports/exports a monolithic configuration containing all your website settings. On import, it will overwrite everything in the destination, and even delete configuration that doesn't exist in the set being imported.

For a simple website this can be enough, but for medium-large projects you want to tailor the process, selecting what can be deleted, what needs to be overridden and what must remain untouched.

Segmenting Drupal 8 configurations for better configuration deployment

My exercise was to identify common patterns between the contrib modules and what a site builder needs when importing the configuration. I spotted four recurring segments from the original monolithic block:

Primary: this segment comprises the main structure of your website: content types, fields, core settings. When imported, its settings need to override the destination, and settings missing from the import need to be deleted from the destination.

Secondary: this segment contains all the new stuff which needs to remain untouched: new client-created instances of e.g. Webforms, Views, Page Manager pages. The settings in this segment must be neither deleted nor overridden.

Initial: this is the one-time-only segment. On import we "suggest" its settings: the destination will accept a configuration item if it does not already exist; otherwise it will be skipped. Think of the email value in Site settings. During development you put in yours, but then in production the client can change it. So we "offer" it the first time, but we won't overwrite it during subsequent imports.

Devel: this is not a real segment, or at least not an important one. It's more of a placeholder for any environment-specific configuration we require. The name refers to a hypothetical 'Devel'-only segment holding the configuration of e.g. the Devel, Stage File Proxy and Web Profiler contrib modules, but that's just an example. Outside of its context (the environment) this segment doesn't have any value.
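To make the four segments concrete, here is a rough sketch that sorts an exported sync directory into PSI directories by filename pattern. The patterns and directory names are my own illustrative guesses, not a canonical mapping; in practice a contrib module would own this logic:

```shell
#!/bin/sh
# Sketch: sort an exported sync directory into PSI segments by filename.
# The patterns below are illustrative guesses, not a canonical mapping.
mkdir -p sync primary secondary initial devel
touch sync/node.type.client.yml sync/field.storage.node.field_email.yml \
      sync/views.view.contracts.yml sync/system.site.yml sync/devel.settings.yml

for f in sync/*.yml; do
  case "$(basename "$f")" in
    node.type.*|field.*|core.*)  cp "$f" primary/   ;;  # structure: override + delete
    views.view.*|webform.*)      cp "$f" secondary/ ;;  # client content: hands off
    devel.*|webprofiler.*)       cp "$f" devel/     ;;  # environment-only
    *)                           cp "$f" initial/   ;;  # one-time suggestion
  esac
done
ls primary secondary initial devel
```

Each segment directory could then be imported with its own policy (override, skip-if-exists, and so on), which is exactly where the contrib modules in the matrix below differ.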

In the slide deck you’ll find the segment details. Below is a matrix (also in the deck) comparing how the most popular config-related contrib modules can (or can NOT) work with the PSI segments.

Drupal 8 configuration deployment solutions

Continuing the CMI conversation

The audience at my talk seemed to like the approach and the naming convention. As I kept saying “naming things makes them less scary”.

I hope this convention can become the first drop of a waterfall of positive thoughts and improvements on the CMI world. I’ll see you at DrupalDevDays in Seville to continue the discussion, but in the meantime, please add your comments, thoughts and questions below.

Mar 07 2017

First and foremost thank you to all who made the time to attend my session on Empathy Driven Content Strategy at Drupal Camp London 2017. Thank you for sharing your time and perspectives.

This session was an evolution of two previous sessions:

There is a difference between walking in someone else’s footsteps and walking in their shoes!

‘Empathy Driven Content Strategy’ explores the transformation in content consumption, purpose and generation, and how it impacts us. It looks at how empathy, touchpoints, sentiment analysis and emotional intelligence can be harnessed to create richer, more personalised experiences for people, with the purpose of motivating others to share the journey with us through content that is pertinent to, and addresses, their needs over the course of that journey.

We have seen how, over the past year, empathy-driven content, the use of sentiment analysis and knowing which touchpoints to invest in played a role in both the Brexit and Trump campaigns. There are lessons behind their success for all of us, regardless of which side of the campaign divide we may sit on.

As for getting started with empathy maps, you can download examples and a blank canvas from the resources section below. Bear in mind the key takeaway is to ‘talk to people’: treat them as people first (customers later), engage for the sake of understanding, and keep our instinct to react in check… only when we understand can we respond.

Resources mentioned during the session:

Sentiment Analysis
Further reading

…………………..

Lastly, if you got value from what I have shared, please consider giving back by contributing to @BringPTP; you can follow, broadcast or donate.

Peace Through Prosperity (PTP) improves the local/domestic environment for peace by nurturing prosperity in conflict affected geographies. We work to alleviate poverty, prevent radicalisation through empowering micro-entrepreneurs with knowledge, skills, ability and increasing their access to income and opportunities. We support small businesses, owned/managed by vulnerable and marginalised individuals/groups in society.

Peace Through Prosperity (PTP) is innovating social transformation design and delivery by using Agile frameworks to create and deliver low cost, immediate and lasting impact social development programs in ‘at risk’ communities.

Mar 07 2017

This weekend’s DrupalCamp London wasn’t my first Drupal event by any means: I’ve been to three DrupalCon Europes, four DrupalCamp Dublins, a few other DrupalCamps in Ireland and lots of meetups. But this time I experienced a lot of ‘first times’ that I want to share.

This was the first time I’d attended a Drupal event representing a sponsor organisation, and as a result the way I experienced it was completely different.

Firstly, you focus more on your company’s goals than on your personal aims. In this case I was helping Capgemini UK to engage and recruit people for our open positions. This let me socialise more and try to connect with people. We also had T-shirts, and it’s easier to attract people when you have something free for them. I was also able to talk with other sponsors to see why they sponsored the event: some were also recruiting, but most were selling their solutions to prospective clients, Drupal developers and agencies.

The best part of this experience was the people I met at other companies and the attendees approaching us for a T-shirt or a job opportunity.

A new Capgemini UK team member’s perspective

As a new joiner in the Capgemini UK Drupal team, I attended this event before I was even a month old in the company. I’m glad I could attend at such short notice in my new position; I think it says a lot about Capgemini’s focus on training and career development, and about how much they care about Drupal.

As a new employee, this event allowed me to meet colleagues from different departments and teams in a non-working environment. Again, the best part of this experience was the people I met and the relationships I made.

I joined Capgemini from Ireland, so I was also new to the London Drupal community, and DrupalCamp gave me the opportunity to connect and build relationships with its members. Of course they were busy organising this great event, but I was able to talk to some of them, and they were very friendly whenever I approached the crew or other local members. I’m very happy to have met such friendly people, and I’m committed to volunteering my time at future events, so this was a very good starting point. And again, the best part was the people I met.

Non-session perspective

As I had other duties I couldn’t attend every session, but I did catch some of them, plus the keynotes. Special mention goes to the Saturday keynote from Matt Glaman: it was very motivational, and made me think anyone can evolve as a developer if they try and seek out the resources to gain the knowledge. The closing keynote from Danese Cooper was just as inspirational, about what open source is and what it should be, and about how we, the developers, have the power to make it happen. We could also enjoy Malcom Young’s presentation about code reviews.

Conclusion

Closing this article, I’d like to come back to the best part of this year’s DrupalCamp for me: the people. They are always the best part of social events. I was able to catch up with old friends from Ireland, engage with people considering a position at Capgemini and introduce myself to the London Drupal community, so overall I’m very happy with this DrupalCamp London and will be happy to return next year. In the meantime I’ll be attending some Drupal meetups and trying to get involved in the community, so don’t hesitate to contact me if you have any questions or need my help.

Mar 06 2017

Creating and publishing quality content within time constraints is a common challenge for many content authors. As web engineers, we are focused on helping our clients overcome this challenge by delivering systems that are intuitive, stable, and a pleasure to operate.

During the architectural phase, it’s critical to craft the editorial experience to the specific needs of content authors to ensure the best content editing experience possible. Drupal 8 makes it even easier than previous versions for digital agencies to empower content creators and editors with the right tools to get the job done efficiently, and more enjoyably.

Our five tips to enhance the content editing experience with Drupal 8 are:

1. Don’t make authors guess - use structured content

2. Configure the WYSIWYG editor responsibly

3. Empower your editorial team with Quick-Edit

4. Enrich content with Media Embeds

5. Simplify content linking with LinkIt

1. Don’t make authors guess - use structured content

The abundance of different devices, screen sizes and form factors warrants the use of structured content. Structured content is content separated into distinct parts, each of which has a clearly defined purpose and can be edited and presented independently from one another according to context.

“How does that relate to a content editor’s experience?” - you may ask.

In years past, it was very popular to give content editors an ability to create “pages” using one big “MS Word-like” text box for writing their articles, news releases, product descriptions, etc. This approach produced content that was not reusable and was presented in one strict way. Who wants to navigate within one enormous text area to move images around?

Though those days are long behind us, and even though we all know about the importance of structured content, sometimes we still fail to utilize the concept correctly.

Drupal was one of the first Content Management Systems (CMS) to introduce the concept of structured content (the node system, in Drupal 3 back in 2001). In fact, Drupal is no doubt the best CMS for implementing the concept of structured content, but its ability to provide a good content authoring experience lagged behind this solid foundation.

Today, in Drupal 8, editing structured content is a joy!

With the WYSIWYG (What You See Is What You Get) editor and Quick Edit functionality in Drupal core, we can equip our content editors with a best-in-class authoring experience and workflow!

You can see the difference between unstructured and structured D8 content below. Instead of only one field containing all text, images, etc., structured content stores each distinct piece of information in its own field, making content entry fast and presentation flexible!

Structured vs unstructured content

The benefits of Drupal 8’s structured content approach:

  • The author clearly understands where each piece of information should reside and does not have to factor in markup, layout, and design while editing (see tip #2). Content entry becomes remarkably efficient and allows the author to concentrate on the essence of their message instead of format.
  • The publishing platform is easier to maintain while supporting system scalability.
  • The modular nature of structured content makes migrations between CMS versions or to a completely different CMS much more streamlined. A huge plus for those long-term thinkers!

2. Configure the WYSIWYG editor responsibly

Drupal 8 ships with a WYSIWYG text editor in core. The editor even works great on mobile! In a society so dependent on mobile devices, who wouldn’t like to be able to quickly fix a missed typo right from their phone?

Drupal 8 provides superior enhancements to the UX (User Experience) for content authors and editors out of the box. However, with a little configuration, things can be further improved.

When establishing the UI (User Interface) for content authors, site builders should focus on refining rather than wholesale adoption of the available features. Customizing the WYSIWYG editor is the perfect example of subtle improvements that can immediately make a big difference.

The WYSIWYG text editor is an effective tool for simple content entry since it does not require the end user to be aware of HTML markup or CSS styles. Many great functions like text formatting options (font family, size, color, and background color), source code viewing, and indentation are available at our fingertips, but as site builders we should think twice before adding all those options to the text editor toolbar!

With great power comes great responsibility! Sometimes, when you give content editors control over the final appearance of the published content (e.g. text color, font family and size, image resizing, etc.), it can lead to inconsistent color schemes, skewed image ratios, and unpredictable typography choices.

How do we help our content authors avoid common design / formatting mistakes? Simple!

Use a minimalist approach when configuring the WYSIWYG text editor. Give authors access to the most essential text formatting options that they will need for the type of content they create and nothing more. If the piece of content edited should not contain images or tables - do not include those buttons in the editor. The text editor should be used only for sections of text, not for the page layout.

A properly configured CMS should not give content editors the ability to change the size of the text, play with image positioning within a text section, or add H1 headers within auxiliary content.
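As an illustration, a deliberately minimal toolbar for the basic_html format could be exported along these lines. This is an abridged sketch following the shape of core's editor.editor.* config as I understand it; compare it with a real export from your own site before relying on it:

```shell
#!/bin/sh
# Sketch: write an abridged, deliberately minimal editor config file.
# The field names follow Drupal 8's editor.editor.* schema as I understand
# it; verify against a real `drush cex` export before reusing.
cat > editor.editor.basic_html.yml <<'YAML'
format: basic_html
editor: ckeditor
settings:
  toolbar:
    rows:
      -
        - name: Formatting
          items: [Bold, Italic]
        - name: Links
          items: [DrupalLink, DrupalUnlink]
        - name: Lists
          items: [BulletedList, NumberedList]
YAML
grep -c 'name:' editor.editor.basic_html.yml   # three button groups, nothing more
```

No font pickers, no colors, no source view: just the essentials for the content this format is meant to hold.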

Below is an example of a bad vs. good WYSIWYG configuration.

WYSIWYG editor configuration compared

Benefits of the minimal (thoughtful) WYSIWYG configuration:

  • Easy to use
  • Less confusion (though there are edge cases, most editors don’t use all the buttons)
  • Better usability on mobile devices
  • Less risk of breaking established website design

Let’s keep our content editors happy and not overcrowd their interfaces when it’s not absolutely necessary. It is our duty as software engineers to deliver systems that are easy to use, intuitive and scalable, and that uphold design consistency.

3. Empower your editorial team with Quick-Edit

The Quick Edit module is one of the most exciting new features that is included in Drupal 8 core. It allows content authors and editors to make updates to their content without ever leaving the page.

The days of clicking “Edit” and waiting for a separate page to load just to fix a tiny typo are gone! The Quick Edit module eliminates that extra step and allows content editors to save a great deal of time on updating content. As an added bonus - content editors can instantly see how updated content will look within the page flow.

Here’s the Quick Edit functionality in action.

Quick Edit module demo

Quick Edit configuration tip for back-end and front-end developers

To make use of the Quick Edit functionality within the website pages, entities have to be rendered on the page via View Modes and not as separate fields.

This restriction presents a challenge when there’s a need to provide Quick Edit functionality on a page constructed by the Views module. More often than not, Views is used to single out and output individual fields from entities. The most-used Views formats, “Table” and “Grid”, currently do not support Quick Edit for usability reasons.

A workaround is to use custom view modes for the entities and create a custom Twig template for each view mode that Views should output, in order to accommodate custom layout options.

4. Enrich content with Media Embeds

In the era of social media, content editors can’t imagine their daily routine without being able to embed their Tweets or videos into the stories they publish on their sites. In Drupal 6 and the early days of Drupal 7, it was pretty challenging to provide this functionality within the WYSIWYG editor. Developers had to configure many different plugins and modules and ask them politely to cooperate.

The Drupal 8 Media initiative has placed the content author’s experience and needs at the forefront of community efforts. As a result, we have access to a great solution for handling external media - CKEditor Media Embed Module. It allows content editors to embed external resources such as videos, images, tweets, etc. via WYSIWYG editor. Here’s an example of the Tweet embed – the end result looks beautiful and requires minimal effort.

"If you're going to build a new site, build it in D8." - someone who knows what they're talking about quotes @jrbeaton @TriDUG pic.twitter.com/8w9GAuuARu

— Savas Labs (@Savas_Labs) January 27, 2017

With all this media goodness available to us, there is no reason why we shouldn’t go the extra mile and configure the CKEditor Media Embed module for our content authors!

5. Simplify content linking with LinkIt

Linking to content has always been a clumsy experience for content editors, especially when linking internally within the same site.

There was always the risk of accidentally navigating away from the page that you were actively editing (and losing any unsaved information) while searching for the page to link to. Also, the default CKEditor link button allowed editors to insert a link, assign it a target value, title, maybe an anchor name, but that was about it. If the link to the internal content changed, there was no way for the page to update and links throughout the website would end up broken.

Let’s not put our content editors through that horrible experience again. LinkIt module for Drupal 8 to the rescue!

With the LinkIt module, the user does not have to copy / paste the URL or remember it: LinkIt provides a search for internal content with an autocomplete field. Users can link not only to pages, but also to files stored within the Drupal CMS.

The new and improved linking method is much more sustainable, as it recognizes when the URL of the linked content changes, and automatically produces the correct link within the page without the need to update that content manually.

LinkIt File link demo

Linking to files with LinkIt

My personal favorite feature of the LinkIt module is the flexible configuration options. The LinkIt module makes it possible to limit the type of entities (pages, posts, files) that are searchable via the link field. You can also create a custom configuration of the LinkIt autocomplete dialog for each WYSIWYG editor profile configured on your site. Plus, it is fully integrated with Drupal 8 configuration synchronization.

Final Thoughts

As site builders, there are many improvements that we can make in order to streamline the process of content authoring.

With the right mix of forethought and understanding, Drupal 8 allows web engineers to deliver content publishing platforms that are unique to the client’s specific needs, while making web authoring a productive and satisfying experience.

Mar 05 2017

This weekend I’ve been at the fifth DrupalCamp London - a gathering of 500 or so designers, developers and business owners using Drupal.

I blogged previously about the CxO day on Friday and day 2 on Saturday. Today was the final day!

Breakfast time!! #drupal #dclondon pic.twitter.com/2EPGgkisZU

— DrupalCamp London (@DrupalCampLDN) March 5, 2017

We kicked off with Jeffrey “jam” McGuire, whose work involves advising companies on the value that open source and contribution can bring.

This keynote presented a challenge to the audience: selling Drupal isn’t enough anymore. We need to provide value to businesses at a higher level.

There was much concern over whether Drupal 8 would make things “too hard” and alienate users with small sites. But those technical aspects aren’t really the problem. Increasingly, IT is becoming commoditised. Work that was previously high value is now much more readily available. WordPress, Squarespace and Shopify all provide a means to get a website online for no or very low cost. Medium and Facebook take things one step further - you can have an online presence without a website at all.

Jam referred to a TED talk by Simon Sinek on the what, how and why:

  • what - “do what I ask”
  • how - “help me to think”
  • why - “think for me”

By focusing on the why at the center of this circle, we can begin to create more value for clients.

Going to the "why" before the "what".. Reversing our way of thinking and understanding more. Inspirational keynote by @HornCologne #dclondon pic.twitter.com/sXc3L8Z5Dl

— Tawny Bartlett (@littlepixiez) March 5, 2017

This idea is something I’ve been thinking about for a while, and one I discussed in yesterday’s freelancers’ BoF. I’m keen to explore ways to diversify my Drupal offering to clients, perhaps with training, workshops or research.

After a coffee break, I heard Florian Lorétan speak on Elasticsearch. I don’t have any experience with this, but as a volunteer at DrupalCamp I was monitoring the room, and sometimes that means getting to hear talks that you otherwise wouldn’t have thought about going to.

Elasticsearch looked interesting - more widely used than Solr, and with a developer-friendly RESTful API. Florian showed an example of using it with Drupal’s serialization API to provide the search index with appropriate content.

Elasticsearch is new to me, and some of what was covered went over my head. But I’ve seen enough to pique my interest, particularly with regard to saved searches. I hope to play around with it more in future.

Next up was Mark Conroy on the all-too-common scenario of a client signing off a Photoshop design without considering real content or the variety of devices on which the site is browsed.

Mark’s most hated design tool is Photoshop, and his rant against it was a fun moment towards the end of the weekend. But it was good to have someone articulate the problem with using Photoshop for web design, and I found his explanation of how browsers and operating systems render fonts differently, and his definition of a PSD as an approximation of how a website might look, helpful ways to explain this to people in turn.

Mark followed that up with an explanation of atomic design, and demonstrated using Pattern Lab with Drupal.

The weekend finished with a closing keynote by Danese Cooper. Danese has been involved with open source since 1999 and is currently chair of the Node.js Foundation.

Danese gave a history of open source, and some of the pioneers of various projects - Larry Wall (Perl), Richard Stallman (GNU), Ian Murdock (Debian), Mitchell Baker (Mozilla) amongst others. People had to fight to get open source taken seriously, but now that they do, there is a new generation of developers who take that for granted. Many younger developers don’t know or care about the “open source vs free software” debate, for example.

Keynote by @DivaDanese at #dclondon pic.twitter.com/3g7183uWrv

— tvn (@tvnweb) March 5, 2017

Transparency is non-negotiable; however, companies like to control things, and people need to be reminded of that from time to time.

New recruits often don’t know when to push back; they expect code to always be transparent and aren’t aware of what rights they have because of open source. We need to stand up against things that we believe are wrong, both social (bullying etc.) and technical - but be sure to support your argument.

I thought of the popularity of React at this point and its controversial licence.

We need to keep embracing the community, which is big in Drupal. It’s important to have a variety of people involved in any open source project, and Danese referenced an article by Josh Berkus on how to destroy a community if we aren’t careful.

There are no “open source companies” per se. Any for-profit company will always assess open source as a strategic move. But everyone needs to water the grass for projects to be sustainable, and companies must be encouraged and given painless ways to financially support projects.

Ultimately, open source is about people.

To wrap up, I had a wonderful time at DrupalCamp London. It’s been the biggest DrupalCamp in the world (and it had the biggest cake).

A huge thanks to the speakers, sponsors, volunteers and core team that organised such a fantastic event!

See you next year for DrupalCamp London 2018?

Mar 04 2017

This weekend I’ve been at the fifth DrupalCamp London - a gathering of 500 or so designers, developers and business owners using Drupal.

Friday was the CxO day, which I blogged about earlier. Saturday and Sunday are more technically focussed.

Cake is ready! Come grab some by ELG01. #dclondon pic.twitter.com/hAzZ8FhPSi

— DrupalCamp London (@DrupalCampLDN) March 4, 2017

The day kicked off with a keynote by Matt Glaman - a US based developer working on the Drupal Commerce project. Matt spoke on open source, what it is, and the impact it’s had on his life and career.

@nmdmatt from @CommerceGuys kicks off #dclondon 2017 to a record breaking crowd #drupal pic.twitter.com/3eQDywrnsM

— Paul Johnson (@pdjohnson) March 4, 2017

Matt’s Drupal journey began while working in a bar, using Drupal as a hobbyist by night. With no formal education in programming, Matt taught himself to program using open source, via the mentors he was able to find through the Drupal community.

Community is vital to any open source project. We all have things to contribute, not just code but support, inspiration and mentoring. Open source creates a demand for skills, and creates opportunities for people to learn and teach each other.

#opensource: "Be as knowledgeable as you choose to be." @nmdmatt #dclondon keynote. pic.twitter.com/F6a36pp0eE

— Jeffrey A. McGuire (@HornCologne) March 4, 2017

After coffee, the rest of the day was broken down into parallel talks.

Phil Wolstenholme spoke about accessibility, and demonstrated some of the improvements that had gone into Drupal 8. I really liked the new announce feature, used to audibly announce new content that appears outside of a full page request. Phil showed it working in conjunction with an autocomplete field, where a list of suggested results appears as you type the first few letters.

In web development you can inadvertently make something that’s difficult or impossible to use by those people who have some form of disability or impairment. I asked Phil what resources he’d advise people to look at to learn more about how to avoid this. WebAIM is a great place to start, but also learn how to use a screenreader like VoiceOver, which gives you a totally different perspective on your site.

Next, I gave my offline first talk. I’ve enjoyed doing this talk at various events over the last year. The audience asked a lot of questions which I’ll take as a good sign! There’s obviously an interest in this topic and I’m keen to see how we can use it with Drupal in the near future.

For anyone contemplating speaking at an event like this, I’d recommend it. I wrote some thoughts on this recently.

@erikerskine on Offline First - how to deliver good user experience on poor on intermittent internet https://t.co/inINg6iBOF #DCLondon pic.twitter.com/ZR0FgESkMT

— Paul Johnson (@pdjohnson) March 4, 2017

After lunch, Justine Pocock shared some basic design principles for developers. This was really helpful for me: although I don’t do a lot of design work, I still want to be able to make things that look presentable, and it’s useful to have some constraints to work within. Justine took the DrupalCamp website apart and showed how just a few hours’ work (sped up to a few minutes) made a huge improvement, using:

  • contrast, to make elements stand out and catch the eye
  • repetition, to bring uniformity and consistency
  • alignment, to organise and keep things tidy, like Tetris
  • proximity, to delineate things according to information architecture

Learn the rules like a pro, so you can break them like an artist

—Pablo Picasso

I followed that with a meaty technical talk on microservices by Ronald Ashri. Ronald explained how, rather than being about size, microservices are individual components each with a clear, well-defined scope and purpose.

With microservices, every part of the system does one thing well, with a defined, bounded context in which it operates. Different components can then be composed together to create a larger system. The goal is a system that makes change easy and safe, at scale.

OO has traditionally focused on objects, but the messages between them are arguably more important. Ronald advised not to start by designing a data model, but rather to focus on business capabilities.

I finished the day with a BoF for freelancers. A BoF is a “birds of a feather” session - often arranged on the spur of the moment with no set agenda, by like-minded people who “flock together”. It was great to chat to others and get perspectives from those contracting as well as companies that employ freelancers. Thanks to Farez for organising!

At the end of the day we retired to the Blacksmith & Toffeemaker pub round the corner to continue the great conversations over a well earned pint.

Looking forward to tomorrow!

Mar 03 2017

This year is the fifth DrupalCamp London, and today was my first time attending the CxO day. The CxO day is a single track event aimed at business leaders who provide Drupal services. I reckon there were about 100 people there, and more will come over the weekend.

It’s great that DrupalCamps cater for a wide audience, but I can’t help wonder if a separate CxO day leads to a bit of a divide between business and technical. I’d love to hear more talks that cross this divide. There must be many people who could share and learn from each other but don’t get to meet.

Social for the CxO day in full swing #dclondon pic.twitter.com/javR8En13U

— DrupalCamp London (@DrupalCampLDN) March 3, 2017

The day kicked off with breakfast, followed by a talk by Andre Spicer on the stupidity paradox, a pitfall for many companies. Companies often start off well, wanting to appear smart, and hire the “brightest and best”. But smart people think independently, and this is inconvenient. Sadly companies sometimes revert to managing that stupidity, through bureaucracy, copying each other and taking the safe option. This can lead to an action-oriented or results-oriented culture with an overemphasis on leadership. Workers realise that it doesn’t pay to be smart or get in the way, and those who play along are rewarded by climbing the corporate ladder.

Andre cited the VW emissions scandal as one example of this, bringing short-term gains but much larger long-term repercussions.

So what can we do about this? Teams need people that ask questions, that play devil’s advocate. People that ask “why?”, and don’t accept “because I said so” or “because we’ve always done it this way” in response.

"Smart people think independently, which is inconvenient" #dclcxo #dclondon pic.twitter.com/DuUp2hPWQc

— Chandeep Khosa (@ChandeepKhosa) March 3, 2017

The next talk was Paul Reeves with “Drupal and I”. Paul followed on from the first talk by sharing his journey with Drupal, beginning with Drupal 4 and candidly sharing his mistakes along the way. Initially hating Drupal, he preferred writing his own code, but reached a turning point at DrupalCon 2008: it was the community behind Drupal, with solutions to problems that other people had already found. He discovered that work he’d been involved in for a client could be incorporated into and improve Drupal 5.

Paul advised avoiding a monolithic architecture where you need to learn the entire system to get things done. You don’t need to (and can’t) do that with Drupal. Instead, use it wisely and people can be productive on different levels.

Next was Ben Finn. Ben is one of the co-founders of Sibelius, and shared some insights into how ideas for new features for Sibelius were prioritised.

Functionality isn’t the only thing that can be deemed a new feature. Rather, anything that’s a marketable benefit to a customer should be considered. That could be an improved interface, compatibility, better speed etc. There are always more ideas than it’s possible to implement, so we need to choose carefully.

Ideas can come from yourself, your customers or developers. Often these people alone don’t have a big enough picture of what you’re doing to suggest solutions, nor can they tell you how to prioritise your features.

Companies often choose features by gut feel, but gut feel is a really unreliable way to choose what to prioritise. People tend to choose features that excite them personally, and there is a bias towards bigger features. Instead we need to do a cost-benefit analysis: feature priority = benefit / cost. It’s hard to predict these values, but we can start by estimating the proportion of users who will use a feature, multiplied by how much they will pay, divided by the development time. Then review your estimates later to see how accurate they were. Ben went into detail about how they did this at Sibelius.
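To make the arithmetic concrete, here is a toy calculation with made-up numbers (the figures are illustrative, not from Ben's talk):

```shell
# Hypothetical feature: 40% of 10,000 users would use it,
# each worth roughly $20 extra; estimated cost 120 developer-days.
benefit=$(( 10000 * 40 / 100 * 20 ))   # $80,000 total benefit
cost=120                               # developer-days
echo "priority score: $(( benefit / cost ))"
```

Repeating this for each candidate feature gives a rough ranking; the point is less the exact numbers than forcing an explicit estimate you can review later.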

After lunch Alex Elkins spoke (at short notice!) on problems, not solutions. Alex advised caution in focusing on solutions too early. Identifying a problem is the key to having a successful product or service, and this must happen before we try to come up with a solution. Many unsuccessful products are a result of not solving a real problem. So what makes a problem a good candidate to solve? It needs to be valid, important, well defined, actionable, and not already have a solution.

To finish, Sarah Wood, founder of Unruly, gave a keynote interview. Sarah shared her story of how she came to found Unruly after working in academia, wanting to have more time with family and make a bigger impact. Unruly’s work includes the We’re the Superhumans video promoting the Paralympics.

Infectious energetic passionate presentation with @sarahfwood on startups, viral social media, data insights in advertising #dclondon pic.twitter.com/2EIUT9QVfn

— Paul Johnson (@pdjohnson) March 3, 2017

Successful video content needs two things in order for it to be shared. Firstly, it must elicit an emotion: make someone laugh or cry, surprise, shock, or inspire. Secondly, it must invoke a social motivation to do something with that feeling.

Sarah wouldn’t do anything differently if she did it all again. She advised not to focus too much on what you would have done, instead look at what you’re doing now, and what you want to do differently both now and in future.

It was a great day, I learnt a lot and had some great conversations. I’m looking forward to the rest of the weekend.

Mar 02 2017

You might have heard about high availability before but didn’t think your site was large enough to handle the extra architecture or overhead. I would like to encourage you to think again and be creative.

Background

DigitalOcean has a concept they call floating IPs. A floating IP is an IP address that can be instantly moved from one Droplet to another Droplet in the same data center. This idea is great: it allows you to keep your site running in the event of a failure.

Credit

I have to give credit to BlackMesh for handling this process quite well. The only thing I had to do was create the tickets to change the architecture and BlackMesh implemented it.

Exact Problem

One of our support clients had the need for a complete site relaunch due to a major overhaul in the underlying architecture of their code. Specifically, they had the following changes:

  1. Change in the site docroot
  2. Migration from a single site architecture to a multisite architecture based on domain access
  3. Upgrade of PHP version that required a server replacement/upgrade in linux distribution version

Any of these individually could have benefited from this approach; we just bundled all of the changes together to deliver minimal downtime to the site’s users.

Solution

So, what is the right solution for a data migration that takes well over 3 hours to run? Site downtime for hours during peak traffic is unacceptable, so the answer we came up with was to use a floating IP that lets us easily change the backend server when we are ready to flip the switch. This allows us to migrate our data on a new, separate server using its own database (essentially having two live servers at the same time).

Benefits

Notice that we didn’t need to change the DNS records here, which meant we didn’t have to wait for DNS records to propagate all over the internet. The new site was live instantly.

Additional Details

Some other notes during the transition that may lead to separate blog posts:

  1. We created a shell script to handle the actual deployment and tested it before the actual “go live” date to minimize surprises.
  2. A private network was created to allow the servers to communicate to each other directly and behind the scenes.
  3. To complicate this process, during development (prelaunch) the user base grew so much that we had to offload the Solr server onto another machine to reduce server CPU usage. This meant that additional backend servers were also involved in this transition.

Go-Live (Migration Complete)

After you have completed your deployment process, you are ready to switch the floating IP to the new server. In our case we were using “keepalived”, which responds to a health check on the server. Our health check was a simple PHP file that responded with the text true or false. So, when we were ready to switch, we just changed the health check’s response to false. Then we got an instant switch from the old server to the new server with minimal interruption.
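For reference, a keepalived setup along these lines might look like the following sketch (the script paths, interface name, and IDs are placeholders, not our exact configuration):

```
# /etc/keepalived/keepalived.conf (sketch)
vrrp_script chk_web {
    # Exits 0 while the health-check URL returns "true"
    script "/usr/local/bin/check_health.sh"
    interval 2
    fall 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 51
    priority 100
    track_script {
        chk_web
    }
    # Runs when this node becomes master, e.g. a script that
    # reassigns the floating IP via the provider's API.
    notify_master "/usr/local/bin/assign_floating_ip.sh"
}
```

Flipping the health check to false on the old server fails its vrrp_script, demotes it, and triggers the new server's notify_master hook, which is where the floating IP reassignment happens.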

Acceptable Losses

There were a few things we couldn’t get around:

  1. The need for a content freeze
  2. The need for a user registration freeze

The reason for this was that the database updates required the site to be in maintenance mode while they were performed.

A problem worth mentioning:

  1. The database did have a few tables with acceptable losses. The user sessions table and the cache_form table were both out of sync when we switched over, so any new sessions and saved forms were unfortunately lost during this process. The result is that users would have to log in again and fill out forms that weren’t submitted. In the rare event that a user changed their name or other fields on their preferences page, those changes would be lost.

Additional Considerations

  1. Our mail preferences are handled by third parties
  2. Comments aren’t allowed on this site

Chris Martin

Chris Martin is a junior engineer at Four Kitchens. When not maintaining websites he can be found building drones, computers, robots, and occasionally traveling to China.

Feb 28 2017

Late last year we blogged about the SystemSeed company trip to Minsk to work, bond and be merry. Each was achieved in equal measure, and we came away with revitalised enthusiasm for our work and a shift in what exactly that work would contain.

I’m happy to announce a turning point at SystemSeed: we will begin to offer services beyond solely Drupal development. For more than seven years we have branded ourselves as Drupal Enthusiasts™, but times change and we must all keep up - not with trends, but with an ever-moving landscape.

Drupal+

Since November 2016 we have been developing with ReactJS and NodeJS, as well as Drupal backend micro-services, on internal projects. From February 2017 we started rolling out this new stack to client projects that will benefit from it.

We aim to contribute much of this work back to the open source community through Drupal.org and our SystemSeed GitHub account, to help support others who wish to benefit from this code and to allow others to support us in our move to embrace the latest, greatest technologies.

Our first client project to embrace this change will include:

  • New and exciting payment methods! :O
  • API-first micro-services across the backend.
  • Public accessible API design as standard, for public consumption as the client permits.
  • Fully decoupled ReactJS & NodeJS frontend GUIs.
  • API versioning. Eg, the ability to turn on site upgrades per micro-service between separate sites running the same distribution (note - not distro-wide upgrades, but a custom mix of individual micro-service upgrades) 
  • Health Monitoring dashboards as standard checking:
    • All micro-services.
    • All regression and unit tests (live and multi-dev)
  • Admin UX discovery to drive intuitive CRUD of systems data.
  • Enhanced Devops (per multi-dev build)
    • Automated high % test coverage of all code written
    • Automated documentation for code
    • Automated security checks within CI 
    • Automated open source upgrade checks within CI
    • Automated performance checks within CI
    • High sensitivity alerts to cover downtime or performance issues

This really is just the tip of the iceberg and as we open source our new toolset we’ll give a deeper insight into what strategy was followed and how everyone may benefit from this work.

#staytuned

Feb 27 2017

Drupal at the Beach.
(The Very Windy Beach)

Every year in February, Drupalers from across the country travel to San Diego to get away from the harsh winter and enjoy the perfect 72 degree California weather. Attendees visit Pacific Beach, walk down the boardwalk, and sometimes even go sailing.

Former Web Chefs Matt Grill and Dustin Younse sail through Mission Bay after a weekend at SANDCamp 2016.

This year, however, attendees were met with … a little weather.

San Diegans, like myself, always find weather surprising and novel to the point where any time it rains for more than 10 minutes, we describe it as “really coming down”. But this time it really was pouring. 75 mph gusts of wind, cloudy skies, and a strong atmospheric river causing record rainfall. Drupal was not at the beach this year.

SANDCamp 2017: a little weather.

Drupal Near the Beach

Falling in mid-February every year, SANDCamp affords many speakers the opportunity to field test trainings and sessions before they’re given at DrupalCon.

Drupal 8 with React.js and Waterwheel.js

With the help of my fellow Web Chefs, I presented the first iteration of my training API First Drupal 8 with React.js and Waterwheel.js, which I’m happy to announce will also be given at DrupalCon Baltimore! In the training, we took the canonical JavaScript application, a todo list built with React, and hooked it up to Drupal 8 through a new JavaScript library called Waterwheel.js. Todos were stored in a headless Drupal site via the JSON API module, and we even provided a login page and a like button for todos. Overall, the feedback on the training was excellent. People enjoyed learning how to turn Drupal 8 into a world-class content API while also getting their feet wet with a frontend JavaScript framework like React. I’m looking forward to improving the training and giving it at DrupalCon Baltimore this year.

Every Project is a Story

One notable session was Dwayne McDaniel’s talk Every project is a story: Applying storytelling to your client interactions in which he explained how the patterns that form good stories, form good projects, budgets, and discoveries. Dwayne explored these story structures and how they can help translate clients’ and stakeholders’ dreams into real plans.

Kalastatic

The session that caught my interest the most was From Prototype to Drupal Site with Kalastatic. Through a case study, Crispin explained the benefits of component driven design and showed off an open-source framework Kalamuna built called Kalastatic. It’s a kss-node driven static site framework for building prototypes and living style guides that integrate with Drupal. It’s a tool very similar to Emulsify, Four Kitchens’ component-driven prototyping tool and Drupal 8 theme. It is great to see the Drupal community converge on component driven development as a solid pattern for building frontends.

Keynote Surprise!

Due to the inclement weather California was experiencing that week, the scheduled keynote speaker, Darin Andersens, had his flight cancelled and couldn’t be there. Luckily Todd, Four Kitchens’ CEO and co-founder, always has a keynote in his back pocket. He fired up his laptop and gave his talk on The Future of The CMS, pontificating on where the web is going and what CMSes like Drupal must do to stay relevant.

Always Be Keynoting. https://t.co/OIqmOBur3L

— Four Kitchens (@FourKitchens) February 17, 2017

Thanks, SANDcamp!

Maybe I’ll see you at SANDCamp next year! Also, if you’ll be at DrupalCon Baltimore, sign up for my training API First Drupal 8 with React.js and Waterwheel.js, and check out the other Four Kitchens Web Chefs, too!

Luke Herrington

Luke Herrington writes JavaScript for work and for fun; he enjoys hacking on new technology and reading about the ethics of artificial intelligence.

Feb 21 2017

My last post talked about how Docker microcontainers speed up the software development workflow. Now it’s time to dive into how all this applies to Drupal.

I created a collection of Docker configuration files and scripts to make it easy to run Drupal. If you want to try it out, follow the steps in the README file.

The repository is designed around the microcontainers concept, so each Drupal site ends up with three containers of its own (Apache, MySQL and Drush), which are linked together to run our application. If you want to serve a new site, you create a separate set of containers.

In theory you could re-use containers for different web applications. However, in practice, Docker containers are resource-cheap and easy to spin up. So it’s less work to run separate containers for separate applications than it is to configure each application to play nice with the other applications running on the same container (e.g.: configuring VirtualHosts and port mappings). Or at least this is what my colleague M Parker believes.

Plus, configuring applications to play nice with each other in the same container kind of violates the “create once, run anywhere” nature of Docker.

How it works

My repository uses the docker-compose program. Docker-compose is controlled with the docker-compose.yml file, which tells Docker which containers to start, how to network them together so they serve Drupal, and how to connect them to the host machine. This means serving the Drupal repository filesystem and mapping a port on the host machine to one of the ports in one of the containers.

A useful tip to remember is that docker-compose ps will tell you the port mappings as shown in the screenshot below. This is useful if you don’t map them explicitly to ports on the host machine.

Docker terminal

Networking

If you’ve ever tried setting up a bunch of containers manually (without docker-compose), it is worth noting (and not very well documented in the Docker docs, unfortunately) that you don’t need to explicitly map port 3306:3306 for the mysql container, because docker-compose sets up a miniature network for containers run from the same docker-compose.yml. It also sets up hostnames between each container in the same docker-compose.yml. This means that the web container can refer to the mysql-server machine with the hostname mysql-server, and, even if you implicitly map 3306 to some random port on the​ host machine, web can talk to mysql-server on port 3306.

Note in this case that the container running MySQL is named db, so, when you’re installing Drupal, on step 4 (“Database configuration”) of the Drupal 7 install script, you have to expand “Advanced options”, and change “Database host” from localhost to db!
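Putting this together, a docker-compose.yml along these lines wires the three containers up (a sketch; the image names and host port are illustrative, not the exact contents of my repository):

```yaml
# docker-compose.yml (sketch, v1 format)
web:
  image: php:7-apache          # illustrative image name
  ports:
    - "8080:80"                # host port 8080 -> container port 80
  volumes:
    - .:/var/www/html          # serve the current directory
  links:
    - db                       # "db" becomes a hostname inside web
db:
  image: mysql:5.7
  environment:
    MYSQL_ROOT_PASSWORD: root
drush:
  image: mparker17/mush
  volumes:
    - .:/var/www/html
  working_dir: /var/www/html   # drush runs as if cd'd into the site
  links:
    - db
```

The links entries are what give the other containers the db hostname used in settings.php, and the shared volume is why all three see the same Drupal code.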

Filesystem

It is possible to put the Drupal filesystem into a container (which you might want to do if you wanted to deploy a container to a public server). However, it doesn’t really make sense for development, because most of the time, you’re changing the files quite frequently.

To get around this requirement for a development environment, we mount the current folder (often referred to as ‘.’) to /var/www/html in all three containers. This is done with the ‘volumes’ directive in the docker-compose.yml file. The ‘working_dir’ directive says “when you run the Drush command in the Drush container, pretend it’s running from /var/www/html”, which is the equivalent of running ‘cd /var/www/html’ before you run a drush command.

So when you run the Drush command in the Drush container, it sees that it’s currently in a Drupal directory and proceeds to load the database connection information from sites/default/settings.php, which tells it how to connect to MySQL on the db container with the correct credentials. (Recall that the links directive makes sure the drush container can access the db container, so it can connect to it on port 3306.)

The Drush container

The drush container is a bit special because it runs a single command, and is re-created every time a Drush command is used.

If you look at step 9 of my Docker configuration files, you’ll see it says…

  • Run Drush commands with:
USER_ID=$(id -u) docker-compose run --rm drush $rest_of_drush_command

… i.e.: docker-compose run --rm drush starts the container named drush and passes it $rest_of_drush_command.

Docker terminal containers

If you look at mparker17’s Dockerfile, you’ll see it contains a line saying ‘ENTRYPOINT [“drush”]’. ENTRYPOINT is a variant of the CMD command that passes all the rest of the ‘docker run’ parameters to the command specified by the ENTRYPOINT line.

So what happens when you run that ‘docker-compose run’ line is that it creates a new container from the ‘mparker17/mush’ image, with all the configuration from the ‘docker-compose.yml’ file. When that container runs, it automatically runs the ‘drush’ command, and docker-compose passes ‘$rest_of_drush_command’ to it. When the ‘drush’ command is finished, the container stops, and the ‘--rm’ flag we specified deletes the container afterwards.
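In other words, the relevant part of such a Dockerfile might look like this sketch (the base image and omitted installation steps are placeholders, not the actual file):

```dockerfile
FROM php:7-cli
# ... install Composer and Drush here (omitted) ...

# ENTRYPOINT makes the container behave like the drush binary itself:
# whatever follows `docker-compose run --rm drush` becomes drush's arguments.
ENTRYPOINT ["drush"]
```

Compare this with CMD: a CMD is replaced wholesale by any arguments you pass, whereas an ENTRYPOINT keeps running and receives them.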

Running USER_ID=$(id -u) before a command sets an environment variable that persists for that command; i.e.: when docker-compose runs, an environment variable $USER_ID exists; but $USER_ID goes away when docker-compose is finished running. You can leave out the USER_ID=$(id -u) if you add that line to your shell’s configuration. Essentially what that environment variable does is set the user account that the Drush command runs as. If you don’t specify the user account, then Docker defaults to root.

The main reason why I do this is so that if I ask Drush to make changes to the filesystem (e.g.: download a module, run drush make, etc.), the files are owned by me, not root (i.e.: so I don’t have to go around fixing ownership permissions after I run the drush command).
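As noted above, the variable can be persisted in your shell configuration instead of typed each time; here is a sketch (the cache-rebuild command is just an example):

```shell
# Add this line to ~/.bashrc or ~/.zshrc (or run it once per session)
export USER_ID=$(id -u)

# Drush invocations then shorten to, e.g.:
#   docker-compose run --rm drush cache-rebuild
echo "USER_ID is $USER_ID"
```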

It may only be necessary on Windows/Macintosh, because the virtual machine that Docker runs in on Win/Mac has different user IDs — I think if you run a Docker command from a Linux machine, your user id is already correct; but because a Docker command on a Mac/Win is run with your Mac/Win user ID (e.g.: 501) but gets passed to the docker VM’s ‘docker’ user (which runs as user 1000), some problems arise unless you’re explicit about it.

Acknowledgements

Lastly, I would like to thank Matt Parker, who has been mentoring me since I began setting up Docker and suggesting better ways to do it. He also recommends reading the Docker book if you want to explore this further.

Feb 20 2017
Feb 20

I’ve been building websites for the last 10 years. Design fads come and go but image galleries have stood the test of time and every client I’ve had has asked for one.

There are a lot of image gallery libraries out there, but today I want to show you how to use Juicebox.

Juicebox is an HTML5 responsive image gallery and it integrates with Drupal using the Juicebox module.

Juicebox is not open source; instead, it offers a free version which is fully usable but limited to 50 images per gallery. The pro version allows for unlimited images and more features.

If you’re looking for an open source alternative, look at Slick, which integrates with Drupal via the Slick module. I will cover that module in a future tutorial.

In this tutorial, you’ll learn how to display an image gallery from an image field and how to display a gallery using Views.

Getting Started

First, go download and install the Juicebox module.

Using Drush:

$ drush dl juicebox
$ drush en juicebox

Download Juicebox Library

Go to the Juicebox download page and download the free version.

Extract the downloaded file and copy the jbcore folder within the zip file into /libraries and rename the jbcore directory to juicebox.

Once everything has been copied and renamed, the path to juicebox.js should be /libraries/juicebox/juicebox.js.

Create a Gallery Using Fields

We’ll first look at how to create a gallery using just an image field. To do this, we’ll create an image field called “Image gallery” and this field will be used to store the images.

1. Go to Structure, “Content types” and click on “Manage fields” on the Article row.

2. Click on “Add field” and select Image from “Add a new field”.

3. Enter “Image gallery” into Label and click on “Save and continue”.

4. Change “Allowed number of values” to Unlimited and click on “Save field settings”.

You’ll need to do this if you want to store multiple images.

5. On the Edit page leave it as is and click on “Save settings”.

Configure Juicebox Gallery Formatter

Now that we’ve created the image fields, let’s configure the actual Juicebox gallery through the field formatter.

1. Click “Manage display”, and select “Juicebox Gallery” from the Format drop-down on the “Image gallery” field.

2. Click on the cogwheel to configure the gallery. There are a lot of options, but the only change we’ll make is to use the image alt text as the caption.

3. Click on the “Lite config” field-set and change the height to 500px.

4. Reorder the field so it’s below Body.

5. Click on Save at the bottom of the page.

Now if you go and create a test article and add images into the gallery you should see them below the Body field.

Create a Gallery Using Views

You’ve seen how to create a gallery using just the Juicebox gallery formatter, let’s now look at using Views to create a gallery.

We’ll create a single gallery that’ll use the first image of every gallery on the Article content type.

1. Go to Structure, Views and click on “Add view”.

2. Fill in the “Add new view” form with the values defined in Table 1-0.

Table 1-0. Create a new view

  • View name: Article gallery
  • Machine name: article_gallery
  • Show: Content of type Article sorted by Newest first
  • Create a page: Unchecked
  • Create a block: Unchecked


3. Click on Add in the Fields section.

4. Search for the “Image gallery” field and add it to the view.

5. Change the Format from “Unformatted list” to “Juicebox Gallery” and click on Apply.

6. On the “Page: Style options” form, select the image field you added to the view earlier for both “Image Source” and “Thumbnail Source”.

You can configure the look and feel by expanding the “Lite config” field-set. You can change the width and height, text color and more.

7. Click on Apply.

8. Click on Add next to Master and select Page from the drop-down.

9. Make sure you set a path in the “Page settings” section. Add something like /gallery.

10. Do not forget to save the View by clicking on Save.

11. Make sure you have some test articles and go to /gallery. You should see a gallery made up of the first image from each gallery.

Summary

The reason I like Juicebox in Drupal is because it’s easy to set up. With little effort you can get a nice responsive image gallery from a field or a view. The only downside I can see is that it’s not open source.

FAQ

Q: I get the following error message: “The Juicebox Javascript library does not appear to be installed. Please download and install the most recent version of the Juicebox library.”

This means Drupal can’t detect the Juicebox library in the /libraries directory. Refer to the “Getting started” section.

Feb 20 2017
Feb 20

Overview

Savas Labs has been using Docker for our local development and CI environments for some time to streamline our systems. On a recent project, we chose to integrate Phase 2’s Pattern Lab Starter theme to incorporate more front-end components into our standard build. This required building a new Docker image for running applications that the theme depends on. In this post, I’ll share:

  • A Dockerfile used to build an image with Node, npm, PHP, and Composer installed
  • A docker-compose.yml configuration and Docker commands for running theme commands such as npm start from within the container

Along the way, I’ll also provide:

  • A quick overview of why we use Docker for local development
    • This is part of a Docker series we’re publishing, so be on the lookout for more!
  • Tips for building custom images and running common front-end applications inside containers.

Background

We switched to using Docker for local development last year and we love it - so much so that we even proposed a DrupalCon session on our approach and experience, which we hope to deliver. Using Docker makes it easy for developers to quickly spin up consistent local development environments that match production. In the past we used Vagrant and virtual machines, even a Drupal-specific flavor, DrupalVM, for these purposes, but we’ve found Docker to be faster when switching between multiple projects, which we often do on any given workday.

Usually we build our Docker images from scratch to closely match production environments. However, for agile development and rapid prototyping, we often make use of public Docker images. In these cases we’ve relied on Wodby’s Docker4Drupal project, which is “a set of docker containers optimized for Drupal.”

We’re also fans of the atomic design methodology and present our clients with interactive style guides early on to facilitate better collaboration throughout. Real interaction with the design is necessary from the get-go; gone are the days of the static Photoshop file at the outset that “magically” translates to a living design at the end. So when we heard of the Pattern Lab Starter Drupal theme, which leverages Pattern Lab (a tool for building pattern-driven user interfaces using atomic design), we were excited to bake the front-end components into our Docker world. Oh, the beauty of open source!

Building the Docker image

To experiment with the Pattern Lab Starter theme we began with a vanilla Drupal 8 installation, and then quickly spun up our local Docker development environment using Docker4Drupal. We then copied the Pattern Lab Starter code to a new themes/custom/pattern_lab_starter directory in our Drupal project.

Running the Phase 2 Pattern Lab Starter theme requires Node.js, the node package manager npm, PHP, and the PHP dependency manager Composer. Node and npm are required for managing the theme’s node dependencies (such as Gulp, Bower, etc.), while PHP and Composer are required by the theme to run and serve Pattern Lab.
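A quick way to confirm all four of those tools are available in a given environment (host or container) is a loop like the following; this is a generic sanity check, not something the theme itself ships:

```shell
# Report the version of each tool the theme needs, or flag it as missing
for tool in node npm php composer; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $("$tool" --version 2>/dev/null | head -n 1)"
  else
    echo "$tool: not installed"
  fi
done
```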

While we could install these applications on the host machine, outside of the Docker image, that defeats the purpose of using Docker. One of the great advantages of virtualization, be it Docker or a full VM, is that you don’t have to rely on installing global dependencies on your local machine. One of the many benefits of this is that it ensures each team member is developing in the same environment.

Unfortunately, while Docker4Drupal provides public images for many applications (such as Nginx, PHP, MariaDB, Mailhog, Redis, Apache Solr, and Varnish), it does not provide images for running the applications required by the Pattern Lab Starter theme.

One of the nice features of Docker though is that it is relatively easy to create a new image that builds upon other images. This is done via a Dockerfile which specifies the commands for creating the image.

To build an image with the applications required by our theme we created a Dockerfile with the following contents:

FROM node:7.1
MAINTAINER Dan Murphy <[email protected]>

RUN apt-get update && \
    apt-get install -y php5-dev && \
    curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
    # Directory required by Yeoman to run.
    mkdir -p /root/.config/configstore && \
    # Clean up.
    apt-get clean && \
    rm -rf \
      /root/.composer \
      /tmp/* \
      /usr/include/php \
      /usr/lib/php5/build \
      /var/lib/apt/lists/*

# Permissions required by Yeoman to run: https://github.com/keystonejs/keystone/issues/1566#issuecomment-217736880
RUN chmod g+rwx /root /root/.config /root/.config/configstore

EXPOSE 3001 3050

The commands in this Dockerfile:

  • Set the official Node 7 image as the base image. This base image includes Node and npm.
  • Install PHP 5 and Composer.
  • Make configuration changes necessary for running Yeoman, a popular Node scaffolding system used to create new component folders in Pattern Lab.
  • Expose ports 3001 and 3050 which are necessary for serving the Pattern Lab style guide.

From this Dockerfile we built the image savaslabs/node-php-composer and made it publicly available on DockerHub. Please check it out and use it to your delight!

One piece of advice I have for building images for local development: while Alpine Linux based images may be much smaller in size, their bare-bones nature and lack of common packages bring trade-offs that make them harder to build upon. For that reason, we based our image on the standard Debian Jessie Node image rather than the Alpine variant.

This is also why we didn’t just simply start from the wodby/drupal-php:7.0 image and install Node and npm on it. Unfortunately, the wodby/drupal-php image is built from alpine:edge which lacks many of the dependencies required to install Node and npm.

Now a Docker purist might critique this image and recommend only “one process per container”. This is a drawback of this approach, especially since Wodby already provides a PHP image with Composer installed. Ideally, we’d use that in conjunction with separate images that run Node and npm.

However, the theme’s setup makes that difficult. Essentially PHP scripts and Composer commands are baked into the theme’s npm scripts and gulp tasks, making it difficult to untangle them. For example, the npm start command runs Gulp tasks that depend on PHP to generate and serve the Pattern Lab style guide.

Due to these constraints, and since this image is for local development, isn’t being used to deploy a production app, and encapsulates all of the applications required by the Pattern Lab Starter theme, we felt comfortable with this approach.

Using the image

To use this image, we specified it in our project’s docker-compose.yml file (see full file here) by adding the following lines to the services section:

node-php-composer:
 image: savaslabs/node-php-composer:1.2
 ports:
   - "3050:3050"
   - "3001:3001"
 volumes_from:
   - php

This defines the configuration that is applied to a node-php-composer container when spun up. This configuration:

  • Specifies that the container should be created from the savaslabs/node-php-composer image that we built and referenced previously
  • Maps the container ports to our host ports so that we can access the Pattern Labs style guide locally
  • Mounts the project files (that are mounted to the php container) so that they are accessible to the container.

With this service defined in the docker-compose.yml we can start using the theme!

First we spin up the Docker containers by running docker-compose up -d.

Once the containers are running, we can open a Bash shell in the theme directory of the node-php-composer container by running the command:

docker-compose run --rm --service-ports -w /var/www/html/web/themes/custom/pattern_lab_starter node-php-composer /bin/bash

We use the --service-ports option to ensure the ports used for serving the style guide are mapped to the host.

Once inside the container in the theme directory, we install the theme’s dependencies and serve the style guide by running the following commands:

npm install --unsafe-perm
npm start

Voila! Once npm start is running, we can access the Pattern Lab style guide at the URLs that are output, for example http://localhost:3050/pattern-lab/public/.

Note: Docker runs containers as root, so we use the --unsafe-perm flag to run npm install with root privileges. This is okay for local development, but would be a security risk if deploying the container to production. For information on running the container as an unprivileged user, see this documentation.

Gulp and Bower are installed as theme dependencies during npm install, therefore we don’t need either installed globally in the container. However, to run these commands we must shell into the theme directory in the container (just as we did before), and then run Gulp and Bower commands as follows:

  • To install Bower libraries run $(npm bin)/bower install --allow-root {project-name} --save
  • To run arbitrary Gulp commands run $(npm bin)/gulp {command}

Other commands listed in the Pattern Lab Starter theme README can be run in similar ways from within the node-php-composer container.

Conclusion

Using Docker for local development has many benefits, one of which is that developers can run applications required by their project inside containers rather than having to install them globally on their local machines. While we typically think of this in terms of the web stack, it also extends to running applications required for front-end development. The Docker image described in this post allows several commonly used front-end applications to run within a container like the rest of the web stack.

While this blog post demonstrates how to build and use a Docker image specifically for use with the Pattern Lab Starter theme, the methodology can be adapted for other uses. A similar approach could be used with Zivtech’s Bear Skin theme, which is another Pattern Lab based theme, or with other contributed or custom themes that rely on npm, Gulp, Bower, or Composer.

If you have any questions or comments, please post them below!

Feb 15 2017
Feb 15

We use Docker for our development environments because it helps us adhere to our commitment to excellence. It ensures an identical development platform across the team while also achieving parity with the production environment. These efficiency gains (among others we’ll share in an ongoing Docker series) over traditional development methods enable us to spend less time on setup and more time building amazing things.

Part of our workflow includes a mechanism to establish and update the seed database which we use to load near-real-time production content to our development environments as well as our automated testing infrastructure. We’ve found it’s best to have real data throughout the development process, rather than using stale or dummy data which runs the risk of encountering unexpected issues toward the end of a project. One efficiency boon we’ve recently implemented and are excited to share is a technique that dramatically speeds up database imports, especially large ones. This is a big win for us since we’re often importing large databases multiple times a day on a project. In this post we’ll look at:

  • how much faster data volume imports are compared to traditional database dumps piped to mysql
  • how to set up a data volume import with your Drupal Docker stack
  • how to tie in this process with your local and continuous integration environments

The old way

The way we historically imported a database was to pipe a SQL database dump file into the MySQL command-line client:

mysql -u{some_user} -p{some_pass} {database_name} < /path/to/database.sql

An improvement upon the default method above which we’ve been using for some time allows us to monitor import progress utilizing the pv command. Large imports can take many minutes, so having insight into how much time remains is helpful to our workflow:

pv /path/to/database.sql | mysql -u{some_user} -p{some_pass} {database_name}

On large databases, though, MySQL imports can be slow. If we look at a database dump SQL file, we can see why. For example, a 19 MB database dump file we use in one of our test cases further on in this post contains these instructions:

--
-- Table structure for table `block_content`
--

DROP TABLE IF EXISTS `block_content`;
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `block_content` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `revision_id` int(10) unsigned DEFAULT NULL,
  `type` varchar(32) CHARACTER SET ascii NOT NULL COMMENT 'The ID of the target entity.',
  `uuid` varchar(128) CHARACTER SET ascii NOT NULL,
  `langcode` varchar(12) CHARACTER SET ascii NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `block_content_field__uuid__value` (`uuid`),
  UNIQUE KEY `block_content__revision_id` (`revision_id`),
  KEY `block_content_field__type__target_id` (`type`)
) ENGINE=InnoDB AUTO_INCREMENT=12 DEFAULT CHARSET=utf8mb4 COMMENT='The base table for block_content entities.';
/*!40101 SET character_set_client = @saved_cs_client */;

--
-- Dumping data for table `block_content`
--

LOCK TABLES `block_content` WRITE;
/*!40000 ALTER TABLE `block_content` DISABLE KEYS */;
set autocommit=0;
INSERT INTO `block_content` VALUES (1,1,'basic','a9167ea6-c6b7-48a1-ac06-6d04a67a5d54','en'),(2,2,'basic','2114eee9-1674-4873-8800-aaf06aaf9773','en'),(3,3,'basic','855c13ba-689e-40fd-9b00-d7e3dd7998ae','en'),(4,4,'basic','8c68671b-715e-457d-a497-2d38c1562f67','en'),(5,5,'basic','bc7701dd-b31c-45a6-9f96-48b0b91c7fa2','en'),(6,6,'basic','d8e23385-5bda-41da-8e1f-ba60fc25c1dc','en'),(7,7,'basic','ea6a93eb-b0c3-4d1c-8690-c16b3c52b3f1','en'),(8,8,'basic','3d314051-567f-4e74-aae4-a8b076603e44','en'),(9,9,'basic','2ef5ae05-6819-4571-8872-4d994ae793ef','en'),(10,10,'basic','3deaa1a9-4144-43cc-9a3d-aeb635dfc2ca','en'),(11,11,'basic','d57e81e8-c613-45be-b1d5-5844ba15413c','en');
/*!40000 ALTER TABLE `block_content` ENABLE KEYS */;
UNLOCK TABLES;
commit;

When we pipe the contents of the MySQL database dump to the mysql command, the client processes each of these instructions sequentially in order to (1) create the structure for each table defined in the file, (2) populate the database with data from the SQL dump and (3) do post-processing work like create indices to ensure the database performs well. The example here processes pretty quickly, but if your site has a lot of historic content, as many of our clients do, then the import process can take enough time that it throws a wrench in our rapid workflow!

What happens when mysql finishes importing the SQL dump file? The database contents (often) live in /var/lib/mysql/{database}, so for example for the block_content table mentioned above, assuming you’re using the typically preferred InnoDB storage engine, there are two files called block_content.frm and block_content.ibd in /var/lib/mysql/{database}/. The /var/lib/mysql directory will also contain a number of other directories and files related to the configuration of the MySQL server.

Now, suppose that instead of sequentially processing the SQL instructions contained in a database dump file, we could provide developers with a snapshot of the /var/lib/mysql directory for a given Drupal site. Would restoring that snapshot be faster than the traditional database import methods? Let’s look at two test cases to find out!

MySQL import test cases

The table below shows the results of two test cases, one using a 19 MB database and the other using a 4.7 GB database.

Method                       Database size   Time to drop tables and restore (seconds)
Traditional mysql            19 MB           128
Docker data volume restore   19 MB           11
Traditional mysql            4.7 GB          606
Docker data volume restore   4.7 GB          85

In other words, the MySQL data volume import completes, on average, in about 11% of the time, or 9 times faster, than a traditional MySQL dump import would take!
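That figure follows directly from the table; a one-liner to check the two ratios:

```shell
# Restore time as a percentage of the traditional import time, per the table above
awk 'BEGIN { printf "19 MB: %.1f%%\n4.7 GB: %.1f%%\n", 100 * 11 / 128, 100 * 85 / 606 }'
```

This prints 8.6% for the 19 MB case and 14.0% for the 4.7 GB case, which average out to roughly the 11% quoted above.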

Since a GIF is worth a thousand words, compare these two processes side-by-side (both are using the same 19 MB source database; the first is using a data volume restore process while the second is using the traditional MySQL import process). You can see that the second process takes considerably longer!

Docker data volume restore

Traditional MySQL database dump import

Use MySQL volume for database imports with Docker

Here’s how the process works. Suppose you have a Docker stack with a web container and a database container, and that the database container has data in it already (your site is up and running locally). Assuming a database container name of drupal_database, to generate a volume for the MySQL /var/lib/mysql contents of the database container, you’d run these commands:

# Stop the database container to prevent read/writes to it during the database
# export process.
docker stop drupal_database
# Now use the carinamarina/backup image with the `backup` command to generate a
# tar.gz file based on the `/var/lib/mysql` directory in the `drupal_database`
# container.
docker run --rm --volumes-from drupal_database carinamarina/backup backup \
--source /var/lib/mysql/ --stdout --zip > db-data-volume.tar.gz

With the 4.7 GB sample database above, this process takes 239 seconds and results in a 702 MB compressed file.

We’re making use of the carinamarina/backup image produced by Rackspace to create an archive of the database files.

You can then distribute this file to your colleagues (at Savas Labs, we use Amazon S3), or make use of it in continuous integration builds (more on that below), using these commands:

# Copy the data volume tar.gz file from your team's AWS S3 bucket.
if [ ! -f db/db-data-volume.tar.gz ]; then aws s3 cp \
s3://{your-bucket}/mysql-data-volume/db-data-volume.tar.gz db-data-volume.tar.gz; fi
# Stop the database container to prevent read/writes during the database
# restore process.
docker stop drupal_database
# Remove the /var/lib/mysql contents from the database container.
docker run --rm --volumes-from drupal_database alpine:3.3 rm -rf /var/lib/mysql/*
# Use the carinamarina/backup image with the `restore` command to extract
# the tar.gz file contents into /var/lib/mysql in the database container.
docker run --rm --interactive --volumes-from drupal_database \
carinamarina/backup restore --destination /var/lib/mysql/ --stdin \
--zip < db-data-volume.tar.gz
# Start the database container again.
docker start drupal_database

So, not too complicated, but it will require a change in your processes for generating seed databases to distribute to your team for local development, or for CI builds. Instead of using mysqldump to create the seed database file, you’ll need to use the carinamarina/backup image to create the .tar.gz file for distribution. And instead of mysql {database} < database.sql you’ll use carinamarina/backup to restore the data volume.

In our team’s view this is a small cost for the enormous gains in database import time, which in turn boosts productivity to the tune of faster CI builds and refreshes of local development environments.

Further efficiency gains: integrate this process with your continuous integration workflow

The above steps can be manually performed by a technical lead responsible for generating and distributing the MySQL data volume to team members and your testing infrastructure. But we can get further productivity gains by automating this process completely with Travis CI and GitHub hooks. In outline, here’s what this process looks like:

1. Generate a new seed database SQL dump after production deployments

At Savas Labs, we use Fabric to automate our deployment process. When we deploy to production (not on a Docker stack), our post-deployment tasks generate a traditional MySQL database dump and copy it to Amazon S3:

def update_seed_db():
    run('drush -r %s/www/web sql-dump \
    --result-file=/tmp/$(date +%%Y-%%m-%%d)--post-deployment.sql --gzip \
    --structure-tables-list=cache,cache_*,history,search_*,sessions,watchdog' \
    % env.code_dir)
    run('/usr/local/bin/aws s3 cp /tmp/$(date +%Y-%m-%d)--post-deployment.sql.gz \
    s3://{bucket-name}/seed-database/database.sql.gz --sse')
    run('rm /tmp/$(date +%Y-%m-%d)--post-deployment.sql.gz')

2. When work is merged into develop, create a new MySQL data volume archive

We use git flow as our collaboration and documentation standard for source code management on our Drupal projects. Whenever a developer merges a feature branch into develop, we update the MySQL data volume archive dump for use in Travis CI tasks and local development. First, there is a specification in our .travis.yml file that calls a deployment script:

deploy:
  provider: script
  script:
    - resources/scripts/travis-deploy.sh
  skip_cleanup: true
  on:
    branch: develop

And the travis-deploy.sh script:

#!/usr/bin/env bash

set -e

make import-seed-db
make export-mysql-data
aws s3 cp db-data-volume.tar.gz \
s3://{bucket-name}/mysql-data-volume/db-data-volume.tar.gz --sse

This script: (1) imports the traditional MySQL seed database file from production, and then (2) creates a MySQL data volume archive. We use a Makefile to standardize common site provisioning tasks for developers and our CI systems.
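The two Make targets invoked by travis-deploy.sh are not shown in this post; roughly, they would wrap the commands from the previous sections, something like the sketch below (target bodies and container names are assumptions, and the {bucket-name}/{some_user} placeholders match the snippets above; recipe lines must be indented with tabs):

```make
# Hypothetical Makefile targets; adapt names and credentials to your stack
import-seed-db:
	aws s3 cp s3://{bucket-name}/seed-database/database.sql.gz database.sql.gz
	gunzip -c database.sql.gz | \
		docker exec -i drupal_database mysql -u{some_user} -p{some_pass} {database_name}

export-mysql-data:
	docker stop drupal_database
	docker run --rm --volumes-from drupal_database carinamarina/backup backup \
		--source /var/lib/mysql/ --stdout --zip > db-data-volume.tar.gz
	docker start drupal_database
```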

3. Pull requests and local development make use of the MySQL data volume archive

Now, whenever developers want to refresh their local environment by wiping the existing database and re-importing the seed database, or, when a Travis CI build is triggered by a GitHub pull request, these processes can make use of an up-to-date MySQL data volume archive file which is super fast to restore! This way, we ensure we’re always testing against the latest content and configuration, and avoid running into costly issues having to troubleshoot inconsistencies with production.

Conclusion

We’ve invested heavily in Docker for our development stack, and this workflow update is a compelling addition to that toolkit since it has substantially sped up MySQL imports and boosted productivity. Try it out in your Docker workflow and we invite comments to field any questions and hear about your successes. Stay tuned for further Docker updates!

Feb 13 2017
Feb 13

The definition of “what a search page is” varies from project to project. Some clients are happy with the core Search module, others want a full blown search engine.

Drupal offers a wide range of options when it comes to building custom search pages. You can create a basic search page using the core Search module or if you’re looking for something advanced you could use Search API.

In the recorded webinar I cover the following:

  • Search in Drupal 7 (1:24)
  • What’s new in Drupal 8 (8:37)
  • Create Search Page using Core Search (15:24)
  • Create Search Page using Views (19:20)
  • How to Modify Search Results (22:34)
  • Introduction to Search API (25:28)
  • How to Create Facets (38:20)

Modules Mentioned in Webinar

Extra Resources

Feb 12 2017
Feb 12

This is the first of a series of blog posts to support the traditional project managers I am coaching, to help them get their bearings in the deep and murky waters of Agile projects and Scrum teams.

Before the Scrum purists amongst you vehemently shake your heads or berate me over the title, consider being pragmatic. In the professional services world there is always a project manager to manage complexity and facilitate the Scrum team(s). My remit is to facilitate and empower the role to help the team, business and customers succeed, rather than debate its applicability and existence.

“Obey the principles without being bound by them.”
– Bruce Lee

I’ll be deep-diving into a PM’s role in the context of specific Scrum ceremonies in upcoming posts; however, it seems apt to start with some health warnings.

Toxic Behaviour for a Scrum team

A) This is not a guide for you to try and replace the Scrum master (you cannot) or the product owner (you cannot), or both! (you cannot). Nor is it a reference for you to justify imposing your will on the team (you cannot). It’s a guide to enable you to add ‘value’ to the ‘Scrum team’ and fulfill your purpose of managing risk on complex engagements.

B) Please don’t try to fake it till you make it! You will be caught out and the team will lose respect for you. If you don’t know, embrace not knowing and work to change that. Learn, up-skill, ask for help, do a pre-project retrospective on your own experience and discuss it with your Scrum master and/or Agile coach (if you have one). If you go in waving your strengths and weaknesses, we will respect you for your courage and openness… they are part of our value system.

C) Own your failures and reflect on them with the Scrum master and/or an Agile coach. Don’t look for patsies; the team will see through it and you will be called out on it. If you are failing, own it, be courageous and open, and respect the knowledge and skills your Scrum team has (and you don’t); in turn you will earn their respect. If you blame someone else for your shortcomings so that you can hide behind them, you do not have the DNA to be in an agile environment.

Agility and Scrum: it’s a culture thing!

Agile Culture

In order for you (a PM) to facilitate a Scrum team (yes, your role is one of facilitation only), it is essential that you understand and embrace an agile culture, not just follow parts of a framework (Scrum). Toe-dipping is not going to work; you’re either committed or not. It’s time to quit being a chicken and start living like a pig.

……………………

Lastly, if you got value from what I have shared, please consider giving back by contributing to @BringPTP; you can follow, broadcast or donate.

Peace Through Prosperity (PTP) works to improve the environment for peacebuilding by nurturing prosperity in conflict affected communities. We work to alleviate poverty and secure livelihoods through empowering micro-entrepreneurs with knowledge, skills and increasing their access to income and opportunities. We support small businesses, owned/managed by vulnerable and marginalised individuals/groups in society.

Feb 11 2017
Feb 11

Docker, a container-based technology which I recently came across, is great for setting up environments. It was first introduced to the world by Solomon Hykes, founder and CEO of dotCloud, at the Python Developers Conference in Santa Clara, California, in March 2013. The project was quickly open-sourced and made available on GitHub, where anyone can download and contribute to it.

Containers vs. Virtual Machines

You might be wondering, “What is the difference between Containers (like Docker) and Virtual Machines”?

Well, virtual machines (VMs) work by creating a virtual copy of a computer’s hardware and running a full operating system on that virtual hardware. Each new VM that you create results in a new copy of that virtual hardware, which is computationally expensive. Many people use VMs because they allow you to run an application in a separate environment which can have its own versions of software and settings, different from the host machine.

Container technologies like Docker, on the other hand, isolate the container’s environment, software and settings in a sandbox, but all sandboxes share the same operating-system kernel and hardware as the host computer. Each new container results only in a new sandbox. This lets us pack a lot more applications into a single physical server than virtual machines do.

Docker containers are isolated enough that the root process in a container cannot see the host machine’s processes or filesystem. However, it may still be able to make certain system calls to the kernel that a regular user could not, because in Docker the kernel is shared with the host machine. This is also why Docker containers are not virtual machines, and why they are a lot faster.

Note, however, that Docker relies on technology which is only available in the Linux kernel. When you run Docker on a Windows or Mac host machine, Docker and all its containers run inside a virtual machine.

That said, there are two projects trying to bring Docker-style containers natively to OS X: Dlite and xhyve. But last I heard, these projects were still very experimental, so consider yourself warned.

On a Mac host machine, it’s probably good to suspend containers when you are done with them, because they run inside a virtual machine and that carries a lot of overhead. On a Linux host machine there is no need to suspend them, because idle containers create little additional overhead (no more than, say, MAMP).

Docker is a tool that promises to scale into any environment, streamlining the workflow and responsiveness of agile software organizations.

Docker’s Architecture

This is a diagram explaining the basic client-server architecture which Docker uses.

Image source: http://www.docker.com

Important Terminology

  • Docker daemon: The Docker engine which runs on the host machine, as shown in the image above.
  • Docker client: The Docker CLI, which is used to interact with the daemon.

Workflow components

  • Docker image: A read-only disk image in which the environment and your application reside.
  • Docker container: A read/writeable instance of an image, which you can start, stop, move, and delete.
  • Docker registry: A public or private repository to store images.
  • Dockerfile: A file of instructions for building a single image. You can think of a Dockerfile as a kind of Vagrantfile, a single Chef cookbook, an Ansible script, or a Puppet script.
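To make the Dockerfile idea concrete, here is a minimal sketch. The php:7.0-apache base image is an official image on Docker Hub; the application path is a placeholder:

```dockerfile
# Start from an official image bundling Apache and PHP.
FROM php:7.0-apache

# Copy your application code into Apache's document root.
COPY ./app /var/www/html

# Expose the HTTP port; the base image's default entrypoint starts Apache.
EXPOSE 80
```

Running `docker build -t my-app .` in the directory containing this file produces an image, and `docker run -p 8080:80 my-app` starts a container from it.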

Microservices

Because Docker allows you to run so many containers at the same time, it has popularized the idea of microservices: a collection of containers, each of which contains a single program, all of which work together to run a complex application (e.g. Drupal).

Taking Drupal as an example, every Drupal site has at least two dependencies: an HTTP server (Apache, Nginx, etc.) running PHP; and MySQL. The idea of microservices would involve packaging Apache+PHP separately from MySQL; as opposed to most Drupal virtual machine images which bundle them together into the same VM. For more complicated setups, you could add another container for Solr, another container for LDAP, etc.
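As a hedged sketch of what that separation might look like with Docker Compose (the service names, image tags and credentials below are illustrative, not a recommended production setup):

```yaml
# docker-compose.yml: one container per service.
version: '2'
services:
  web:
    image: php:7.0-apache       # HTTP server + PHP in one container
    ports:
      - "8080:80"
    volumes:
      - ./drupal:/var/www/html  # your Drupal codebase
    depends_on:
      - db
  db:
    image: mysql:5.7            # MySQL in its own container
    environment:
      MYSQL_DATABASE: drupal
      MYSQL_USER: drupal
      MYSQL_PASSWORD: drupal
      MYSQL_ROOT_PASSWORD: root
```

Swapping MySQL for MariaDB, or adding a Solr service, then becomes a one-line change rather than a VM rebuild.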

For me, the main advantage of using microservices is that it’s easier to update or swap one dependency of an application without affecting the rest of it. Another way of looking at this is that microservices make it easier to modify one piece without waiting a long time for a whole virtual machine to rebuild.

When I was using a virtual machine on a particularly complex project, if I needed to make a change to a setting, I had to make that change in the Puppet config, then run vagrant destroy && vagrant up and wait two hours for it to tell me that the new configuration wasn’t compatible with some other piece of the system. At which point I had to repeat the two hour process, which wasted a lot of time.

If I had been using Docker (properly), I could have just changed the setting for that one program, rebuilt that program’s container (5 seconds), and not had to worry that one piece of the machine needed at least Java 6 while another piece could not work without Java 5.

Now that you know the possibilities with Docker, watch this space to find out how all this applies to Drupal.

Feb 09 2017
Feb 09

Not sure who's to blame, but we have a new HTML validation method from GoDaddy. It is an improvement on the "no HTML validation at all" phase they went through, but it took me a while to make it work with Apache. The problem was the hidden (dot) directory they ask you to put your validation code in: /.well-known/pki-validation/godaddy.html

In my case there were a couple of reasons why this was difficult:

  • I didn't know about the hidden directory (.) block in Apache.
  • In my case some domains run the whole site over HTTPS, so I needed to make the new rules co-exist with the old HTTPS redirection rules.
  • I have a mixture of hosting setups. For some sites I control Apache, so I could use virtual host configurations. But for others (like the ones running on Acquia) I needed to create .htaccess rules.

The solution was much simpler than I anticipated, but quite difficult to debug. Finally I made it work for both environments.

I could have used the DNS ownership verification method, but in my case that would mean involving the people owning the domain. In my experience that takes longer and can become really involved when the owner doesn't know anything about DNS.

Using Virtual Host config (possible on self hosted sites)


RewriteEngine  on
RewriteRule    "^/\.well-known/pki-validation/godaddy\.html/" "/godaddycode.txt" [PT]
RewriteRule    "^/\.well-known/pki-validation/godaddy\.html$" "/godaddycode.txt" [PT]
    

If the site is only running on HTTPS and I have a redirection rule I'll work around these URLs. The rules below will work together with the one above:


RewriteCond %{REQUEST_URI} !=/.well-known/pki-validation/godaddy.html
RewriteCond %{REQUEST_URI} !=/.well-known/pki-validation/godaddy.html/
RewriteRule ^(.*)$ https://www.mydomain.com/ [R=permanent,L]
    

Using only .htaccess rules (and with no HTTPS redirection):


# GoDaddy verification rewrite rules
<IfModule mod_rewrite.c>
  RewriteRule    "^\.well-known/pki-validation/godaddy\.html/" "/godaddycode.txt" [PT,L]
  RewriteRule    "^\.well-known/pki-validation/godaddy\.html$" "/godaddycode.txt" [PT,L]
</IfModule>
    

Using .htaccess rules when site is only running over HTTPS:


# GoDaddy with HTTPS redirection rules
<IfModule mod_rewrite.c>
  # GoDaddy PassThrough rules
  RewriteRule    "^\.well-known/pki-validation/godaddy\.html/" "/godaddycode.txt" [PT,L]
  RewriteRule    "^\.well-known/pki-validation/godaddy\.html$" "/godaddycode.txt" [PT,L]
  # Set "protossl" to "s" if we were accessed via https://.  This is used later
  # if you enable "www." stripping or enforcement, in order to ensure that
  # you don't bounce between http and https.
  RewriteRule ^ - [E=protossl]
  RewriteCond %{HTTPS} on
  RewriteRule ^ - [E=protossl:s]
  # Redirect HTTP to HTTPS
  RewriteCond %{HTTP:X-Forwarded-Proto} !=https
  RewriteCond %{REQUEST_URI} !=/godaddycode.txt
  RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
</IfModule>
    

And to make this work on Acquia I had to borrow some rules from the D8 .htaccess.

So I replaced these sections/rules:


# Protect files and directories from prying eyes (D7)
<FilesMatch "\.(engine|inc|info|install|make|module|profile|test|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)(|~|\.sw[op]|\.bak|\.orig|\.save)?$|^(\..*|Entries.*|Repository|Root|Tag|Template)$|^#.*#$|\.php(~|\.sw[op]|\.bak|\.orig\.save)$">
  Order allow,deny
</FilesMatch>

# Block access to "hidden" directories whose names begin with a period... (D7)
RewriteRule "(^|/)\." - [F]
    

With these D8 sections/rules:


# Protect files and directories from prying eyes (D8)
<FilesMatch "\.(engine|inc|install|make|module|profile|po|sh|.*sql|theme|twig|tpl(\.php)?|xtmpl|yml)(~|\.sw[op]|\.bak|\.orig|\.save)?$|^(\.(?!well-known).*|Entries.*|Repository|Root|Tag|Template|composer\.(json|lock))$|^#.*#$|\.php(~|\.sw[op]|\.bak|\.orig|\.save)$">
  <IfModule mod_authz_core.c>
    Require all denied
  </IfModule>
  <IfModule !mod_authz_core.c>
    Order allow,deny
  </IfModule>
</FilesMatch>

# Block access to "hidden" directories whose names begin with a period... (D8)
RewriteRule "(^|/)\.(?!well-known)" - [F]
    

I hope this helps someone else. It took me some time to figure out, and I couldn't find a specific blog post about it.

Note: just to be super clear, you should put the code given by GoDaddy into a file called godaddycode.txt in your docroot directory.

Feb 07 2017
Feb 07

In Drupal 8 unpublished content is, by default, not accessible to users without the right privileges, who will receive a 403 Forbidden page instead. Although this behaviour is understandable, and good security practice, it can lead to really terrible user experiences and even create issues for SEO performance. In this post I’ll look at solving this problem by redirecting users to appropriate content instead of leaving them with an unexplained 403.

When is unpublished not unpublished?

In my scenario I’m using the experimental content_moderation module, so actually my content can have several different states, even if the content status is unpublished. For example, an ‘Archived’ moderation state where content is unpublished.

It’s easy to see how, in a publishing setup like this, links that point to content that is now unpublished might remain on the web for some time, leading to lots of 403 errors and frustrated users.

I’ve looked at some solutions, mainly for Drupal 7, which more or less all mess with the “node/%node” route access callback (the Unpublished Nodes Redirect module takes this approach, for instance).

I personally don’t like altering core access callbacks. Besides making your code look like spaghetti, it interrupts a check that should simply return Allow or Deny for a piece of content, and turns it into a redirect.

What if something is wrong with your access service and, by mistake, your code Allows the request?

I opt for a different approach, making use of Drupal 8 Events.

Using Drupal 8 Events for intelligent redirects

Let’s imagine we’re building a site for an entertainments company with a content type of ‘shows’. Each show node is only published on the site for a limited time, after which it is archived.

Our goal is to redirect users to an ‘/all-shows’ landing page instead of the 403 Forbidden page, if the requested show node is unpublished.

Step 1: Find the right Event

First, we must find the right Event to hook into. Here’s the list of Events in core.

RoutingEvents::ALTER looked like a good candidate, but we don’t want to touch default behaviours. We are happy with our route; we just need to conditionally change the response.

KernelEvents::RESPONSE is our man.

Step 2: The Redirect On Unpublished Event Service

In order to hook into the event we need an event_subscriber service.

(NB: ResponseSubscriber.php is located in /src/)

The my_module.services.yml file notifies Drupal about our service configuration: where to find its class (namespace Drupal\my_module\ResponseSubscriber) and which group our service belongs to (event_subscriber).
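A minimal version of that file might look like this (the module and class names here are the illustrative ones used throughout this example):

```yaml
# my_module.services.yml
services:
  my_module.response_subscriber:
    class: Drupal\my_module\ResponseSubscriber
    tags:
      - { name: event_subscriber }
```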

The ResponseSubscriber class is where our code lives. Here we define which event we subscribe to (KernelEvents::RESPONSE) and the callback to call when it fires (the class’s own ‘alterResponse’ method).
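Here is a hedged sketch of what such a class could look like for the shows example; the ‘show’ bundle name and the /all-shows destination are assumptions from the scenario above, not fixed API values:

```php
<?php

namespace Drupal\my_module;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpFoundation\RedirectResponse;
use Symfony\Component\HttpKernel\Event\FilterResponseEvent;
use Symfony\Component\HttpKernel\KernelEvents;

/**
 * Redirects 403 responses for unpublished 'show' nodes to /all-shows.
 */
class ResponseSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents() {
    // Call alterResponse() when the kernel is about to send the response.
    return [KernelEvents::RESPONSE => 'alterResponse'];
  }

  public function alterResponse(FilterResponseEvent $event) {
    // Only act on the master request, and only on 403 responses.
    if (!$event->isMasterRequest() || $event->getResponse()->getStatusCode() != 403) {
      return;
    }
    // The upcasted node is available on node routes.
    $node = $event->getRequest()->attributes->get('node');
    if ($node && $node->bundle() == 'show' && !$node->isPublished()) {
      $event->setResponse(new RedirectResponse('/all-shows'));
    }
  }

}
```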

You can also use other services to add additional conditions. For example, if you want to redirect only if the user is authenticated (i.e. anonymous users will still get a 403 error):
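For instance, assuming the ‘current_user’ service has been injected into the subscriber via its constructor (add arguments: ['@current_user'] to the service definition), a fragment of alterResponse() could read:

```php
// Only redirect authenticated users; anonymous visitors keep the
// default 403 response. Assumes $this->currentUser was set through
// constructor injection of the 'current_user' service.
if (!$this->currentUser->isAuthenticated()) {
  return;
}
```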

Summary

We can use Drupal 8’s Events to create intelligent redirects for unpublished content, sending users to relevant pages instead of returning a 403 Forbidden page. If we make use of additional services, we can even create different behaviours for different types of user, tailoring the redirect behaviour for different audiences.

Check the full code on github

Jan 31 2017
Jan 31

Drupal Global Sprint Weekend, which took place last weekend, sees Drupal developers from around the world come together to contribute to the open source content management system’s codebase. On Saturday, in London, we took the opportunity to give back to the Drupal community by hosting a sprint at the Manifesto studio.

Wait, what’s a Drupal sprint?

For those who don’t know what a code sprint is, here’s a good definition taken directly from the Drupal website:

A code sprint is getting developers for a set amount of time
– usually one to two days – and just writing code. That’s it.

You’re not teaching anything. Participants will learn from others as they go,
but the focus is not on instruction. The goal is to create working software.

Essentially we just made some time and space for Drupal developers to work on outstanding issues for Drupal core or its most popular modules. But there was also room for Drupal newcomers to get to grips with the CMS and work towards becoming core contributors by completing the lessons in the Drupal Ladder.

So, in effect, we were running both an Issue Sprint and a Learn Sprint.

What did we achieve?

We are quite proud to say that our first experience of running a Drupal sprint was an amazing success:

  • There were 10 participants, 6 of whom were first time contributors;
  • We had Drupal Ladder and Sprint areas running simultaneously;
  • We consumed 3 pots of coffee, 15 bags of tea, 5 cans of Coke and 10 pizzas during the day;
  • Our concentration was interrupted just once (apart from the lunch break) by a 15-minute hailstorm;
  • We worked on 17 Issues during the day, 3 of which are RTBC (Reviewed and Tested by the Community) and 1 is Fixed (to be honest it was already fixed, we just flagged it).

lunch at Drupal Global Sprint Weekend, London 2017

All issues are tracked on the Drupal website. If you look for just those with the SprintWeekend2017 tag, you’ll see that out of the 188 issues tackled over the whole weekend, worldwide, we worked on almost 10% of them!

What’s next?

In the Issue Sprint area we tried to focus on experimental modules like Workflow, Content Moderation and Datetime Range as these need to be stable before Drupal 8.4.0-beta1 (in six months), while the guys in the Drupal Ladder room tackled some ‘Novice’ issues.

Each and every attendee left the studio asking when the next sprint would be, which makes us feel over the moon! Giving back to Drupal is important, easy and fun, so our prompt answer was always ‘Very soon!’

Many thanks to everyone who participated and supported. If you’d like to stay informed about future Drupal sprints at Manifesto, please sign up for our newsletter.

Jan 30 2017
Jan 30
January 30th, 2017

Welcome to the final episode of Season 2 of Sharp Ideas! On this episode, Randy and Doug talk to Four Kitchens directors Elia Albarran, Todd Nienkerk, and Aaron Stanush, about keeping your team happy, working ethically with clients, and how to prepare your people for the future of work.

Broadcasting directly to you from wherever the web meets business and design. You can listen to us on SoundCloud (on the site or download the app!) or find us with your other favorite podcasts on the Stitcher app.

Douglas Bigham

Doug is a writer and ex-academic with a background in digital publics and social language use. He likes dark beer, bright colors, and he speaks a little Klingon.

Jan 26 2017
Jan 26

If you ever need to modify content pages, Display Suite is a good choice. It offers a lot of flexibility without learning a brand new interface. You just use the standard “Manage display” page to select a layout and move fields into regions.

Yesterday, I presented a webinar on how to use Display Suite in Drupal 8. The webinar went for around 50 minutes and I covered the following:

  • What’s new in Drupal 8 (2:20)
  • How to set up a Display Suite layout on a view mode (8:20)
  • How to change the wrapper elements (11:03)
  • How to add custom CSS classes (14:15)
  • How to use Display Suite fields (16:10)
  • How to use the “Display Suite Switch View Mode” sub-module (29:00)
  • And finally, how to override a layout (35:47)

Extra Resources

  1. Using Display Suite in Drupal 8: How to Customize Content Pages
  2. Using Display Suite in Drupal 8: How to Use Display Suite Fields
  3. Using Display Suite in Drupal 8: How to Use Switch View Mode Sub-module
Jan 21 2017
Jan 21

Everyone loves a fast website. It’s one of the critical goals for every web developer: build a site that’s twice as fast as the previous one. And ‘BigPiping’ a website makes it super fast. BigPipe is a fundamental redesign of the dynamic web page serving system.

Typically, 80-90% of the loading time is spent on the front end, which is huge.

A few important metrics to observe are:

  • Time To First Byte (TTFB): The time between requesting the HTML page and receiving the first byte of the response. During this time the browser can’t do anything.
  • Time To Interact (TTI): Completely dependent on use case, but it’s what really matters.
  • Page load time: Total load time until loading is complete.
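As a quick way to observe TTFB and total load time yourself, curl’s timing variables can help. This is just a sketch; file:///dev/null is used here only so the example runs offline, so substitute your page’s URL:

```shell
# Print time-to-first-byte and total transfer time for a URL.
page_timing() {
  curl -o /dev/null -s -w 'TTFB: %{time_starttransfer}s  Total: %{time_total}s\n' "$1"
}

# Example (replace with your site's URL, e.g. https://example.com/):
page_timing "file:///dev/null"
```
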

Facebook’s story on Bigpipe

The basic idea is to break the web page into small chunks called pagelets and pipeline them through several execution stages. Why does Facebook load so fast? Because Facebook uses BigPipe for loading content. It loads the structure of the page in chunks, and the elements which are harder to generate arrive afterwards.

Facebook seems to load very fast, but it actually takes 5-6 seconds for everything to load. What happens is that it loads the unchanging parts first and the personalised parts (friends list, groups, pages, etc.) later.

Facebook’s homepage performance after using BigPipe

Source: Facebook

The graph shows that BigPipe reduces user perceived latency by half in most browsers.

Bigpipe in Drupal 8

In Drupal 7, or any other CMS, the web page gets comparatively slower once we add customisation or personalisation to it. With BigPipe in Drupal 8, this is no longer an issue.

This technology used to be available only to the likes of LinkedIn and Facebook. Now it’s available as a module in Drupal 8.

Facebook streams its content in so-called pagelets. Wim and Fabian (the authors of the Drupal 8 BigPipe module) implemented “auto-placeholdering” for Drupal 8, which differentiates the static sections of the page from the dynamic ones.

Here is a video comparing standard Drupal caching with BigPipe.

Source: Dries Buytaert / YouTube

If we provide the correct cacheability metadata, Drupal will be able to automatically deliver personalised parts of the page later, without you having to write a single line of code.
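On the module side, that cacheability metadata is attached to render arrays. A minimal sketch, assuming $account holds the current user entity (the tag and context values are examples for a per-user greeting):

```php
// A render array for a personalised chunk of the page. The #cache keys
// are core Render API keys; the values below are illustrative.
$build['greeting'] = [
  '#markup' => t('Welcome back, @name', ['@name' => $account->getDisplayName()]),
  '#cache' => [
    // Output varies per user, so BigPipe can placeholder it and stream it later.
    'contexts' => ['user'],
    // Invalidated whenever this user entity is updated.
    'tags' => ['user:' . $account->id()],
  ],
];
```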

Wim Leers, the author of this module explained it as follows:

“The problem we were solving: Drupal 7 can’t really cache its output because it lacks metadata for caching. It generates just about everything you see on the page on every page load, even things that don’t change. Drupal 8 does far less work when serving a page, because we brought cacheability metadata to everything. BigPipe is the most powerful way to put that cacheability metadata to use and it delivers the most significant performance improvements.

BigPipe was not possible before Drupal 8. So, no, it’s the other way around: BigPipe has changed Drupal 8 and made it the fastest Drupal yet.” - Wim Leers.

How BigPipe works

To exploit the parallelism between web server and browser, BigPipe first breaks web pages into multiple chunks called pagelets. Just as a pipelining microprocessor divides an instruction’s life cycle into multiple stages (such as “instruction fetch”, “instruction decode”, “execution”, “register write back” etc.), BigPipe breaks the page generation process into several stages:

  • Request parsing: Web server parses and sanity checks the HTTP request.
  • Data fetching: Web server fetches data from storage tier.
  • Markup generation: Web server generates HTML markup for the response.
  • Network transport: The response is transferred from web server to browser.
  • CSS downloading: Browser downloads CSS required by the page.
  • DOM tree construction and CSS styling: Browser constructs DOM tree of the document, and then applies CSS rules on it.
  • JavaScript downloading: Browser downloads JavaScript resources referenced by the page.
  • JavaScript execution: Browser executes JavaScript code of the page.

(From the Facebook Engineering blog)

I’d really like to thank Wim Leers, Fabian and the others who worked really hard on bringing this caching strategy to Drupal 8 core with the 8.1 release.

Jan 20 2017
Jan 20

Prefer a tutorial? Then read “How to Manage Media Assets in Drupal 8”.

Yesterday I presented a webinar on how to manage media assets in Drupal 8. The webinar went for just over an hour and we looked at the current state of media management in Drupal 7 and what’s new in Drupal 8.

I spent the rest of the time demonstrating how to:

Jan 18 2017
Jan 18

Magnifying glass over Google search bar

A few tweaks and modules later, Drupal makes it easy to build SEO-friendly websites. Two sides are involved in achieving this:

  • Developers and designers apply technical enhancements (making good use of core and contributed modules, writing semantic, valid HTML).
  • Clients create good content.

Below are a few things you can do to improve SEO on your website just by working with content (texts, images, files).

Text content

Title

When you create a page on a website, the page title you decide on is used in several different places so it’s important to get it right and make sure it’s clear and useful.

Page title will appear:

  • On the page (usually as h1 heading)
  • In the main menu
  • In the page URL
  • On listings linking to the page (from your site and also from social media sources)

All of the above are picked up by search engines, so it’s important to include relevant keywords in your titles.

Page titles should be clear and descriptive. If a title is too long to fit into a menu, or if you want the menu link to differ from the page title, you can use the ‘menu link title’ field.

Drupal has a feature that allows you to specify a ‘menu link title’; you can find it at the bottom of the page edit form, under “Menu settings” > “Menu link title”.

Please note, spaces in titles are converted into dashes in the URL, so do not use dashes in titles, which would produce “double-dashed” URLs. You could replace dashes with a colon instead.

Meta description tag

The meta description is the excerpt displayed under the page title and site name on the search engine’s results page. If it’s not filled in, the body copy is used instead, which may lead to a cut-off excerpt. To avoid this, fill in the meta description manually, or use the summary field.

In Drupal, the body text field on a page is accompanied by a summary field. It is important to fill this in. Sometimes it’s used on the site as a teaser to prompt users to click through and read the full copy. Remember, it will be picked up as the meta description for the page if none was added manually.

Headings

When adding or editing content to the Body, in the WYSIWYG toolbar at the top of the text field, you’ll see a dropdown with a few headings options. Commonly, you will have a choice of heading 2, heading 3, heading 4, and normal/paragraph.

When starting a new section on a page, use one of the headings defined in the dropdown. Headings are picked up by search engines and will contribute to your search rank.

Besides helping out with SEO, headings are designed to draw the reader’s eye so that they are able to find what they were looking for much easier. They are also useful for good content structure if the copy is long.

For SEO purposes you should only have the h1 tag used once on a page. H1 is commonly the page title.

Anchor Texts

This is the text that links to something else. For example, if I would like to point you to the about us page, then the anchor text is the (commonly) blue text you see.

Search engines compare the text written in the anchor to the link “behind” it, so anchor text that includes relevant keywords or phrases will add value over time.

Anchor text is read by screen readers, so it also plays an important role in complying with accessibility requirements.

Please make sure that your anchor text is also descriptive.

Length

As a general rule the copy should be as long as it needs to be. Online content is not read in the same way as printed content, so keep things concise, clear and straightforward bearing in mind the user experience, not only the SEO.

As a reference, some SEO advisers recommend around 200 words as a minimum for page copy.

Files (Images, documents)

Filenames

Filenames should follow these conventions, to eliminate technical problems and improve SEO:

  • Use full words
  • Replace spaces with dashes
  • Do not use special characters, just letters, numbers and dashes.

Filenames should also be descriptive.

Some good examples are: mobomo-logo-red.jpg, partnership-agreement-2017.pdf
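The conventions above can even be applied automatically. A small shell sketch (the function name is just an example):

```shell
# Normalise a filename: lowercase, spaces/underscores to dashes,
# strip everything except letters, numbers, dots and dashes.
slugify() {
  echo "$1" | tr '[:upper:]' '[:lower:]' | tr ' _' '--' | tr -cd 'a-z0-9.\n-'
}

slugify "Partnership Agreement 2017.PDF"   # -> partnership-agreement-2017.pdf
```
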

Reference: https://support.google.com/webmasters/answer/114016?hl=en

Alt Text

This is a descriptive text that appears if an image cannot be loaded and is also used by screen readers. So here SEO is directly implicated with Accessibility. It’s especially important if the image also acts as a link.

This text should clearly describe the image.

Filesize

The main thing you should know about files on the web: large file sizes slow down page load.

Users tend to abandon pages if the load time is greater than 3 seconds, so search engines “don’t like” to direct users to slow sites.

You can help keep page load time to a minimum by making sure the files you add are light.

A general rule is to keep image file sizes below 70KB. This is sometimes hard, especially with large images (banners, for example), so let’s say images should never be larger than 600KB.

Format

All images should be saved in jpeg format.

Documents should be saved as pdf or doc (for editable documents).

Other

404 and 403 pages

We are going to set up these pages for you, but it’s important that you fill them with content appropriate to your audience. For example, you could add a link to your homepage or to an archive/search page to help your audience find what they were looking for.

Please note these points listed above are changes you can apply without any tech support, they are just Content edits you can apply by yourself when adding / updating content for your website.

Here are a few SEO-related Drupal modules that make developers’ lives easier.

Jan 17 2017
Jan 17

The weekend of Saturday 28th January is the Drupal Global Sprint. It’s a worldwide event to help Drupal and its community improve; an opportunity for anyone who loves Drupal to contribute to core or contrib modules; while non-technical people can get involved with testing, reviewing, documentation or translations.

With sprints taking place in over 20 cities around the globe we wanted to get involved, but surprisingly couldn’t find a local sprint in London. So we’ve made our own.

We’re opening our studio to anyone in the Drupal Community who wants to participate in the Drupal Global Sprint. Everyone is welcome; if you have built a site in Drupal, you can contribute. We will split into groups and work on Drupal core issues. Bring your laptop. For new folks: you can get a head start by making an account on Drupal.org and getting some contribution tools. Developers can install git before coming and git clone Drupal 8 core.

We’ll be throwing on some pizzas and refreshments to keep brains nourished and agile.

Where and when

Manifesto, 1st Floor, 141-143 Shoreditch High Street, London E1 6JE (Map)

Saturday 28th January, 10:30 am – 5:00 pm

Sign up

Please register via Eventbrite, selecting the ticket type (Beginner, Intermediate, Experienced) which best reflects your skill level so that we’re able to better organise the sprint teams.

If you have any questions at all, drop us a line: [email protected]

Jan 16 2017
Jan 16

Join us for a FREE live webinar this week on managing media assets in Drupal 8.
Click here to save your seat!

I attended a core conversation titled “LET’S FIX FILE AND MEDIA HANDLING ONCE AND FOR ALL” at DrupalCon Prague in 2013.

This got my attention, not because the title was in all caps, but because Drupal needed to fix media management, as the title says: “ONCE AND FOR ALL”.

Let’s face it, Drupal doesn’t handle media very well when compared to other systems. I’ve worked with clients who are used to a certain level of functionality when it comes to managing images or videos on websites.

In Drupal 7 you had a few options.

You could use the Media module. But embedding images through the editor could be buggy depending on which module you’d use to implement the editor, i.e., Wysiwyg or CKEditor.

Then you have Scald, which is a pretty good module. Another module which has been around for a long time is IMCE.

However, adding media management into a Drupal site isn’t as straightforward as you think. That’s why I attended the core conversation in Prague. I too thought Drupal needed a great and robust media system.

Fast forward a couple of years since DrupalCon Prague and things have changed.

Thanks to the work from the Drupal Media team, managing media in Drupal 8 has got a lot better.

Now they are working on getting this functionality in Drupal core, which I think is absolutely amazing.

In this tutorial, I’ll show you how to set up media management in Drupal 8.

3 Parts to Handling Media in Drupal

Everyone has their own definition of media management. In this tutorial, I’m going to focus on three parts:

  1. Storing assets
  2. Embedding assets
  3. Browsing assets

I want to give users the ability to create media assets. Then have a button in the editor which they can use to browse assets and then embed them.

We’ll utilize three modules to handle this: Media Entity, Entity Embed and Entity Browser.

What’s Happened to the Media Module?

In Drupal 7, the Media module was jam-packed with functionality. In Drupal 8, a lot of that functionality has been broken out into separate modules. There is a Drupal 8 version of Media and it’ll be used to create an out-of-the-box solution, but the module doesn’t do much other than ship a bunch of configuration.

Part 1: How to Create Media Entities

To store media assets you’ll need the Media Entity module. The module itself won’t handle any media, it’s just a base entity.

So you’ll need to download other modules which are media providers. For example, if you want to handle images then download and install “Media entity image“. If you want to handle audio files you’ll need the “Media entity audio” module.

For a full list of media handlers go to the Media Entity project page.

I’m only going to focus on two types of assets in the tutorial: images and embedded videos (YouTube or Vimeo).

Let’s start by downloading the following modules:

Then install “Media entity image” and “Video embed media” (a sub-module of “Video embed field”).

Using Drush:

$ drush dl media_entity entity media_entity_image video_embed_field
$ drush en media_entity_image video_embed_media

Create Image Media Bundle

To handle images we’ll need to create a media type for images.

1. Go to Structure and click on “Media bundles”.

From this page you manage all the different media bundles. This is similar to the “Content types” page.

2. Click on “Add media bundle”.

3. Enter Image into Label, “Used for images.” into Description and select Image from the “Type provider” drop-down.

Ignore the other fields for now and scroll to the bottom and click on “Save media bundle”.

You can ignore the “Field with source information” drop-down. We’ll need to create a field and map it afterwards.

4. Now click on “Manage fields” from the Operations drop-down.

We need to create an Image field that’ll be used to store the actual image file.

5. Click on “Add field”, select Image from “Add a new field” and enter Image into the Label field.

6. Leave the “Field settings” page as is and click on “Save field settings” at the bottom.

7. Leave the Edit page as is and click on “Save settings” at the bottom.

8. Click on the Edit tab from the “Manage fields” page to edit the media bundle.

9. Make sure the “Field with source information” drop-down has selected the image field which we created and click on “Save media bundle”.

Type Provider Configuration

The “Media entity” is like any other entity type: it’s fieldable. You can add custom fields, and you can configure the form display and general display just as you do with content types.

The only difference is that we need to tell the media bundle which field will store the actual file. If you’re creating a document media bundle, then you’d create a File field and select that in “Field with source information”.

Field Mapping

The “Field mapping” section lets you store metadata from the image into custom fields. If you want to store the width, then you’ll need to create a text field and select it from the Width drop-down.

Take note: the possible metadata properties are determined by the type provider. You’d get different options if you were configuring a document media bundle.

Create Embed Video Media Bundle

Now it’s time to create another media bundle and this one will be used for embedding videos, i.e., YouTube or Vimeo videos.

1. Go back to “Media bundles” and click on “Add media bundle”.

2. Enter “Video embed” into Label, “Used for embedding videos.” into Description and select “Video embed field” from the “Type provider” drop-down.

3. Scroll to the bottom and click on “Save media bundle”.

We won’t have to create a “Video embed” field and map it across like we did for the Image bundle because “Video embed media” module did it for us.

Take note of this message:

So we’ve created our media bundles; now let’s look at how to create a media asset.

How to Create a Media Asset

At this point, you can only create assets from the Media page.

1. Go to Content and click on Media.

From this page you can add a media asset and view existing ones.

2. To create an asset just click on “Add media”.

Go ahead and create an image and an embedded video.

Part 2: How to Embed Media Entities

Creating media assets is useful, but if you can’t embed them, what’s the point of having them?

In this section we’ll embed assets directly into the editor using Entity Embed.

The Entity Embed module allows a site builder to create a button which lets an editor embed entities into a text area, hence the name Entity Embed. It can be used to embed any type of entity, not just media bundles. So be creative: you could use it to embed event content types.

To begin, download the following modules:

Using Drush,

$ drush dl embed entity_embed
$ drush en entity_embed

Create Embed Button

1. Go to Configuration and click on “Text editor embed buttons”.

2. Click on “Add embed button”.

3. Enter Media into Label, select Entity from the “Embed type” drop-down and Media from the “Entity type” drop-down.

4. Once an entity type has been chosen, you can choose which media bundles can be embedded. If none are selected, then all are available.

And finally, upload a button icon which’ll be used in the editor. The default button is just an “E”.

Use this one from the media module: http://cgit.drupalcode.org/media/plain/images/media_embed_icon.png?h=8.x-1.x

At this point you should have two buttons: the Media button which we created and the Node button that ships with the module by default.

Add Embed Button to Editor

We created the embed button, now we need to add it to the editor.

1. Go to Configuration, “Text formats and editors” and click Configure on the “Basic HTML” (or any text format) row.

2. Move the icon from the “Available buttons” into the “Active toolbar”.

From this:

To this:

Configure Filters

Next, we need to configure the filters.

We need to make sure a few things happen:

  1. Order the filters correctly or the embedding may not work
  2. Make sure the “Allowed HTML tags” list accepts the tags used by Entity Embed

Configure “Allowed HTML tags” list

As soon as we added the button to the active toolbar, the following tags should be in the “Allowed HTML tags” list:

<drupal-entity data-entity-type data-entity-uuid data-entity-embed-display 
data-entity-embed-display-settings data-align data-caption data-embed-button>

Make sure these tags are in the text field. If not, embedding media assets WILL NOT WORK.

Enable “Display embedded entities”

Enable the “Display embedded entities” filter. This is required for the embedding to work.

Confirm Order of “Align images” and “Caption images”

The Entity Embed README.txt mentions if you’re using the “Align images” and “Caption images” filters, to order “Align images” before “Caption images”.

Problem with “Restrict images to this site” Filter

The “Restrict images to this site” filter stops an image from being displayed if you embed it and select an image style.

The filter stops a user from pointing to an image which is not hosted on the site. For example, if your Drupal site is hosted at my-drupal.com, then it won’t allow you to add an image such as <img src="http://random-site.com/image.jpg" />; all your images need to be <img src="http://my-drupal.com/image.jpg" />.

There is an open issue on drupal.org about it.

The workaround for now, unfortunately, is to remove the filter.

Once everything has been configured, make sure you click on “Save configuration” at the bottom of the page.

The filters list should look like this:

How to Embed Media into a Page

Now that the “Basic HTML” text format has been configured, we should be able to embed assets.

1. Go to Content, “Add content” and click on Article.

2. Click on the embed button and a pop-up should appear with an autocomplete field.

Search for the asset using its “Media name” and click on Next.

3. Select Thumbnail from “Display as”, select an image style, align and add a caption.

Then click on Embed.

4. Once embedded you should see the image on the right with the caption.

Save the page and you’re good to go.

Embedding YouTube Videos

In the section above it was easy to embed an image. You simply chose it, selected a thumbnail size and you were done.

But if you try to embed a YouTube video using the “Video embed” bundle we created earlier, you’ll just see the video thumbnail and not an embedded player, which is not the desired result.

Create Media Bundle View Mode

The simple solution is to create a custom view mode for the “Video embed” media bundle. Let’s do this now.

1. Go to Structure, “Display modes”, “View modes” and click on “Add view mode”. Then click on Media.

2. Call this view mode Embed and click on Save.

3. Go to Structure, “Media bundles” and go to the “Manage display” page for the “Video embed” bundle.

4. Enable the Embed view mode which we just created by clicking on “Custom display settings” and selecting it, then click on Save.

5. Go to the view mode by clicking on it at the top left. Remove all the fields except “Video URL”. Make sure Video is selected from Format and “– Hidden –” from Label.

Then click on Save.

Now when you embed a video, select Embed from “Display as”. If you can’t see the new view mode, clear the site cache.
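Assuming you have Drush available, the quickest way to clear the cache is from the command line (the UI equivalent lives under Configuration, Performance, “Clear all caches”):

```shell
# Rebuild all Drupal 8 caches so new view modes and plugins are picked up.
$ drush cache-rebuild   # or its shorter alias: drush cr
```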

Part 3: How to Browse Media Entities

When we chose an asset, we were given just a single autocomplete field.

This is not ideal. You shouldn’t expect your editors to remember the asset name. It’d be better to have some sort of browser where we can see all the available media assets.

We’ll use Entity Browser to create browsing functionality and best of all, it integrates nicely with Entity Embed.

Let’s set this up now.

To begin, go download the following modules:

Using Drush,

$ drush dl entity_browser ctools
$ drush en entity_browser ctools

How to Create an Entity Browser

There are two steps involved in creating a browser using the module.

First you’ll need to create a view using a display called “Entity browser”. This view will be used to list all assets. Then you’ll need to configure an entity browser and select the created view.

Create Entity Browser View

1. Go to Structure, Views and click on “Add view”.

2. Fill out the “Add new view” form, using the values defined in Table 1-0, and click on “Save and edit”.

Table 1-0. Create a new view

| Option         | Value                                    |
| -------------- | ---------------------------------------- |
| View name      | Entity browser                           |
| Machine name   | entity_browser                           |
| Show           | Media type of All sorted by Newest first |
| Create a page  | Unchecked                                |
| Create a block | Unchecked                                |

3. Next to the Master tab click on “Add” and click on “Entity browser”.

It’s important that you select the “Entity browser” display or you won’t be able to select this view when we’re configuring the actual browser.

Let’s change the view to a table so it looks a little better.

4. Click on “Unformatted list” next to Format.

5. From the pop-up, select Table and click on Apply.

At this point we’ve switched the view from a list to a table.

Now we need to add two fields: Thumbnail and “Entity browser bulk select form”.

6. Click on Add next to Fields, add the Thumbnail field.

This will display a thumbnail of the media asset.

7. Then add the “Entity browser bulk select form”.

This field is used to select the asset when browsing. It is a required field.

8. Reorder the fields so they’re as follows:

9. Once complete the preview should look like the image below:

10. Don’t forget to click on Save.

Create Entity Browser

Now that we’ve created the view, let’s configure the browser.

1. Go to Configuration, “Entity browsers” and click on “Add entity browser”.

2. Enter “Assets browser” into Label, select iFrame from “Display plugin” and Tabs from “Widget selector plugin”.

Leave “Selection display plugin” as “No selection display”.

Then click on Next

Do not select Modal if you’re using the browser with Entity Embed; it isn’t compatible (Issue #2819871).

3. On the Display page, configure a width and height if you like, but do check “Auto open entity browser”. This will save an extra click when embedding.

Then click on Next.

4. Just click Next on “Widget selector” and “Selection display”.

5. On the Widgets page, select “Upload images” from the “Add widget plugin” drop-down. Change the Label to “Upload images”.

6. Then select View from the drop-down.

7. From the “View : View display” drop-down, select the view which we created earlier.

If you can’t see your view, make sure you selected “Entity browser” when configuring it:

8. Once configured, the Widgets page should look like this:

Configure Entity Embed to use Browser

Entity Embed now needs to be linked with the browser we created.

1. Go to Configuration, “Text editor embed buttons” and edit the embed button.

2. You should see a drop-down called “Entity browser”, select the browser you just created and click on Save.

Using the Entity Browser

Go into an article or page and click on the Entity Embed button.

You should now see a pop-up with two tabs: “Upload images” and View.

From the “Upload images” tab, you can upload a new image and it’ll create an Image media asset.

If you click on the View tab, you’ll see all the media assets.

To embed an asset, just choose which one you want and click on “Select entities”.

How do you Add a YouTube Video from the Entity Browser Page?

I haven’t figured this out yet. If you know how, leave a comment.

Summary

Adding functionality to a Drupal 8 site to handle media assets can be done and it’s fairly solid. But as you can see there’s a lot of configuration involved. Hats off to the Drupal Media team for creating a flexible suite of modules. With the “Media in Drupal 8 Initiative” in the works, things are looking very promising.

Extra Resources

FAQs

Q: I created a new view mode but can’t see it when I embed an asset.

Clear the site cache.

Q: When I embed an image and select an image style all I see is a red cross.

Disable the “Restrict images to this site” filter.

Jan 13 2017
I realize I haven't updated my blog in a while, because I've been in the thick of developing and rolling out our new site (check it out!). This involved taking our existing Drupal 7 site for FSR Magazine and migrating it to this new site, which is responsive and better catered to handle all food service news. Unfortunately, we ended up killing the Drupal 8 upgrade because it was just too complex, given our custom module codebase. D8 has a mindset change, where everything is an Entity and it takes a lot of OOP and YAML files to get things done.

I figure it would cost us hundreds of hours to upgrade to D8, and while the configuration management, BigPipe, and better caching features made it tempting, that's time and money we didn't want to spend on the upgrade.

I've talked with some folks at my local Drupal User Group (shoutout TriDUG!) and several others are in the same boat, where existing sites aren't upgrading, but rather, they're doing new sites in D8.

At some point, we'll need to do something, but we're able to do what we need with D7. If pressed, it may be easier to port to Backdrop vs. upgrade to D8.

Jan 12 2017

Yesterday I presented WebWash’s first webinar on Page Manager and Panels. I had lots of fun doing the presentation and was asked some pretty good questions at the end.

In the video above, I cover the following:

  • What’s new in Drupal 8
  • Demonstrate how to create a custom page
  • Show you how to use multiple variants
  • Demonstrate Panels IPE (in-place editor)
  • Finally, I show you how to use Bootstrap Layouts

The video goes for about 45 minutes with questions at the end.

Register for the next webinar: How to Manage Media Assets in Drupal 8.

Jan 11 2017

American Craft Council and Four Kitchens Take the Best in Biz Gold!

The American Craft Council and Four Kitchens have been named gold winners for Website of the Year in the Best in Biz Awards, the only independent business awards program judged by members of the press and industry analysts.

The American Craft Council is a national, nonprofit educational organization that has celebrated and promoted American craft for more than 75 years through its award-winning magazine, American Craft, juried fine craft shows in Baltimore, Atlanta, Saint Paul, and San Francisco, an extensive library and archives (print and digital) of craft resources, and more.

“We’re so thrilled to share this honor with Four Kitchens,” said ACC’s executive director, Chris Amundsen. “They have been a fantastic partner to work with on our website redesign. With their guidance and expertise, our new site now better serves our members and the broader craft community, and it more effectively helps us fulfill our mission to champion craft. We’ve received such a positive reception from the craft community, and now it’s wonderful to be recognized with a Best in Biz award for all we’ve achieved together.”

“Our partnership with American Craft Council has been a wonderful experience,” said Todd Ross Nienkerk, CEO and co-founder of Four Kitchens. “We’re very happy to have earned this award for such an interesting project, and I applaud the hard work of everyone at ACC and the expertise of the Four Kitchens Web Chefs who led the way.”

Winners of Best in Biz Awards 2016 were determined based on scoring from an independent panel of 50 judges from widely known newspapers, business, consumer and technology publications, TV outlets, and analyst firms.

Read the full press release here.

Do you have a project you’d like to run by us? Give us a shout!
Would you like to join an award-winning team? We’re hiring!

Lucy Weinmeister

Lucy Weinmeister is the marketing coordinator at Four Kitchens. She loves to share all the new and exciting things the Web Chefs are cooking up at 4K. She is forever reading a book.

Jan 11 2017

In September last year I gave a talk at DrupalCon in Dublin, on the topic of offline first. I wanted to reflect a little on that experience.

The topic covered how modern browsers allow us to build websites to work better under poor or non-existent network conditions. I chose this because my previous experience writing native mobile apps has given me some insight into the issues with mobile connectivity. Until recently, that’s something that native apps have generally been better at addressing than the web, but now that’s changing.

I wanted to talk for a number of reasons. Firstly, I felt by sharing I could contribute to the Drupal community at large. This is a relatively new area, one in which people are still figuring out opportunities, particularly how to apply it to Drupal. Secondly, I hoped that by stretching myself there would be a sense of personal and career development for myself.

When I received the email informing me my talk had been accepted I was really excited. I hadn’t spoken at this size of event before, so I wasn’t expecting my proposal to be picked. My aim had been to build up my speaking experience and perhaps try for something like this the following year!

If you want to speak at a conference, you do need to start off at a smaller event. DrupalCon itself asks for speakers to have had previous experience, and ideally you want to be giving an established talk you’re confident on rather than a brand new one!

If you’re based near London, Drupal Show and Tell is an ideal place to start - 3 short talks once a month, about 15-20 minutes each. They are always looking for new speakers too! It’s friendly and informal, a nice opportunity to test out a topic without the huge commitment of a larger event.

I first spoke there in May. I came away feeling like the subject was received well, but also that the talk needed restructuring. A useful exercise, as these are the kind of things you only find out by putting yourself in front of a real audience.

I then worked on the material, turning it into a longer, 35-40 minute session, which I presented at the Brighton and Bristol DrupalCamps. It’s easy to underestimate how much effort goes into such a talk. The abstract itself takes quite a lot of time to prepare, but it’s what delegates will use to decide whether to attend or not. You want to get this right - who are the people that will most benefit from the talk? What experience do they need to have beforehand, in order to learn something new?

If you aren’t that experienced in public speaking, get some help! One of the benefits to being based at The Skiff co-working space is the community, and I was fortunate enough to attend a public speaking workshop run by Steve Bustin. Public speaking isn’t something that comes naturally to me, so I learnt a lot and was certainly out of my comfort zone!

Leading up to giving a talk, I’d recommend you run though it as often as you can, preferably with an audience. Local companies will often be willing to have you give it to their staff - you get the practice and they get a conference talk for free. In my case, The Unit in Brighton were willing to be guinea pigs.

Some practicalities at the conference itself: firstly, pace yourself during the event and do something relaxing the night before. Secondly, get to the venue early and spend some time in the room while it’s quiet. It will take away any unfamiliarity of the room, and also you’ll have plenty of time to deal with any technical problems! I had some projector issues, but the conference staff were excellent at sorting them out.

I really enjoyed the talk itself. I felt prepared, something that was probably the biggest contributor to how well it went. People in the audience asked some good questions, and a nice little spontaneous discussion happened straight after.

A couple of things weren’t so good. Firstly, I wished there had been a more diverse audience. This is something that we as an industry have to get better at, and I’m glad work is being done in the Drupal community to help. Secondly, DrupalCon has an anonymous feedback mechanism for talks. My feedback was generally positive, but it still left me with thoughts of what I could have done better. After the talk, when the adrenaline has gone, it’s easy to feel quite vulnerable.

Overall though, I’m really glad I did it. I learnt a lot from the experience and developed personally as a result. I’d love to talk again, perhaps jointly with someone else next time.

Jan 10 2017

Drupal is an open source project and really depends on its community to move forward. It is all about getting to know the CMS, spreading the knowledge and contributing to projects.
I will give you some ways to get involved; even if you are not a developer, there is a task for you!

A group of Drupal mentors at DrupalCon 2016 in Dublin

Drupal Mentors – DrupalCon Dublin 2016 by Michael Cannon is licensed under CC BY-SA 2.0

Participating in user support

Sharing your knowledge with others is very important to the community: it is a nice thing to do and you might also learn some things by doing so. Whatever your skill level, you can give back to the community with online support. There are many places where you can give support starting with the Support Forums. You can also go to Drupal Answers which is more active than the forums or subscribe to the Support Mailing list. If you prefer real-time chat, you can also join #drupal-support channel on IRC or the Slack channels.

Helping out on documentation

Community members can write, review and improve different sorts of documentation for the project: community documentation on drupal.org, programming API reference, help pages inside the core software, documentation embedded in contributed modules and themes etc.
Contributing is a good way to learn more about Drupal and share your knowledge with others. Beginners are particularly encouraged to participate as they are more likely to know where documentation is lacking.
If you are interested, check out the new contributor tasks for anyone and writers.

Translating Drupal interface in your own language

The default language for the administration interface is English but there are about 100 available languages for translations. There is always a need for translations as many of these translation sets are incomplete or can be improved for core and contributed modules.
All translations are now managed by the translation server. If you are willing to help, all you have to do is log in to drupal.org and join a language team. There is even a video to learn how the translation system works, as well as documentation.

You can also help to translate documentation into your language. Most language-specific communities have their own documentation so you should get in touch with them directly. To learn more, see the dedicated page.

Improving design and usability

The idea is to improve usability, especially in the Drupal 8 administration interface. The focus is mainly on content creation and site building. The community has done a lot of research to understand the problems that users run into and how the new improvements perform. The purpose is also to educate developers and engage designers in order to grow the UX team. You can visit the Drupal 8 UX page for more details and join the usability group.

Writing a blog post about Drupal

Writing a blog post about Drupal is a good way to share your knowledge and expertise. There are many subjects to explore, technical or not: talking about a former project you developed or writing a tutorial, telling about the state of a version or sharing about an event you attended… And if you are lucky enough your post can be published on the Weekly Drop, the official Drupal newsletter!

Don’t forget to reference your blog post on Planet Drupal, this platform is an aggregated list of feeds from around the web which shares relevant Drupal-related knowledge and information.

You can also find our Drupal related blog posts on the Liip blog.

Testing core and modules

Testing Drupal projects is necessary to make the platform stable and there are many things to test! If you have a technical background, you can help to review patches or to write unit tests.
If you are non-technical, you can provide feedback about the usability of the administration interface that will help improve the user experience. Follow the process to give proper feedback.

Contributing to development

There are many ways to contribute code in core and “contrib” projects such as modules or themes.
You can first help to improve existing projects by submitting patches. This would be the natural thing to do when you work with a module and notice a bug or a missing feature: search the corresponding issue queue to see if the problem has been noticed before. If not, post a message explaining the issue and add a snippet of code if you found a potential fix. Then you can create a patch and submit it to the issue queue.
You can also contribute to new projects by creating your very own module or theme or create a sandbox for more experimental projects.
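The patch workflow above can be sketched with Git. Everything below (module name, branch names, issue number, file) is hypothetical and purely for illustration; in practice you would clone the real project from git.drupalcode.org instead of creating a repository from scratch.

```shell
# Local sketch only: set up a stand-in "module" repository.
mkdir -p examplemodule && cd examplemodule
git init -q
git config user.email "you@example.com"
git config user.name "Example User"
git checkout -q -b 8.x-1.x
echo "original code" > examplemodule.module
git add . && git commit -q -m "Initial commit"

# Make the fix on a topic branch named after the drupal.org issue:
git checkout -q -b 1234567-fix-bug
echo "fixed code" > examplemodule.module
git commit -q -am "Issue #1234567: Fix the bug"

# Generate a patch against the base branch, roughly following the
# [module]-[short-description]-[issue-number]-[comment-number].patch convention:
git diff 8.x-1.x > examplemodule-fix-bug-1234567-2.patch
```

The resulting .patch file is what you attach to the issue on drupal.org.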

Attending events

The Drupal association organizes many events all around the world to promote the CMS and gather the community.

Some of the biggest events are the DrupalCons. A DrupalCon gathers thousands of people and lasts about one week, including 3 full days of conferences. These conferences cover many topics: site building, user experience, security, content authoring etc. You can also join sprints to contribute to Drupal projects and social events to meet the members of the community. Check out our report about DrupalCon Barcelona 2015!

“Drupal Dev Days” conferences occur once a year and gather developers to discuss and present topics technically relevant to the community. You can join sprints, intensive coding sessions and technical conferences.

You can also join DrupalCamps to meet your local community. These events last one or two days and focus on sharing knowledge amongst the community. You can attend conferences and sprints.

There are also many Drupal meetups which are free events happening in many cities in the world. Presentations and discussions finish around nice drinks and appetizers.

Sponsoring events

The community holds conventions and meetups in many countries, and being a sponsor will not only help Drupal development but will also make you noticeable within the community. There are different levels of sponsorship, offering everything from mentions on social media to advertising online and at the exhibition space of the event. All you have to do is get in touch with the event organizers. By the way, Liip will sponsor the Drupal Mountain Camp in Davos this year!

Offering a donation

You can give donations to the Drupal association through the website in order to support drupal.org infrastructure and maintenance, worldwide events such as Drupalcons. The donations are either in Euros or Dollars.

You can also become a member of the Drupal Association for the same purpose, as an individual member or an organization member. The minimal fees are 15 Euros. Find more information about membership on drupal.org.

Conclusion

Drupal projects are constantly improving thanks to passionate volunteers who work on many subjects: development, documentation, marketing, event organization, support… There is for sure a task that will suit you, and it only takes a small time commitment to make changes.
So join the great Drupal community and start getting involved!

Jan 09 2017

Join us for a FREE live webinar this week on Page Manager and Panels in Drupal 8.

Click here to save your seat!

Panels has always been my go-to module when it comes to building custom pages in Drupal 7.

Now in Drupal 8 things have changed.

A lot of what Panels did in Drupal 7 has been moved over to Page Manager. Panels itself doesn’t offer a user interface; it is just a variant type in Drupal 8. Also, Page Manager is now its own project, whereas in Drupal 7 it was part of the Ctools module.

Panels in Drupal 8 integrates with Page Manager and offers a custom variant type which allows you to select different layouts and manage blocks in the layouts. On its own, Panels doesn’t really do anything, you need something like Page Manager to utilize it.

So with that being said, what can Page Manager do?

The module can be used to create arbitrary landing pages such as a homepage or category landing pages. Then Panels can be used to select a layout and add blocks to layout regions.

In this tutorial, we’ll create a homepage which displays a different layout depending on whether a user is logged in or not.

Getting Started

Before we can begin, go download the following modules:

Then install Panels and Page Manager UI.

Drush:

$ drush dl panels layout_plugin ctools page_manager
$ drush en panels page_manager_ui

Drupal Console:

$ drupal module:download panels --latest
$ drupal module:download layout_plugin --latest 
$ drupal module:download ctools --latest 
$ drupal module:download page_manager --latest 
$ drupal module:install panels 
$ drupal module:install page_manager_ui

Create Custom Homepage

Let’s begin this tutorial by first creating a custom homepage.

1. Go to Structure, Pages and click on “Add page”.

2. Enter Homepage into “Administrative title” and “homepage” into Path.

3. From the “Variant type” drop-down select Panels. This is where Panels integrates with “Page manager”. Leave everything as is and click on Next.

If you can’t see Panels in the drop-down make sure you’ve installed Panels.

4. On the “Configure variant” page, select Standard from Builder and click on Next.

5. On the Layout page, select “Two column” and click on Next.

If you can’t see layouts in the drop-down try rebuilding the site cache.

6. From the Content page, you can select which blocks appear in which region. Just click on Finish and we’ll configure it in the next section.

Now we’ve completed the “Add page” wizard.

Let’s now customize the layout.

7. Click on Content on the left and from here you can add blocks to regions and change the page title.

8. To add blocks, just click on “+ Add new block”, click on “Recent comments”, select “Left side” from Region and click on “Add block”.

9. Again, click on “+ Add new block”, click on “Recent content”, select “Right side” from Region and click on “Add block”.

Finally, enter “Custom homepage” into “Page title”.

10. Scroll to the bottom and click on “Update and save”.

Set Page as Front Page

Once you’ve saved the page, go directly to “/homepage” and you should see the page with the two blocks on each side.

Now let’s set this page as the front page.

1. Go to Configuration, “Basic site settings” and change the “Default front page” to “/homepage”. Don’t forget the forward slash. This is new in Drupal 8.
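If you prefer the command line, the same setting can be changed with Drush; a minimal sketch, assuming Drush is installed and run from the site root:

```shell
$ drush config-set system.site page.front /homepage -y
```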

2. Now if you go to the homepage you should see the new layout with the two columns.

Page Variants

One thing I loved about Panels in Drupal 7 was the ability to create different page variants. In Drupal 8, Page Manager has taken over this functionality.

So what are variants?

Variants let you configure multiple layouts on a single page, with selection criteria determining which variant should be displayed.

For example, we’ll configure the homepage to display a different variant depending on whether the user is logged in. The correct variant is displayed based on the configured selection criteria.

Let’s configure the homepage now.

1. Go to Structure, Pages and click Edit on the Homepage row.

We already have a variant called Panels. Let’s change it so it’s only visible to authenticated users (logged in users).

2. Click on General under the Panels variant and change the Label to “authenticated user”. Then click on “Update and save”.

3. Click on “Selection criteria”, select “User Role” and click “Add condition”.

4. Check “Authenticated user” under “When the user has the following roles” and click Save. Again, click on “Update and save”.

5. Click on Content and change the “Page title” to “authenticated user”. We’ll do this so we know for sure that the right variant is selected.

6. Now click on “Update and save”.

At this point, the variant will only be visible to authenticated users. If you access it as an anonymous user you’ll get a “page not found” error.

Add Variant

Now we need to create another variant for anonymous users.

1. Go back into the homepage edit page and click on “Add variant” in the top right.

2. Enter Default into Label, select Panels from Type and click on Next.

We won’t need to configure any “Selection criteria” because this should be used as the default homepage. When building sites with variants you should have a default variant ordered last, which will be used if no selection criteria return true.

3. On the “Configure variant” page, just click on Next.

4. On the Layout page, select “Two column” from the Layout drop-down and click on Next.

5. Leave the Content page as is and click on Finish.

6. Click on Content and add the “Recent comments” and “Recent content” blocks like we did before.

7. Add “Default homepage” into the “Page title” field so we know the right variant is picked up.

Now test the homepage with two browsers: one as an anonymous user and another as an authenticated user.

Order of Variants

The order of the variants is important. Any variant that has no selection criteria will always return true and be displayed. Ones with selection criteria should be ordered above ones without any.

So in our case, Page Manager will first check the selection criteria for the authenticated user variant. If the user viewing it is logged in, it’ll return true and display the variant. If not, Page Manager will go to the next variant, check its selection criteria, and continue until one returns true.

If no variant returns true, then the user will get a “page not found”.

Variants can be reordered by clicking on “Reorder variants” in the top right corner.

Summary

As you’ve seen, Page Manager and Panels are great for creating custom pages. The page variant functionality alone is worth installing and using the modules. If this is the first time you’re using either module give yourself a bit of time to get up-to-speed with them. They’re powerful, but if you’ve never used them before they can be a bit complicated.

FAQ

Q: Can’t see Panels in the drop-down?

Make sure you’ve downloaded and installed Panels.

Jan 09 2017

Today, I started looking at some of the proposals to include layouts within Drupal core from version 8.3 onwards.

This initiative aims to take the functionality that currently exists for laying out blocks and regions, and to use it for displaying other things, such as content entity view and form modes.

Some of this work started life in contrib, in the layout plugin module. Although this module is still in alpha status, both the panels and display suite modules use it. Those modules can, therefore, share layouts. However, this module seems to be a stepping stone for what will eventually end up as a core module. Somewhat confusingly, it has a different name.

I’ve decided to focus only on two small modules, either in or planned for Drupal core:

Layout discovery module

Layout discovery is currently in Drupal 8.3 as an experimental module.

This is a very simple API module that allows for the discovery of layouts provided by other modules. It replaces the Layout plugin module mentioned above.

Providing your own layouts is pretty straightforward and documented. The most basic use case is a YAML file that defines a layout and its regions, along with a corresponding Twig template. More complicated things can be done too: a dynamic layout builder could provide layout definitions to be discovered by this module as well, most likely by implementing a deriver class.

I was able to create a very simple layout with ease:

two_column:
  label: Two column
  category: Erik's layouts
  template: templates/two-column
  regions:
    main:
      label: Main content
    secondary:
      label: Secondary content

The template is equally easy: just put the markup you want for the layout and then refer to {{ content.main }} and {{ content.secondary }} in the appropriate places.
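As a sketch, a matching templates/two-column.html.twig for the layout above might look like this (the wrapper markup and class names are my own, not prescribed by the module):

```twig
{# templates/two-column.html.twig: markup for the two_column layout. #}
<div class="two-column">
  <div class="two-column__main">
    {{ content.main }}
  </div>
  <div class="two-column__secondary">
    {{ content.secondary }}
  </div>
</div>
```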

Field layout module

Field layout is a proposed new module, not yet added to Drupal core. [Update: as of 26 Jan this is now part of Drupal core]

This alters the manage display and manage form display settings forms. Currently, a Drupal site builder can use these forms to control the order in which fields are displayed. If you want to do anything more involved, you need to write a Twig template for a particular display. The field layout module enhances this, allowing the site builder to choose a predefined layout and populate its regions with fields.

Think of it as a cut down version of display suite.

[Screenshot: the manage display form with a two column layout enabled, showing left and right regions for arranging fields.]

Rendering

I studied the field layout module to see how it works, and how I might use layouts in other settings. It turns out rendering a layout programmatically is quite straightforward. To use the two_column layout defined above, my render array would look like this:

$output = [
  '#theme' => 'layout__two_col',
  'main' => [ /* render array for main content */ ],
  'secondary' => [ /* render array for secondary content */ ],
];

I think this is going to be really useful to have in Drupal core.

Jan 06 2017

On this episode of Sharp Ideas, Randy and Doug talk to Jen Lampton, co-founder of Backdrop CMS, about changes in the Drupal community, the importance of open source, and how to make sure we’re hearing a diversity of voices in our projects.

Broadcasting directly to you from wherever the web meets business and design. You can listen to us on SoundCloud (on the site or download the app!) or find us with your other favorite podcasts on the Stitcher app.

Douglas Bigham

Doug is a writer and ex-academic with a background in digital publics and social language use. He likes dark beer, bright colors, and he speaks a little Klingon.

Jan 05 2017

As a follow-up to my previous blog post about the usage of the Migrate API in Drupal 8, I would like to give an example of how to import multilingual content and translations in Drupal 8.

Prepare and enable translation for your content type

Before you can start, you need to install the “Language” and “Content Translation” modules. Then head over to “admin/config/regional/content-language” and enable translation for the node type or the taxonomy vocabulary you want to be able to translate.
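Both modules can also be enabled from the command line; a minimal sketch using Drush (the machine names are language and content_translation):

```shell
$ drush en language content_translation -y
```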

As a starting point for setting up the migrate module, I recommend my blog post mentioned above. To import data from a CSV file, you also need to install the migrate_source_csv module.

Prerequisites for migrating multilingual entities

Before you start, please check the requirements. You need at least Drupal 8.2 to import multilingual content. We need the destination option “translations”, which was added in a patch in Drupal 8.2. See the corresponding drupal.org issue here.

Example: Import multilingual taxonomy terms

Let’s do a simple example with taxonomy terms. First, create a vocabulary called “Event Types” (machine name: event_type).

Here is a simplified dataset:

Id  Name     Name_en
1   Kurs     Course
2   Turnier  Tournament

You may save this as a CSV file (data.csv):

Id;Name;Name_en
1;Kurs;Course
2;Turnier;Tournament

The recipe to import multilingual content

As you can see, the example data contains the base language (“German”) and also the translations (“English”) in the same file.

But here comes a word of warning:

Don’t try to import the term and its translation in one migration run. I am aware that there are some workarounds with post-import events, but these are hacks and you will run into trouble later.

The correct way of importing multilingual content is to:

  1. Create a migration for the base language and import the terms / nodes. This will create the entities and their fields.
  2. Then, with an additional dependent migration for each translated language, add the translations for the fields you want.

In short: You need a base migration and a migration for every language. Let’s try this out.

Taxonomy term base language config file

In my example, the base language is “German”. Therefore, we first create a migration configuration file for the base language:

This is a basic example of migrating a taxonomy term in my base language ‘de’.

Put the file into <yourmodule>/config/install/migrate.migration.event_type.yml and import the configuration using the drush commands explained in my previous blog post about Migration API.

id: event_type
label: Event Types
source:
  plugin: csv
  # Full path to the file. Is overridden in my plugin.
  path: public://csv/data.csv
  # The number of rows at the beginning which are not data.
  header_row_count: 1
  # These are the field names from the source file representing the key
  # uniquely identifying each node - they will be stored in the migration
  # map table as columns sourceid1, sourceid2, and sourceid3.
  keys:
    - Id
ids:
  id:
    type: string
destination:
  plugin: entity:taxonomy_term
process:
  vid:
    plugin: default_value
    default_value: event_type
  name:
    source: Name
    language: 'de'
  langcode:
    plugin: default_value
    default_value: 'de'
# Absolutely necessary if you don't want an error.
migration_dependencies: {}

Taxonomy term translation migration configuration file:

This is the example file for the English translation of the name field of the term.

Put the file into <yourmodule>/config/install/migrate.migration.event_type_en.yml and import the configuration using the drush commands explained in my previous blog post about Migration API.

id: event_type_en
label: Event Types english
source:
  plugin: csv
  # Full path to the file. Is overridden in my plugin.
  path: public://csv/data.csv
  # The number of rows at the beginning which are not data.
  header_row_count: 1
  keys:
    - Id
ids:
  id:
    type: string
destination:
  plugin: entity:taxonomy_term
  translations: true
process:
  vid:
    plugin: default_value
    default_value: event_type
  tid:
    plugin: migration
    source: id
    migration: event_type
  name:
    source: Name_en
    language: 'en'
  langcode:
    plugin: default_value
    default_value: 'en'
# Absolutely necessary if you don't want an error.
migration_dependencies:
  required:
    - event_type

Explanation and sum up of the learnings

The key lines in the migrate configuration for importing multilingual content are the following:

destination:
  plugin: entity:taxonomy_term
  translations: true

These configuration lines instruct the migrate module that a translation should be created.

tid:
  plugin: migration
  source: id
  migration: event_type

This is the real secret. Using the migration process plugin, we maintain the relationship between the term and its translation. The wiring via the tid field makes sure that the Migrate API will not create a new term with a new term ID. Instead, the existing term will be loaded and the translation of the migrated field will be added. And that’s exactly what we need!
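Once both configurations are imported, the migrations can be run in order; a sketch, assuming you use the migrate_tools module for its Drush commands (the base migration must run first):

```shell
$ drush migrate-import event_type
$ drush migrate-import event_type_en
```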

Now go ahead and try to create a working example based on my explanation. Happy Drupal migrations!

Dec 29 2016

Ecommerce has come to mean much more than simply offering goods for sale online.  People don’t just consume web content anymore; they interact with it. They expect their digital experience to be engaging, personal, and seamless, no matter what device they use.

Hence the rise of the Digital Experience Platform (DXP). In most cases, a DXP is a collaboration of software products (owned by different companies) that are designed to integrate with each other to cover these bases:

  • Storage of all data (including images, content, product specs, and customer data) in one centralized location
  • Content management for delivering content and building customer journeys
  • Data delivery for all your business logic and service integrations
  • Personalization based on user, application, and device being used
  • Marketing automation, testing, and reporting

In our opinion, Drupal has long been a DXP. Drupal started as a world-class content management system; with the addition of Drupal Commerce several years ago, it became one of the first true DXPs. Drupal Commerce was never a stand-alone ecommerce platform -- it was built onto the content management system using the same architecture. And it was open source, so it could integrate with virtually any other service. This was powerful stuff.

But Drupal did have a bit of a weak spot in the DXP framework: the area of point of sale (POS). Drupal had a simple POS module that worked, but in most cases integration with a 3rd-party POS was required to cover all facets of the traditional POS functionality that so many other companies provide.

Until now!

For the past year, Acro Media has been developing an improved version of the Drupal POS with a full list of features and new functionality being added regularly.

There are major benefits to a POS that integrates seamlessly into the Drupal and Drupal Commerce framework:

Accessibility
Connect to your business information from anywhere and from any device. Access reports, troubleshoot issues, and check on any location without needing to be in-store.

Real-Time Data
Access up-to-the-minute sales and inventory data from any POS location without having to wait for the end-of-day consolidation.

Single Database
Store your inventory, content, product info, pricing, customer data, and digital assets in one system.    

Open Source
Control virtually every aspect of the source code and customize it to suit your needs.

Scalability
React quickly to change and expand your business model by adding stores and distribution centers and taking advantage of multichannel retail opportunities.

With the Drupal POS system, we believe Drupal is even more of a Digital Experience Platform than it once was. Drupal offers true content management, a powerful commerce engine, and now a complete point-of-sale system that will seamlessly align your digital experience across all channels.

Dec 23 2016

On this episode of Sharp Ideas, Doug and Randy are joined from the basement of BADCamp X by Jon Peck and Heather Rodriguez.

Recorded on-site at BADCamp 2016, we’re talking the history and principles of BADCamp (the Bay Area Drupal Camp), the importance of human diversity in the tech world, the values and ethics of the open source movement, and staying aware of imposter syndrome when you’re giving back to your community.

Broadcasting directly to you from wherever the web meets business and design. You can listen to us on SoundCloud (on the site or download the app!) or find us with your other favorite podcasts on the Stitcher app.



About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
