Jul 11 2018

Someone recently asked the following question in Slack. I didn’t want it to get lost in Slack’s history, so I thought I’d post it here:

Question: I’m setting a CSS background image inside my Pattern Lab footer template which displays correctly in Pattern Lab; however, Drupal isn’t locating the image. How is sharing images between PL and Drupal supposed to work?

My Answer: I’ve been using Pattern Lab’s built-in data.json files to handle this lately. e.g. you could do something like:

footer-component.twig:

...
{% set footer_background_image = footer_background_image|default('/path/relative/to/drupal/root/footer-image.png') %}
...

This makes the image load for Drupal, but fails for Pattern Lab.

At first, to fix that, we used the footer-component.yml file to set the path relative to PL. e.g.:

footer-component.yml:

footer_background_image: /path/relative/to/pattern-lab/footer-image.png

The problem with this is that on every Pattern Lab page where we included the footer component, we had to add that line to the page’s yml file. e.g.:

basic-page.twig:

...
{% include '/whatever/footer-component.twig' %}
...

basic-page.yml:

...
footer_background_image: /path/relative/to/pattern-lab/footer-image.png
...

Rinse and repeat for each example page… That’s annoying.

Then we realized we could take advantage of Pattern Lab’s global data files.

So with the same footer-component.twig file as above, we can skip the yml files, and just add the following to a data file.

theme/components/_data/paths.json: (* see P.S. below)

{
  "footer_background_image": "/path/relative/to/pattern-lab/footer-image.png"
}

Now, we can include the footer component in any example Pattern Lab pages we want, and the image is globally replaced in all of them. Also, Drupal doesn’t know about the json files, so it pulls the default value, which of course is relative to the Drupal root. So it works in both places.
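To make the pattern concrete, here is a sketch (not from the original thread) of how footer-component.twig might consume that variable as an inline background image — the `footer` wrapper class and the `content` variable are hypothetical:

```twig
{# footer_background_image falls back to the Drupal-relative path;
   Pattern Lab overrides it via the global _data/paths.json file. #}
{% set footer_background_image = footer_background_image|default('/path/relative/to/drupal/root/footer-image.png') %}
<footer class="footer" style="background-image: url('{{ footer_background_image }}');">
  {{ content }}
</footer>
```

The key is that the `default` filter only fires when the variable is undefined — which it is in Drupal, and isn’t in Pattern Lab once the JSON data file supplies it.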

We did this with our icons in Emulsify:

_icon.twig

paths.json

End of the answer to your original question… Now for a little more info that might help:

P.S. You can create as many json files as you want here. Just be careful you don’t run into name-spacing issues. We accounted for this in the header.json file by namespacing everything under the “header” array. That way the footer nav doesn’t pull our header menu items, or vice versa.
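As a sketch of that namespacing idea (the menu items below are invented for illustration), header.json might look like this, with the component referencing `header.menu.items` rather than a bare top-level `menu`:

```json
{
  "header": {
    "menu": {
      "items": [
        { "title": "About", "url": "/about" },
        { "title": "Blog", "url": "/blog" }
      ]
    }
  }
}
```

A footer.json file would then nest its own items under a "footer" key, and the two components can never collide.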

example homepage home.twig that pulls menu items for the header and the footer from data.json files

header.json

footer.json

Web Chef Brian Lewis

Brian Lewis is a frontend engineer at Four Kitchens, and is passionate about sharing knowledge and learning new tools and techniques.

Web Chef Dev Experts
Development

Blog posts about backend engineering, frontend code work, programming tricks and tips, systems architecture, apps, APIs, microservices, and the technical side of Four Kitchens.

Read more Development
Mar 08 2018

This post was originally published on May 22, 2013 and last updated March 8, 2018 thanks to some helpful input by Steve Elkins.

Drupal 7 is a powerhouse at combining CSS & JS files. This can be an easy page performance and optimization win, but if used incorrectly it can do the complete opposite. In this post, we’ll go over how to load JS & CSS files based on conditionals like URL, module, node, views and more.

Before we dive in, get somewhat familiar with the drupal_add_js and drupal_add_css functions. We’ll use these to load the actual JS and CSS files.

hook_init – runs on every page

/**
 * Implements hook_init()
 *
 * @link https://api.drupal.org/api/drupal/modules%21system%21system.api.php/function/hook_init/7.x
 */
function HOOK_init() {

  // Using the equivalent of Apache's $_SERVER['REQUEST_URI'] variable to load based on URL
  // @link https://api.drupal.org/api/drupal/includes!bootstrap.inc/function/request_uri/7
  if (request_uri() === 'your-url-path') {
    drupal_add_js( /* parameters */ );
    drupal_add_css( /* parameters */ );
  }
}

Using hook_init is one of the simplest methods to load specific JS and CSS files (don’t forget to replace HOOK with the theme or module machine name).

Be careful: this method gets run on every page, so it’s best used only when you actually need to check every page for your conditional. A good example: loading module-wide CSS and JS files. A bad example: loading node-specific CSS and JS files. We’ll go over that next.

There’s also a similar preprocess function, template_preprocess_page, you could use, but it too gets run on every page and is essentially the same as hook_init.
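For comparison, a template_preprocess_page implementation would be nearly identical — this is a sketch using the same placeholder URL as above (replace TEMPLATE with your theme’s machine name):

```php
/**
 * Implements template_preprocess_page().
 *
 * Like hook_init(), this runs on every page render.
 */
function TEMPLATE_preprocess_page(&$vars) {
  if (request_uri() === 'your-url-path') {
    drupal_add_js( /* parameters */ );
    drupal_add_css( /* parameters */ );
  }
}
```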

template_preprocess_node – runs on node pages

/**
 * Implements template_preprocess_node()
 *
 * @link https://api.drupal.org/api/drupal/modules%21node%21node.module/function/template_preprocess_node/7.x
 */
function TEMPLATE_preprocess_node(&$vars) {
  // Add JS & CSS by node type
  if ($vars['type'] == 'your-node-type') {
    drupal_add_js( /* parameters */ );
    drupal_add_css( /* parameters */ );
  }
   
  // Add JS & CSS to the front page
  if ($vars['is_front']) {
    drupal_add_js( /* parameters */ );
    drupal_add_css( /* parameters */ );
  }
   
  // Given an internal Drupal path, load based on the node's path alias.
  if (drupal_get_path_alias("node/{$vars['node']->nid}") == 'your-node-alias') {
    drupal_add_js( /* parameters */ );
    drupal_add_css( /* parameters */ );
  }
}

Using template_preprocess_node is perfect when loading JS and CSS files based on nodes (don’t forget to replace TEMPLATE with the theme machine name). Since it only gets run on nodes, it’s great to use when you want to load CSS and JS files on specific node types, front pages, node URLs, etc.

template_preprocess_views_view – runs every view load

/**
 * Implements template_preprocess_views_view()
 *
 * @link https://api.drupal.org/api/views/theme%21theme.inc/function/template_preprocess_views_view/7.x-3.x
 */
function TEMPLATE_preprocess_views_view(&$vars) {
  // Get the current view info
  $view = $vars['view'];

  // Add JS/CSS based on view name
  if ($view->name == 'view_name') {
    drupal_add_js( /* parameters */ );
    drupal_add_css( /* parameters */ );
  }

  // Add JS/CSS based on current view display
  if ($view->current_display == 'current_display_name') {
    drupal_add_js( /* parameters */ );
    drupal_add_css( /* parameters */ );
  }
}

Using template_preprocess_views_view is useful when loading JS and CSS files when a particular view is being used (don’t forget to replace TEMPLATE with the theme machine name).

Helpful Methods for Conditionals

Here’s a few helpful Drupal methods you can use for your conditionals. Have one you use often? Let me know in the comments below.

  • request_uri – Returns the equivalent of Apache’s $_SERVER['REQUEST_URI'] variable.
  • drupal_get_path_alias – Given an internal Drupal path, return the alias set by the administrator.
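Putting those two helpers together, a hook_init() conditional that targets a specific path alias might look like the following sketch — the 'about-us' alias, theme name, and CSS path are placeholders:

```php
/**
 * Implements hook_init(); sketch combining the helpers above.
 */
function HOOK_init() {
  // current_path() returns the internal path (e.g. 'node/42');
  // drupal_get_path_alias() resolves it to the admin-defined alias.
  if (drupal_get_path_alias(current_path()) === 'about-us') {
    drupal_add_css(drupal_get_path('theme', 'mytheme') . '/css/about.css');
  }
}
```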

Looking like a foreign language to you?

Not a developer, or just lost looking at the code snippets above? Shoot me a question in the comments below, or give these ‘plug-and-play’ modules a try for a GUI alternative:


Author: Ben Marshall

Red Bull Addict, Self-Proclaimed Grill Master, Entrepreneur, Workaholic, Front End Engineer, SEO/SM Strategist, Web Developer, Blogger

Feb 21 2018

Selected sessions for Drupalcon Nashville have just been announced! Mediacurrent will be presenting seven sessions and hosting a training workshop.

From exploring new horizons in decoupled Drupal to fresh perspectives on improving editorial UX and achieving GDPR compliance, check out what the Mediacurrent team has in store for Drupalcon 2018:
 

Speakers: Matt Davis, Director of Emerging Technology at Mediacurrent and Jeremy Dickens, Senior Drupal Developer at The Weather Company / IBM
Session Track: Horizons

During the course of an ongoing decoupling project for weather.com, the team found that the lack of page configurability was a distinct pain point for site administrators and product owners. To meet this challenge, the weather.com team built Project Moonracer, a Drupal 8-based solution that allowed for the direct modification of page configuration on a completely decoupled front-end by developing a unique set of data models to move page configuration back into the hands of the site owners.  

Takeaways:

  • Gain a greater understanding of the decoupled UI problem space as a whole
  • See specific API and UI considerations and lessons learned from our experience
  • Catch a glimpse into some possible futures of editorial interfaces in an increasingly decoupled world

Speaker: Bob Kepford, Lead Drupal Architect at Mediacurrent
Session Track: Back End Development 

Wouldn’t it be nice if you could type one command that booted your vagrant box, started displaying watchdog logs, set up the correct Drush alias, and provided easy access to your remote servers? Or maybe you use tools like Grunt, Gulp, or Sass. What if you could launch all of your tools for a project with one command? In this session, attendees will see how to use the terminal every day to get work done efficiently and effectively.

You’ll learn:

  • How to use free command line applications to get work done.
  • How to better use the command line tools you already know.
  • How to customize your command line to behave the way you want it to. I guarantee attendees will walk away with at least one new tip, trick, or tool.

Speaker: Jay Callicott, VP of Technical Operations at Mediacurrent 
Session Track: Site Building 

If you have ever googled to find “top Drupal modules” you probably have read Mediacurrent’s popular, long-running blog series on the top modules for Drupal, authored by our own Jay Callicott. In this session, follow him on a leisurely stroll through the best modules that Drupal 8 has to offer as Jay presents an updated list of his top picks. Like a guided tour of the Italian countryside, you can sit back and enjoy as your guide discusses the benefits of each module. By the end of this session, you will have been introduced to at least a few modules that will challenge the boundaries of your next project.

Speakers: Mediacurrent's Dawn Aly, VP of Digital Strategy and Mark Shropshire, Open Source Security Lead
Session Track: Business

Data security legislation like the GDPR (enforcement begins May 25th, 2018) allows users to control how and if their personal data is used by companies. This shift in control fundamentally changes how companies can collect, store, and use information about prospects and customers. While understanding and implementing privacy related regulation in web projects is a necessity, related knowledge and skill sets become a real business differentiator and a key part of a user’s privacy experience (PX).

Key Topics:

  • Practical interpretation of the GDPR 
  • How to determine if you are at risk for compliance 
  • Repeatable process for assessing security risks in Drupal websites 
  • Security by design
  • Impact to data, analytics, and personalization strategies 

Speakers: Kevin Basarab, Director of Development at Mediacurrent and Mike Priscella, Engineering Manager at Thrillist/ Group Nine Media. 
Session Track: Ambitious Digital Experiences 

In this session, we'll dive into how Group Nine Media (parent company of Thrillist.com, TheDodo.com, and others) is evolving the Drupal 8 editorial user experience and contributing that back to the community. We'll not only look into their use case but also explore what modules and options are out there for improving editorial UX without custom development work.

  • How is design/UX reversing to focus on the editorial experience?
  • What contrib modules currently enhance the editorial experience?
  • How can a better editorial experience be beneficial to your client? 

Speakers: Mediacurrent Senior Front End Developer Mario Hernandez; Cristina Chumillas, Designer and Frontend Developer at Ymbra; Lauri Eskola, Drupal Developer at Druid Oy
Session Track: Core Conversations 

The Out-of-the-Box initiative team is working on improving the first-time user experience of Drupal. The team is creating a new installation profile with the main goal of demonstrating how powerful Drupal is for creating beautiful websites for real life use cases.

The alpha version for The Out of the Box initiative has been committed to Drupal 8.6.x. But, what is it and what will it bring to core?
 

Speakers: A panel of community organizers, including Mediacurrent Senior Developer April Sides 
Session Track: Building Community

This conversation is a space for camp organizers (and attendees) to discuss all things event planning, from venue selection and budgeting to session programming and swag. 

Training Presenters: Mediacurrent Senior Front End Developers Mario Hernandez and Eric Huffman

With the component-based approach becoming the standard for Drupal 8 theming, we’re beginning to see some slick front end environments show up in Drupal themes. The promise that talented front enders with little Drupal knowledge can jump right in is much closer to reality.  However, before diving into this new front end bliss there are still some gotchas, plus lots of baked in goodies Drupal provides that one will need to have a handle on before getting started.

This training will focus on the UI Patterns module, which, although still in a release-candidate state, already solves many of the problems that arise from the Drupal integration process.

Additional Resources
Drupalcon Baltimore 2017 - SEO, I18N, and I18N SEO | Blog
Drupalcon: Not Just for Developers | Blog
The Real Value of Drupalcon | Blog

Feb 01 2018

Paragraphs is a powerful Drupal module that gives editors more flexibility in how they design and lay out the content of their pages. However, they are special in that they make no sense without a host entity. If we talk about Paragraphs, it goes without saying that they are to be attached to other entities.
In Drupal 8, individual migrations are built around an entity type. That means we implement a single migration for each entity type. Sometimes we draw relationships between the element being imported and an already imported one of a different type, but we never handle the migration of both simultaneously.
Migrating Paragraphs needs to be done in at least two steps: 1) migrating entities of type Paragraph, and 2) migrating entities referencing imported Paragraph entities.

Migration of Paragraph entities

You can migrate Paragraph entities in a way very similar to the way of migrating every other entity type into Drupal 8. However, a very important caveat is making sure to use the right destination plugin, provided by the Entity Reference Revisions module:

destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: paragraph_type

This is critical because you might be tempted to use something more common like entity:paragraph, which would seem to make sense given that Paragraphs are entities. However, you didn’t configure your Paragraph reference field as a conventional Entity Reference field, but as an Entity reference revisions field, so you need the matching destination plugin.

An example of the core of a migration of Paragraph entities:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls: 'feed.url/endpoint'
  ids:
    id:
      type: integer
  item_selector: '/elements'
  fields:
    - name: id
      label: Id
      selector: /element_id
    - name: content
      label: Content
      selector: /element_content
process:
  field_paragraph_type_content/value: content
destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: paragraph_type
migration_dependencies: { }

To give some context, this assumes the feed being consumed has a root level with an elements array filled with content arrays with properties like element_id and element_content, and we want to convert those content arrays into Paragraphs of type paragraph_type in Drupal, with the field_paragraph_type_content field storing the text that came from the element_content property.

Migration of the host entity type

Having imported the Paragraph entities already, we then need to import the host entities, attaching the appropriate Paragraphs to each one’s field_paragraph_type_content field. Typically this is accomplished by using the migration_lookup process plugin (formerly migration).

Every time an entity is imported, a row is created in the mapping table for that migration, with both the ID the entity has in the external source and the internal one it got after being imported. This way the migration keeps a correlation between both states of the data, for updating and other purposes.

The migration_lookup plugin takes an ID from an external source and tries to find an internal entity whose ID is linked to the external one in the mapping table, returning its ID in that case. After that, the entity reference field will be populated with that ID, effectively establishing a link between the entities in the Drupal side.

In the example below, the migration_lookup returns entity IDs and creates references to other Drupal entities through the field_event_schools field:

field_event_schools:
  plugin: iterator
  source: event_school
  process:
    target_id:
      plugin: migration_lookup
      migration: schools
      source: school_id

However, while references to nodes or terms basically consist of the ID of the referenced entity, when using the entity_reference_revisions destination plugin (as we did to import the Paragraph entities), two IDs are stored per entity. One is the entity ID and the other is the entity revision ID. That means the return of the migration_lookup process plugin is not a single integer, but an array containing both IDs.

process:
  field_paragraph_type_content:
    plugin: iterator
    source: elements
    process:
      temporary_ids:
        plugin: migration_lookup
        migration: paragraphs_migration
        source: element_id
      target_id:
        plugin: extract
        source: '@temporary_ids'
        index:
          - 0
      target_revision_id:
        plugin: extract
        source: '@temporary_ids'
        index:
          - 1

So instead of returning the array directly (which obviously wouldn’t work), we use the extract process plugin to pull out the individual integer IDs needed to create an effective reference.

Summary

In summary, it’s important to remember that migrating Paragraphs is a two-step process at minimum. First, you must migrate entities of type Paragraph. Then you must migrate entities referencing those imported Paragraph entities.

More on Drupal 8

Top 5 Reasons to Migrate Your Site to Drupal 8

Creating your Emulsify 2.0 Starter Kit with Drush

Web Chef Joel Travieso

Joel focuses on the backend and architecture of web projects seeking to constantly improve by considering the latest developments of the art.

Jan 18 2018

What are Spectre and Meltdown?

Have you noticed your servers or desktops running slower than usual? Spectre and Meltdown affect most of the devices we use daily: cloud servers, desktops, laptops, and mobile devices. For more details, see: https://meltdownattack.com/

How does this affect performance?

We finally have some answers about how this is going to affect us. After Pantheon patched their servers, they released an article showing a 10-30% negative performance impact on servers. For the whole article, visit: https://status.pantheon.io/incidents/x9dmhz368xfz

I can say that I personally have noticed my laptop’s CPU is running at much higher percentages than before the update for similar tasks.
Security patches are still being released for many operating systems, but traditional desktop OSs appear to have been covered now. If you haven’t already, make sure your OS is up to date. Don’t forget to update the OS on your phone.

Next Steps?

So what can we do in the Drupal world? First, you should follow up with your hosting provider and verify they have patched your servers. Then you need to find ways to counteract the performance loss. If you are interested in performance recommendations, Four Kitchens offers both frontend and backend performance audits.

As a quick win, if you haven’t already, upgrade to PHP 7, which should give you a performance boost of around 30-50% on PHP processes. Now that you are more informed about what Spectre and Meltdown are, help with the performance effort by volunteering or sponsoring a developer on January 27-28, 2018 for the Drupal Global Sprint Weekend 2018, specifically on performance related issues: https://groups.drupal.org/node/517797

Web Chef Chris Martin

Chris Martin is a support engineer at Four Kitchens. When not maintaining websites he can be found building drones, computers, robots, and occasionally traveling to China.

Dec 20 2017

One of the most common requests we get in regards to Emulsify is to show concrete examples of components. There is a lot of conceptual material out there on the benefits of component-driven development in Drupal 8—storing markup, CSS, and JavaScript together using some organizational pattern (à la Atomic Design), automating the creation of style guides (e.g., using Pattern Lab) and using Twig’s include, extends and embed functions to work those patterns into Drupal seamlessly. If you’re reading this article you’re likely already sold on the concept. It’s time for a concrete example!

In this tutorial, we’ll build a full site header containing a logo, a search form, and a menu – here’s the code if you’d like to follow along. We will use Emulsify, so pieces of this may be specific to Emulsify and we will try and note those where necessary. Otherwise, this example could, in theory, be extended to any Drupal 8 project using component-driven development.

Planning Your Component

The first step in component-driven development is planning. In fact, this may be the definitive phase in component-driven development. In order to build reusable systems, you have to break down the design into logical, reusable building blocks. In our case, we have 3 distinct components—what we would call in Atomic Design “molecules”—a logo, a search form, and a menu. In most component-driven development systems you would have a more granular level as well (“atoms” in Atomic Design). Emulsify ships with pre-built and highly flexible atoms for links, images, forms, and lists (and much more). This allows us to jump directly into project-specific molecules.

So, what is our plan? We are going to first create a molecule for each component, making use of the atoms listed above wherever possible. Then, we will build an organism for the larger site header component. On the Drupal side, we will map our logo component to the Site Branding block, the search form to the default Drupal search form block, the menu to the Main Navigation block and the site header to the header region template. Now that we have a plan, let’s get started on our first component—the logo.

The Logo Molecule

Emulsify automatically provides us with everything we need to print a logo – see components/_patterns/01-atoms/04-images/00-image/image.twig. Although it is an image atom, it has an optional img_url variable that will wrap the image in a link if present. So, in this case, we don’t even have to create the logo component. We merely need a variant of the image component, which is easy to do in Pattern Lab by duplicating components/_patterns/01-atoms/04-images/00-image/image.yml and renaming it as components/_patterns/01-atoms/04-images/00-image/image~logo.yml (see Pattern Lab documentation).

Next, we change the variables in the image~logo.yml as needed and add a new image_link_base_class variable, naming it whatever we like for styling purposes. For those who are working in a new installation of Emulsify alongside this tutorial, you will notice this file already exists! Emulsify ships with a ready-made logo component. This means we can immediately jump into mapping our new logo component in Drupal.
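As a sketch, the image~logo.yml variant might contain something like the following — the paths and alt text here are illustrative, so check the file Emulsify actually ships with for the real values:

```yaml
img_url: "/"
img_src: "/assets/images/logo.svg"
img_alt: "Home"
image_blockname: "logo"
image_link_base_class: "logo"
```

Pattern Lab merges this pseudo-pattern data over image.yml, so the logo renders as a linked image without any new Twig file.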

Connecting the Logo Component to Drupal

Although you could just write static markup for the logo, let’s use the branding block in Drupal (the block that supplies the theme logo or one uploaded via the Appearance Settings page). These instructions assume you have a local Drupal development environment complete with Twig debugging enabled. Add the Site Branding block to your header region in the Drupal administrative UI to see your branding block on your page. Inspect the element to find the template file in play.

In our case there are two templates—the outer site branding block file and the inner image file. It is best to use the file that contains the most relevant information for your component. Seeing as we need variables like image alt and image src to map to our component, the most relevant file would be the image file itself. Since Emulsify uses Stable as a base theme, let’s check there first for a template file to use. Stable uses core/themes/stable/templates/field/image.html.twig to print images, so we copy that file down to its matching directory in Emulsify creating templates/fields/image.html.twig (this is the template for all image fields, so you may have to be more specific with this filename). Any time you add a new template file, clear the cache registry to make sure that Drupal recognizes the new file. Now the goal in component-driven development is to have markup in components that simply maps to Drupal templates, so let’s replace the default contents of the image.html.twig file above ( <img{{ attributes }}> ) with the following:

{% include "@atoms/04-images/00-image/image.twig" with {
  img_url: "/",
  img_src: attributes.src,
  img_alt: attributes.alt,
  image_blockname: "logo",
  image_link_base_class: "logo",
} %}

We’re using the Twig include statement to use our markup from our original component and pass a mixture of static (url, BEM classes) and dynamic (img alt and src) content to the component. To figure out what Drupal variables to use for dynamic content, see first the “Available variables” section at the top of the Drupal Twig file you’re using and then use the Devel module and the kint function to debug the variables themselves. Also, if you’re new to seeing the BEM class variables (Emulsify-specific), see our recent post on why/how we use these variables (and the BEM function) to pass in BEM classes to Pattern Lab and the Drupal Attributes object. Basically, this include statement above will print out:

<a class="logo" href="/">
  <img class="logo__img" src="/themes/emulsify/logo.svg" alt="Home">
</a>

We should now see our branding block using our custom component markup! Let’s move on to the next molecule—the search form.

The Search Form Molecule

Component-driven development, particularly the division of components into controlled, separate atomic units, is not always perfect. But the beauty of Pattern Lab (and Emulsify) is that there is a lot of flexibility in how you markup a component. If the ideal approach of using a Twig function to include other smaller elements isn’t possible (or is too time consuming), simply write custom HTML for the component as needed for the situation! One area where we lean into this flexibility is in dealing with Drupal’s form markup. Let’s take a look at how you could handle the search block. First, let’s create a form molecule in Pattern Lab.

Form Wrapper

Create a directory in components/_patterns/02-molecules entitled “search-form” with a search-form.twig file with the following contents (markup tweaked from core/themes/stable/templates/form/form.html.twig):

<form {{ bem('search') }}>
  {% if children %}
    {{ children }}
  {% else %}
    <div class="search__item">
      <input title="Enter the terms you wish to search for." size="15" maxlength="128" class="form-search">
    </div>
    <div class="form-actions">
      <input type="submit" value="Search" class="form-item__textfield button js-form-submit form-submit">
    </div>
  {% endif %}
</form>

In this file (code here) we’re doing a check for the Drupal-specific variable “children” in order to pass one thing to Drupal and another to Pattern Lab. We want to make the markup as similar as possible between the two, so I’ve copied the relevant parts of the markup by inspecting the default Drupal search form in the browser. As you can see there are two classes we need on the Drupal side. The first is on the outer <form>  wrapper, so we will need a matching Drupal template to inherit that. Many templates in Drupal will have suggestions by default, but the form template is a great example of one that doesn’t. However, adding a new template suggestion is a minor task, so let’s add the following code to emulsify.theme:

/**
 * Implements hook_theme_suggestions_HOOK_alter() for form templates.
 */
function emulsify_theme_suggestions_form_alter(array &$suggestions, array $variables) {
  if ($variables['element']['#form_id'] == 'search_block_form') {
    $suggestions[] = 'form__search_block_form';
  }
}

After clearing the cache registry, you should see the new suggestion, so we can now add the file templates/form/form--search-block-form.html.twig. In that file, let’s write:
{% include "@molecules/search-form/search-form.twig" %}

The Form Element

We have only the “search__item” class left, for which we follow a similar process. Let’s create the file components/_patterns/02-molecules/search-form/_search-form-element.twig, copying the contents from core/themes/stable/templates/form/form-element.html.twig and making small tweaks like so:

{% set classes = [
  'js-form-item',
  'search__item',
  'js-form-type-' ~ type|clean_class,
  'search__item--' ~ name|clean_class,
  'js-form-item-' ~ name|clean_class,
  title_display not in ['after', 'before'] ? 'form-no-label',
  disabled == 'disabled' ? 'form-disabled',
  errors ? 'form-item--error',
] %}
{% set description_classes = [
  'description',
  description_display == 'invisible' ? 'visually-hidden',
] %}
<div {{ attributes.addClass(classes) }}>
  {% if label_display in ['before', 'invisible'] %}
    {{ label }}
  {% endif %}
  {% if prefix is not empty %}
    <span class="field-prefix">{{ prefix }}</span>
  {% endif %}
  {% if description_display == 'before' and description.content %}
    <div{{ description.attributes }}>
      {{ description.content }}
    </div>
  {% endif %}
  {{ children }}
  {% if suffix is not empty %}
    <span class="field-suffix">{{ suffix }}</span>
  {% endif %}
  {% if label_display == 'after' %}
    {{ label }}
  {% endif %}
  {% if errors %}
    <div class="form-item--error-message">
      {{ errors }}
    </div>
  {% endif %}
  {% if description_display in ['after', 'invisible'] and description.content %}
    <div{{ description.attributes.addClass(description_classes) }}>
      {{ description.content }}
    </div>
  {% endif %}
</div>

This file will not be needed in Pattern Lab, which is why we’ve used the underscore at the beginning of the name. This tells Pattern Lab not to display the file in the style guide. Now we need this markup in Drupal, so let’s add a new template suggestion in emulsify.theme like so:

/**
 * Implements hook_theme_suggestions_HOOK_alter() for form element templates.
 */
function emulsify_theme_suggestions_form_element_alter(array &$suggestions, array $variables) {
  if ($variables['element']['#type'] == 'search') {
    $suggestions[] = 'form_element__search_block_form';
  }
}

And now let’s add the file templates/form/form-element--search-block-form.html.twig with the following code:

{% include "@molecules/search-form/_search-form-element.twig" %}

We now have the basic pieces for styling our search form in Pattern Lab and Drupal. This was not the fastest element to theme in a component-driven way, but it is a good example of complex concepts that will help when necessary. We hope to make creating form components a little easier in future releases of Emulsify, similar to what we’ve done in v2 with menus. And speaking of menus…

The Main Menu

In Emulsify 2, we have made it a bit easier to work with another complex piece of Twig in Drupal 8, which is the menu system. The files that do the heavy-lifting here are components/_patterns/02-molecules/menus/_menu.twig  and components/_patterns/02-molecules/menus/_menu-item.twig  (included in the first file). We also already have an example of a main menu component in the directory

themes/emulsify/components/_patterns/02-molecules/menus/main-menu

which is already connected in the Drupal template

templates/navigation/menu--main.html.twig

Obviously, you can use this as-is or tweak the code to fit your situation, but let’s break down the key pieces which could help you define your own menu.

Menu Markup

Ignoring the code for the menu toggle inside the file, the key piece from themes/emulsify/components/_patterns/02-molecules/menus/main-menu/main-menu.twig is the include statement:

<nav id="main-nav" class="main-nav">
  {% include "@molecules/menus/_menu.twig" with {
    menu_class: 'main-menu'
  } %}
</nav>

This will use all the code from the original heavy-lifting files while passing in the class we need for styling. For an example of how to stub out component data for Pattern Lab, see components/_patterns/02-molecules/menus/main-menu/main-menu.yml. This component also shows how your styling and JavaScript can live alongside your component markup in the same directory. Finally, you can see a simpler example of using a menu like this in the components/_patterns/02-molecules/menus/inline-menu component. For now, let’s move on to placing our components into a header organism.
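For orientation, a Pattern Lab data stub along those lines might look roughly like this (a hypothetical sketch: the item keys below mirror Drupal’s menu templates, but the shipped main-menu.yml is the authoritative example):

```yaml
# Hypothetical menu stub — key names are illustrative; verify against
# the main-menu.yml file that ships with Emulsify.
items:
  - title: 'Home'
    url: '/'
  - title: 'About'
    url: '/about'
    below:
      - title: 'Team'
        url: '/about/team'
```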

The Header Organism

Now that we have our three molecule components built, let’s create a wrapper component for our site header. Emulsify ships with an empty component for this at components/_patterns/03-organisms/site/site-header. In our usage we want to change the markup in components/_patterns/03-organisms/site/site-header/site-header.twig to:

<header class="header">
  <div class="header__logo">
    {% block logo %}
      {% include "@atoms/04-images/00-image/image.twig" %}
    {% endblock %}
  </div>
  <div class="header__search">
    {% block search %}
      {% include "@molecules/search-form/search-form.twig" %}
    {% endblock %}
  </div>
  <div class="header__menu">
    {% block menu %}
      {% include "@molecules/menus/main-menu/main-menu.twig" %}
    {% endblock %}
  </div>
</header>

Notice the use of Twig blocks. These will help us provide default data for Pattern Lab while giving us the flexibility to replace those with our component templates on the Drupal side. To populate the default data for Pattern Lab, simply create components/_patterns/03-organisms/site/site-header/site-header.yml and copy over the data from components/_patterns/01-atoms/04-images/00-image/image~logo.yml and components/_patterns/02-molecules/menus/main-menu/main-menu.yml. You should now see your component printed in Pattern Lab.
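For illustration, the combined site-header.yml might end up looking something like this (hypothetical keys and values; copy the real ones from the two yml files named above):

```yaml
# Hypothetical site-header.yml sketch — merge the data from
# image~logo.yml and main-menu.yml; key names here are illustrative.
image_src: '/assets/images/logo.svg'
image_alt: 'Site logo'
items:
  - title: 'Home'
    url: '/'
  - title: 'About'
    url: '/about'
```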

Header in Drupal

To print the header organism in Drupal, let’s work with the templates/layout/region--header.html.twig file, replacing the default contents with:

{% extends "@organisms/site/site-header/site-header.twig" %}
{% block logo %}
  {{ elements.emulsify_branding }}
{% endblock %}
{% block search %}
  {{ elements.emulsify_search }}
{% endblock %}
{% block menu %}
  {{ elements.emulsify_main_menu }}
{% endblock %}

Here, we’re using the Twig extends statement to be able to use the Twig blocks we created in the component. You can also use the more robust embed statement when you need to pass variables like so:

{% embed "@organisms/site/site-header/site-header.twig" with {
  variable: "something",
} %}
  {% block logo %}
    {{ elements.emulsify_branding }}
  {% endblock %}
  {% block search %}
    {{ elements.emulsify_search }}
  {% endblock %}
  {% block menu %}
    {{ elements.emulsify_main_menu }}
  {% endblock %}
{% endembed %}

For our purposes, we can simply use the extends statement. You’ll notice that we are using the elements variable. This variable is currently not listed in the Stable region template’s documentation block at the top, but it is extremely useful for printing the blocks that are currently in that region. Finally, if you’ve just added the file, be sure to clear the cache registry; you should now see your full header in Drupal.

Final Thoughts

Component-driven development is not without trials, but I hope we have touched on some of the more difficult ones in this article to speed you on your journey. If you would like to view the branch of Emulsify where we built this site header component, you can see that here. Feel free to sift through and reverse-engineer the code to figure out how to build your own component-driven Drupal project!

This fifth episode concludes our five-part video-blog series for Emulsify 2.x. Thanks for following our Emulsify 2.x tutorials. Miss a post? Read the full series here.

Pt 1: Installing Emulsify | Pt 2: Creating your Emulsify 2.0 Starter Kit with Drush | Pt 3: BEM Twig Function | Pt 4: DRY Twig Approach

Just need the videos? Watch them all on our channel.

Download Emulsify

Web Chef Evan Willhite

Evan Willhite is a frontend engineer at Four Kitchens who thrives on creating delightful digital experiences for users, clients, and fellow engineers. He enjoys running, hot chicken, playing music, and being a homebody with his family.

November 13th, 2017

Welcome to the fourth episode in our video series for Emulsify 2.x. Emulsify 2.x is a new release that embodies our commitment to component-driven design within Drupal. We’ve added Composer and Drush support, as well as open-source Twig functions and many other changes to increase ease-of-use.

In this video, we’re going to teach you how to best use a DRY Twig approach when working in Emulsify. This blog post accompanies a tutorial video, embedded at the end of this post.

DRYing Out Your Twigs

Although we’ve been using a DRY Twig approach in Emulsify since before the 2.x release, it’s a topic worth addressing because it is unique to Emulsify and provides great benefit to your workflow. After all, what drew you to component-driven development in the first place? Making things DRY, of course!

In component-driven development, we build components once and reuse them in different combinations—like playing with Lego. In Emulsify, we use Sass mixins and BEM-style CSS to make our CSS as reusable and isolated as possible. DRY Twig simply extends these same benefits to the HTML itself. Let’s look at an example:

Non-DRY Twig:

<h2 class="title">
  <a class="title__link" href="/">Link Text</a>
</h2>

DRY Twig:

<h2 class="title">
  {% include "@atoms/01-links/link/link.twig" with {
    "link_content": "Link Text",
    "link_url": "/",
    "link_class": "title__link",
  } %}
</h2>

The code with DRY Twig is more verbose, but by switching to this method, we’ve now removed a point of failure in our HTML. We’re not repeating the same HTML everywhere! We write that HTML once and reuse it everywhere it is needed.

The concept is simple, and it is found everywhere in the components directory that ships in Emulsify. HTML gets written mostly as atoms and is simply reused in larger components using the default include, extends or embed functions built into Twig. We challenge you to try this in a project, and see what you think.
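For context, a link atom that such includes target might look roughly like this (a hedged sketch, not the exact file that ships with Emulsify):

```twig
{# Hypothetical @atoms/01-links/link/link.twig sketch — variable names
   match the include example above; the shipped atom may differ. #}
<a class="{{ link_class|default('link') }}" href="{{ link_url }}">{{ link_content }}</a>
```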

[embedded content]

Thanks for following our Emulsify 2.x tutorials. Miss a post? Read the full series here.

Pt 1: Installing Emulsify | Pt 2: Creating your Emulsify 2.0 Starter Kit with Drush | Pt 3: BEM Twig Function | Pt 4: DRY Twig Approach | Pt 5: Building a Full Site Header in Drupal

Just need the videos? Watch them all on our channel.

Download Emulsify

Web Chef Evan Willhite

Evan Willhite is a frontend engineer at Four Kitchens who thrives on creating delightful digital experiences for users, clients, and fellow engineers. He enjoys running, hot chicken, playing music, and being a homebody with his family.

October 26th, 2017

Welcome to the third episode in our video series for Emulsify 2.x. Emulsify 2.x is a new release that embodies our commitment to component-driven design within Drupal. We’ve added Composer and Drush support, as well as open-source Twig functions and many other changes to increase ease-of-use.

In this video, we’re going to teach you how Emulsify works with the BEM Twig extension. This blog post accompanies a tutorial video, embedded at the end of this post.

Background

In Emulsify 2.x, we have enhanced our support for BEM in Drupal by creating the BEM Twig extension. The BEM Twig extension makes it easy to deliver classes to both Pattern Lab and Drupal while using Drupal’s Attributes object. It also has the benefit of simplifying our syntax greatly. See the code below.

Emulsify 1.x:

{% set paragraph_base_class_var = paragraph_base_class|default('paragraph') %}
{% set paragraph_modifiers = ['large', 'red'] %}
<p class="{{ paragraph_base_class_var }}{% for modifier in paragraph_modifiers %} {{ paragraph_base_class_var }}--{{ modifier }}{% endfor %}{% if paragraph_blockname %} {{ paragraph_blockname }}__{{ paragraph_base_class_var }}{% endif %}">
  {% block paragraph_content %}
    {{ paragraph_content }}
  {% endblock %}
</p>

Emulsify 2.x:

<p {{ bem('paragraph', ['large', 'red']) }}>
  {% block paragraph_content %}
    {{ paragraph_content }}
  {% endblock %}
</p>

In both Pattern Lab and Drupal, the function above will create <p class="paragraph paragraph--large paragraph--red">, but in Drupal it will use the equivalent of <p{{ attributes.addClass('paragraph paragraph--large paragraph--red') }}>, appending these classes to whatever classes core or other plugins provide as well. Simpler syntax + Drupal Attributes support!

We have released the BEM Twig function open source under the Drupal Pattern Lab initiative. It is in Emulsify 2.x by default, but we wanted other projects to be able to benefit from it as well.

Usage

The BEM Twig function accepts four arguments, only one of which is required.

Simple block name:
<h1 {{ bem('title') }}>

In Drupal and Pattern Lab, this will print:

<h1 class="title">

Block with modifiers (optional array allowing multiple modifiers):

<h1 {{ bem('title', ['small', 'red']) }}>

This creates:

<h1 class="title title--small title--red">

Element with modifiers and block name (optional):

<h1 {{ bem('title', ['small', 'red'], 'card') }}>

This creates:

<h1 class="card__title card__title--small card__title--red">

Element with block name, but no modifiers (optional):

<h1 {{ bem('title', '', 'card') }}>

This creates:

<h1 class="card__title">

Element with modifiers, block name and extra classes (optional, in case you need non-BEM classes):

<h1 {{ bem('title', ['small', 'red'], 'card', ['js-click', 'something-else']) }}>

This creates:

<h1 class="card__title card__title--small card__title--red js-click something-else">

Element with extra classes only (optional):

<h1 {{ bem('title', '', '', ['js-click']) }}>

This creates:

<h1 class="title js-click">

Ba da BEM, Ba da boom

With the new BEM Twig extension that we’ve added to Emulsify 2.x, you can easily deliver classes to Pattern Lab and Drupal, while keeping a nice, simple syntax. Thanks for following along! Make sure you check out the other posts in this series and their video tutorials as well!

[embedded content]

Thanks for following our Emulsify 2.x tutorials. Miss a post? Read the full series here.

Pt 1: Installing Emulsify | Pt 2: Creating your Emulsify 2.0 Starter Kit with Drush | Pt 3: BEM Twig Function | Pt 4: DRY Twig Approach | Pt 5: Building a Full Site Header in Drupal

Just need the videos? Watch them all on our channel.

Download Emulsify

Web Chef Evan Willhite

Evan Willhite is a frontend engineer at Four Kitchens who thrives on creating delightful digital experiences for users, clients, and fellow engineers. He enjoys running, hot chicken, playing music, and being a homebody with his family.

October 13th, 2017

Welcome to the second episode in our new video series for Emulsify. Emulsify 2.x is a new release that embodies our commitment to component-driven design within Drupal. We’ve added Composer and Drush support, as well as open-source Twig functions and many other changes to increase ease-of-use.

In this video, we’re going to teach you how to create an Emulsify 2.0 starter kit with Drush. This blog post follows the video closely, so you can skip ahead or repeat sections in the video by referring to the timestamps for each section.

PURPOSE [00:15]

This screencast will specifically cover the Emulsify Drush command, whose purpose is to set up a new copy of the Emulsify theme.

Note: I used the word “copy” here and not “subtheme” intentionally. This is because the subtheme of your new copy is Drupal Core’s Stable theme, NOT Emulsify.

This new copy of Emulsify will use the human-readable name that you provide, and will build the necessary structure to get you on your way to developing a custom theme.

REQUIREMENTS [00:45]

Before we dig in too deep I recommend that you have the following installed first:

  • a Drupal 8 Core installation
  • the Drush CLI command, at least major version 8
  • Node.js, preferably the latest stable version
  • a working copy of the Emulsify demo theme, 2.x or greater

If you haven’t already watched the Emulsify 2.0 composer install presentation, please stop this video and go watch that one.

Note: If you aren’t already using Drush 9, you should consider upgrading as soon as possible, because the next minor version release of Drupal Core, 8.4.0, is only going to work with Drush 9 or greater.

RECOMMENDATIONS [01:33]

We recommend that you use PHP 7 or greater, as you get massive performance improvements for very little work.

We also recommend that you use composer to install Drupal and Emulsify. In fact, if you didn’t use Composer to install Emulsify—or at least run Composer install inside of Emulsify—you will get errors. You will also notice errors if npm install failed on the Emulsify demo theme installation.

AGENDA [02:06]

Now that we have everything set up and ready to go, this presentation will first discuss the theory behind the Drush script. Then we will show what you should expect if the installation was successful. After that, I will give you some links to additional resources.

BACKGROUND [02:25]

The general idea of the command is that it creates a new theme from Emulsify’s files but is actually based on Drupal Core’s Stable theme. Once you have run the command, the demo Emulsify theme is no longer required and you can uninstall it from your Drupal codebase.

WHEN, WHERE, and WHY? [02:44]

WHEN: You should run this command before writing any custom code but after your Drupal 8 site is working and Emulsify has been installed (via Composer).

WHERE: You should run the command from the Drupal root or use a Drush alias.

WHY: You should NOT edit the Emulsify theme’s files directly. If you installed Emulsify the recommended way (via Composer), the next time you run composer update ALL of your custom code changes will be wiped out. If this happens, I really hope you are using version control.

HOW TO USE THE COMMAND? [03:24]

Arguments:

Well, first, it requires a single argument: the human-readable name. This name can contain spaces and capital letters.

Options:

The command has defaults set for options that you can override.

The first is the theme description, which will appear within Drupal and in your .info file.

The second is the machine name; this option lets you pick the directory name and the machine name as it appears within Drupal.

The third option is the path where your theme will be installed. It defaults to “themes/custom”, but you can change it to any directory relative to your web root.

The fourth and final option is the slim option, for advanced users who don’t need demo content and want only the bare minimum required to create a new theme.

Note:

Only the human_readable_name argument is required. Options can appear in any order or not at all, and you can pass just one if you only want to change a single default.
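Putting the argument and options together, a hypothetical invocation run from the Drupal root might look like this (the flag names are inferred from the descriptions above; check `drush help emulsify` for the exact spelling in your version):

```shell
# Hypothetical example — verify flag names with `drush help emulsify`.
drush emulsify "Clean Theme" \
  --description="A custom theme for my site" \
  --machine-name=clean_theme \
  --path=themes/custom \
  --slim
```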

SUCCESS [04:52]

If your new theme was successfully created, you should see a success message in the output. In the example below, I used the slim option because it is a bit faster to run, but again, this is optional and NOT required.

The success message contains information you may find helpful, including the name of the theme that was created, the path where it was installed, and the next required step for setup.

THEME SETUP [05:25]

Setting up your custom theme: navigate to your custom theme on the command line and type yarn, then watch as Pattern Lab is downloaded and installed. If the installation was successful, you should see a Pattern Lab success message, and your theme should now be visible within Drupal.

COMPILING YOUR STYLE GUIDE [05:51]

Now that you have Pattern Lab successfully installed and committed to your version control system, you are probably eager to use it. Emulsify uses npm scripts to set up a local Pattern Lab instance for displaying your style guide.

The script you are interested in is yarn start. Run this command for all of your local development. You do NOT need a working Drupal installation at this point to develop your components.

If you need a designer who isn’t familiar with Drupal to make some tweaks, you will only have to give them your codebase and have them run yarn to install and yarn start to see your style guide.

It is, however, recommended that the initial setup of your components be done by someone with background knowledge of Drupal templates and themes, as the variables passed to each component will differ for each Drupal template.

For more information on components and templates, keep an eye out for our forthcoming demo components and screencasts on building components.

VIEWING YOUR STYLE GUIDE [07:05]

Now that you have run yarn start, you can open your browser and navigate to the localhost URL that appears in your console. If you get an error here, you might already have something running on port 3000. If you need to cancel this script, hit Control+C.

ADDITIONAL RESOURCES [07:24]

Thank you for watching today’s screencast. We hope you found this presentation informative and enjoy working with Emulsify 2.0. If you would like some additional resources, you can go to emulsify.info or github.com/fourkitchens/emulsify.

[embedded content]

Thanks for following our Emulsify 2.x tutorials. Miss a post? Read the full series here.

Pt 1: Installing Emulsify | Pt 2: Creating your Emulsify 2.0 Starter Kit with Drush | Pt 3: BEM Twig Function | Pt 4: DRY Twig Approach | Pt 5: Building a Full Site Header in Drupal

Just need the videos? Watch them all on our channel.

Download Emulsify

Web Chef Chris Martin

Chris Martin is a support engineer at Four Kitchens. When not maintaining websites he can be found building drones, computers, robots, and occasionally traveling to China.

October 5th, 2017

Welcome to the first episode in our new video series for Emulsify. Emulsify 2.x is a new release that embodies our commitment to component-driven design within Drupal. We’ve added Composer and Drush support, as well as open-source Twig functions and many other changes to increase ease-of-use.

In this video, we’re going to get you up and running with Emulsify. This blog post accompanies a tutorial video, which you can find embedded at the end.

Emulsify is, at its core, a prototyping tool. At Four Kitchens we also use it as a Drupal 8 theme starter kit. Depending on how you want to use it, the installation steps will vary. I’ll quickly go over how to install and use Emulsify as a standalone prototyping tool, then I’ll show you how we use it to theme Drupal 8 sites.

Emulsify Standalone

Installing Emulsify core as a standalone tool is a simple process with Composer and npm (or Yarn).

  1. composer create-project fourkitchens/emulsify --stability dev --no-interaction emulsify
  2. cd emulsify
  3. yarn install (or npm install, if you don’t have yarn installed)

Once the installation process is complete, you can start it with either npm start or yarn start:

  1. yarn start

Once it’s up, you can use either the Local or External link to view the Pattern Lab instance in the browser. (The External link is useful for physical-device testing, like on your phone or tablet, but can vary per machine. So, if you’re using hosted fonts, you might have to add a bunch of IPs to your account to accommodate all of your developers.)

The start process runs all of the build and watch commands. So once it’s up, all of your changes are instantly reflected in the browser.

I can add additional colors to the _color-vars.scss file, edit the card.yml example data, or even update the 01-card.twig file to modify the structure of the card component.

That’s really all there is to using Emulsify as a prototyping tool. You can quickly build out your components using component-driven design without having a full web server and site up and running.

Emulsify in a Composer-Based Drupal 8 Installation

It’s general best practice to install Drupal 8 via Composer, and that’s what we do at Four Kitchens. So, we’ve built Emulsify 2 to work great in that environment. I won’t cover the details of installing Drupal via Composer since that’s out of scope for this video, and there are videos that cover that already. Instead, I’ll quickly run through that process, and then come back and walk through the details of how to install Emulsify in a Composer-based Drupal 8 site.

Okay, I’ve got a fresh Drupal 8 site installed. Let’s install Emulsify alongside it.

From the project root, we’ll run the composer require command:

  • composer require fourkitchens/emulsify

Next, we’ll enable Emulsify and its dependencies:

  • cd web
  • drush en emulsify components unified_twig_ext -y

At this point, we highly recommend you use the Drush script that comes with Emulsify to create a custom clone of Emulsify for your actual production site. The reason is that any change you make to Emulsify core will be overwritten when you update Emulsify, and there’s currently no good way to create a child theme of a component-based, Pattern Lab-powered Drupal theme. So, the Drush script simply creates a clone of Emulsify and turns the file-renaming process into a simple script.

We have another video covering the Drush script, so definitely watch that for all of the details. For this video, though, I’ll just use Emulsify core, since I’m not going to make any customizations.

  • cd web/themes/contrib/emulsify/ (If you do create a clone with the drush script, you’ll cd web/themes/custom/THEME_NAME/)
  • yarn install

  • yarn start

Now we have our Pattern Lab instance up and running, accessible at the links provided.

We can also head over to the “Appearance” page on our site, and set our theme as the default. When we do that, and go back to the homepage, it looks all boring and gray, but that’s just because we haven’t started doing any actual theming yet.

At this point, the theme is installed, and you’re ready to create your components and make your site look beautiful!

[embedded content]

Thanks for following our Emulsify 2.x tutorials. Miss a post? Read the full series here.

Pt 1: Installing Emulsify | Pt 2: Creating your Emulsify 2.0 Starter Kit with Drush | Pt 3: BEM Twig Function | Pt 4: DRY Twig Approach | Pt 5: Building a Full Site Header in Drupal

Just need the videos? Watch them all on our channel.

Download Emulsify

Web Chef Brian Lewis

Brian Lewis is a frontend engineer at Four Kitchens, and is passionate about sharing knowledge and learning new tools and techniques.

July 13th, 2017

When creating the Global Academy for continuing Medical Education (GAME) site for Frontline, we had to tackle several complex problems regarding content migrations. The previous site had a lot of legacy content we had to bring over into the new system. By tackling each unique problem, we were able to migrate most of the content into the new Drupal 7 site.

Setting Up the New Site

The system Frontline used before the redesign was called Typo3, along with a suite of individual, internally-hosted ASP sites for conferences. Frontline had several kinds of content that displayed differently throughout the site. The complexity with handling the migration was that a lot of the content was in WYSIWYG fields that contained large amounts of custom HTML.

We decided to go with Drupal 7 for this project so we could more easily use code that was created from the MDEdge.com site.


The GAME website redesign greatly improved the flow of the content and how it was displayed on the frontend, and part of that improvement was displaying specific pieces of content in different sections of the page. The burning question that plagued us when tackling this problem was “How are we going to extract the specific pieces of data and get them inserted into the correct fields in Drupal?”

Before we could get deep into the code, we had to do some planning and setup to make sure we were clear in how to best handle the different types of content. This also included hammering out the content model. Once we got to a spot where we could start migrating content, we decided to use the Migrate module. We grabbed the current site files, images and database and put them into a central location outside of the current site that we could easily access. This would allow us to re-run these migrations even after the site launched (if we needed to)!

Migrating Articles

This content on the new site is connected to MDEdge.com via a REST API. One complication is that the content on GAME was added manually to Typo3 and wasn’t tagged for use with specific fields. The content type on the new Drupal site had a few fields for the data we were displaying, plus a field that stores the article ID from MDEdge.com. To get that ID for this migration, we mapped the title of news articles in Typo3 to the title of the article on MDEdge.com. It wasn’t a perfect solution, but it allowed us to do an initial migration of the data.

Conferences Migration

For GAME’s conferences, since there were not too many on the site, we decided to import the main conference data via a Google spreadsheet. The Google doc was a fairly simple spreadsheet that contained a column we used to identify each row in the migration, plus a column for each field that is in that conference’s content type. This worked out well because most of the content in the redesign was new for this content type. This approach allowed the client to start adding content before the content types or migrations were fully built.

Our spreadsheet handled the top level conference data, but it did not handle the pages attached to each conference. Page content was either stored in the Typo3 data or we needed to extract the HTML from the ASP sites.

Typo3 Categories to Drupal Taxonomies

To make sure we mapped the content in the migrations properly, we created another Google doc mapping file that connected the Typo3 categories to Drupal taxonomies. We set it up to support multiple taxonomy terms that could be mapped to one Typo3 category.
[NB: Here is some code that we used to help with the conversion: https://pastebin.com/aeUV81UX.]

Our mapping system worked out fantastically well. The only problem we encountered was that, since we allowed three taxonomy terms to be mapped to one Typo3 category, the client noticed cases where content with more than one Typo3 category ended up with too many taxonomy terms assigned. But this was a content-related issue and required them to revisit the mapping document and tweak it as necessary.

Slaying the Beast:
Extracting, Importing, and Redirecting

One of the larger problems we tackled was how to get the HTML from the Typo3 system and the ASP conference sites into the new Drupal 7 setup.

The ASP conference sites were handled by grabbing the HTML for each of those pages and extracting the page title, body, and photos. The migration of the conference sites was challenging because we were dealing with different HTML for different sites and trying to get all those differences matched up in Drupal.

Grabbing the data from the Typo3 sites presented another challenge because we had to figure out where the different data was stored in the database. This was a uniquely interesting process because we had to determine which tables were connected to which other tables in order to figure out the content relationships in the database.


A few things we learned in this process:

  • We found all of the content on the current site was in these tables (which are connected to each other): pages, tt_content, tt_news, tt_news_cat_mm and link_cache.
  • After talking with the client, we were able to grab content based on certain Typo3 categories or the pages hierarchy relationship. This helped fill in some of the gaps where a direct relationship could not be made by looking at the database.
  • It was clear that getting 100% of the legacy content wasn’t going to be realistic, mainly because of the loose content relationships in Typo3. After talking to the client we agreed to not migrate content older than a certain date.
  • It was also clear that—given how much HTML was in the content—some manual cleanup was going to be required.

Once we were able to get to the main HTML for the content, we had to figure out how to extract the specific pieces we needed from that HTML.

Once we had access to the data we needed, it was a matter of getting it into Drupal. The migrate module made a lot of this fairly easy with how much functionality it provided out of the box. We ended up using the prepareRow() method a lot to grab specific pieces of content and assigning them to Drupal fields.
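To illustrate the pattern (the class and field names below are hypothetical, not taken from the actual GAME migration), a Drupal 7 Migrate prepareRow() override along these lines lets you parse legacy HTML and hand the extracted pieces to your field mappings:

<?php

/**
 * Sketch only — names are illustrative. Shows the general prepareRow()
 * pattern described above: parse the legacy WYSIWYG HTML stored on the
 * source row and expose extracted pieces as new source properties.
 */
class ExampleConferencePageMigration extends Migration {

  public function prepareRow($row) {
    if (parent::prepareRow($row) === FALSE) {
      return FALSE;
    }

    // Parse the legacy HTML blob from the Typo3 row.
    $doc = new DOMDocument();
    @$doc->loadHTML($row->legacy_html);
    $xpath = new DOMXPath($doc);

    // Pull the first <h1> out as the page title, if present.
    $headline = $xpath->query('//h1')->item(0);
    if ($headline) {
      $row->extracted_title = trim($headline->textContent);
    }

    // Collect image paths so they can be mapped to a file field.
    $row->extracted_images = array();
    foreach ($xpath->query('//img') as $img) {
      $row->extracted_images[] = $img->getAttribute('src');
    }

    return TRUE;
  }

}

The new properties (extracted_title, extracted_images) can then be referenced in the migration’s field mappings like any other source field.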

Handling Redirects

We wanted to handle as many of the redirects as we could automatically, so the client wouldn’t have to add thousands of redirects and to ensure existing links would continue to work after the new site launched. To do this we mapped the unique row in the Typo3 database to the unique ID we were storing in the custom migration.

As long as you are handling the unique IDs properly in your use of the Migration API, this is a great way to handle mapping what was migrated to the data in Drupal. You use the unique identifier stored for each migration row and grab the corresponding node ID to get the correct URL that should be loaded. Below are some sample queries we used to get access to the migrated nodes in the system. We used UNION queries because the content that was imported from the legacy system could be in any of these tables.

SELECT destid1 FROM migrate_map_cmeactivitynode WHERE sourceid1 IN (:sourceid)
UNION
SELECT destid1 FROM migrate_map_cmeactivitycontentnode WHERE sourceid1 IN (:sourceid)
UNION
SELECT destid1 FROM migrate_map_conferencepagetypo3node WHERE sourceid1 IN (:sourceid)
…
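Rather than hand-writing the UNION for every map table, the query can be generated. A small sketch (the table names are copied from the query above; the function name is ours):

<?php

/**
 * Builds the UNION lookup across migrate_map_* tables shown above.
 * A sketch — in a Drupal 7 site you would run the result through
 * db_query() with :sourceid bound to the legacy Typo3 identifier.
 */
function example_migrate_map_lookup_query(array $map_tables) {
  $parts = array();
  foreach ($map_tables as $table) {
    $parts[] = "SELECT destid1 FROM $table WHERE sourceid1 IN (:sourceid)";
  }
  return implode(' UNION ', $parts);
}

// Usage (illustrative):
// $sql = example_migrate_map_lookup_query(array(
//   'migrate_map_cmeactivitynode',
//   'migrate_map_cmeactivitycontentnode',
//   'migrate_map_conferencepagetypo3node',
// ));
// $nid = db_query($sql, array(':sourceid' => $legacy_id))->fetchField();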

Wrap Up

Migrating complex websites is rarely simple. One thing we learned on this project is that it is best to jump deep into migrations early in the project lifecycle, so the big roadblocks can be identified as early as possible. It also is best to give the client as much time as possible to work through any content cleanup issues that may be required.

We used a lot of Google spreadsheets to get needed information from the client. This made things much simpler on all fronts and allowed the client to start gathering needed content much sooner in the development process.

In a perfect world, all content would be easily migrated over without any problems, but this usually doesn’t happen. It can be difficult to know when you have taken a migration “far enough” and you are better off jumping onto other things. This is where communication with the full team early is vital to not having migration issues take over a project.

Web Chef Chris Roane
Chris Roane

When not breaking down and solving complex problems as quickly as possible, Chris volunteers for a local theater called Arthouse Cinema & Pub.

Jun 29 2017
Jun 29
June 29th, 2017

Recently I was working on a Drupal 8 project where we were using the improved Features module to create configuration container modules for some special purposes. Due to client architectural needs, we had to move the /features folder into a separate repository. We basically needed to make it available to many sites in a way that let us keep doing active development on it, and we did so by making the new repo a composer dependency of all our projects.

One of the downsides of this new direction was the effects in CircleCI builds for individual projects, since installing and reverting features was an important part of it. For example, to make a new feature module available, we’d push it to this ‘shared’ repo, but to actually enable it we’d need to push the bit change in the core.extension.yml config file to our project repo. Yes, we were using a mixed approach: both features and conventional configuration management.

So a new pull request would be created in both repositories. The problem for Circle builds—given the approach previously outlined—is that builds generated for the pull request in the project repository would require the master branch of the ‘shared’ one. So, for the pull request in the project repo, we’d try to build a site by importing configuration that says a particular feature module should be enabled, and that module wouldn’t exist (likely not present in shared master at that time, still a pull request), so it would totally crash.

There is probably no straightforward way to solve this problem, but we came up with a solution that is half code, half strategy. Beyond technical details, there is no practical way to determine what branch of the shared repo should be required for a pull request in the project repo, unless we assume conventions. In our case, we assumed that the correct branch to pair with a project branch was one named the same way. So if a build was a result of a pull request from branch X, we could try to find a PR from branch X in the shared repo and if it existed, that’d be our guy. Otherwise we’d keep pulling master.

So we created a script to do that:

<?php

$branch = $argv[1];
$github_token = $argv[2];
$github_user = $argv[3];
$project_user = $argv[4];

$shared_repos = array(
  'organization/shared',
);

foreach ($shared_repos as $repo) {
  print_r("Checking repo $repo for a pull request in a '$branch' branch...\n");
  $pr = getPRObjectFromBranch($branch, $github_token, $github_user, $project_user, $repo);
  if (!empty($pr)) {
    print_r("Found. Requiring...\n");
    exec("composer require $repo:dev-$branch");
    print_r("$repo:dev-$branch pulled.\n");
  }
  else {
    print_r("Nothing found.\n");
  }
}

function getPRObjectFromBranch($branch_name, $github_token, $github_user, $project_user, $repo) {
  $ch = curl_init();
  curl_setopt($ch, CURLOPT_URL, "https://api.github.com/repos/$repo/pulls?head=$project_user:$branch_name");
  curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
  curl_setopt($ch, CURLOPT_USERPWD, "$github_user:$github_token");
  curl_setopt($ch, CURLOPT_USERAGENT, "$github_user");
  $output = json_decode(curl_exec($ch), TRUE);
  curl_close($ch);
  return $output;
}

As you probably know, Circle builds are connected to the internet, so you can make remote requests. What we’re doing here is using the GitHub API in the middle of a build in the project repo to connect to our shared repo with cURL and try to find a pull request whose branch name matches the one we’re building over. If the request returned something, then we can safely say there is a branch named the same way as the current one with an open pull request in the shared repo, and we can require it.

What’s left for this to work is actually calling the script:

- php scripts/require_feature_branch.php "$CIRCLE_BRANCH" "$GITHUB_TOKEN" "$CIRCLE_USERNAME" "$CIRCLE_PROJECT_USERNAME"

We can do this at any point in circle.yml, since composer require will actually update the composer.json file, so any other composer interaction after executing the script will take your requirement into consideration. Notice that the shared repo will be required twice if you also have the requirement in your composer.json file. You could safely remove it from there if you instruct the script to require the master branch when no matching branch has been found, although this could have unintended effects in other types of environments, like local development.
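For context, here is one way the call could sit in circle.yml — the section placement is our assumption, and should be adapted to your build:

dependencies:
  pre:
    # Pull the matching shared-repo branch, if one exists, before composer install runs.
    - php scripts/require_feature_branch.php "$CIRCLE_BRANCH" "$GITHUB_TOKEN" "$CIRCLE_USERNAME" "$CIRCLE_PROJECT_USERNAME"

Running it in a pre step ensures composer.json is already updated by the time the build installs dependencies.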

Note: A quick reference about the parameters passed to the script:

$GITHUB_TOKEN: # Generate from https://github.com/settings/tokens
$CIRCLE_*: # CircleCI vars, automatically available

[Editor’s Note: The post “Running CircleCI Builds Based on Many Repositories” was originally published on Joel Travieso’s Medium blog.]

Web Chef Joel Travieso
Joel Travieso

Joel focuses on the backend and architecture of web projects seeking to constantly improve by considering the latest developments of the art.

Web Chef Dev Experts
Development

Blog posts about backend engineering, frontend code work, programming tricks and tips, systems architecture, apps, APIs, microservices, and the technical side of Four Kitchens.

Read more Development
May 31 2017
May 31
May 31st, 2017

In the last post, we created a nested accordion component within Pattern Lab. In this post, we will walk through the basics of integrating this component into Drupal.

Requirements

Even though Emulsify is a ready-made Drupal 8 theme, there are some requirements and background to be aware of when using it.

Emulsify is currently meant to be used as a starterkit. In contrast to a base theme, a starterkit is simply enabled as-is, and tweaked to meet your needs. This is purposeful—your components should match your design requirements, so you should edit/delete example components as needed.

There is currently a dependency for Drupal theming, which is the Components module. This module allows one to define custom namespaces outside of the expected theme /templates directory. Emulsify comes with predefined namespaces for the atomic design directories in Pattern Lab (atoms, molecules, organisms, etc.). Even if you’re not 100% clear currently on what this module does, just know all you have to do is enable the Emulsify theme and the Components module and you’re off to the races.
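For illustration, the Components module reads its custom namespaces from the theme’s .info.yml file. A sketch along the lines of what Emulsify predefines (the exact paths here are our assumption and may differ from the shipped theme):

# emulsify.info.yml (excerpt — paths are illustrative)
component-libraries:
  atoms:
    paths:
      - components/_patterns/01-atoms
  molecules:
    paths:
      - components/_patterns/02-molecules
  organisms:
    paths:
      - components/_patterns/03-organisms

Each key becomes a Twig namespace, which is what makes includes like "@molecules/accordion-item/accordion-item.twig" resolve.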

Components in Drupal

In our last post we built an accordion component. Let’s now integrate this component into our Drupal site. It’s important to understand what individual components you will be working with. For our purposes, we have two: an accordion item (<dt>, <dd>) and an accordion list (<dl>). It’s important to note that these will also correspond to 2 separate Drupal files. Although this can be built in Drupal a variety of ways, in the example below, each accordion item will be a node and the accordion list will be a view.

Accordion Item

You will first want to create an Accordion content type (machine name: accordion), and we will use the title as the <dt> and the body as the <dd>. Once you’ve done this (and added some Accordion content items), let’s add our node template Twig file for the accordion item by duplicating templates/content/node.html.twig into templates/content/node--accordion.html.twig. In place of the default include function in that file, place the following:

{% include "@molecules/accordion-item/accordion-item.twig"
   with {
      "accordion_term": label,
      "accordion_def": content.body,
   }
%}

As you can see, this is a direct copy of the include statement in our accordion component file except the variables have been replaced. Makes sense, right? We want Drupal to replace those static variables with its dynamic ones, in this case label (the node title) and content.body. If you visit your accordion node in the browser (note: you will need to rebuild cache when adding new template files), you will now see your styled accordion item!

But something’s missing, right? When you click on the title, the body field should collapse, which comes from our JavaScript functionality. While JavaScript in the Pattern Lab component will automatically work because Emulsify compiles it to a single file loaded for all components, we want to use Drupal’s built-in aggregation mechanisms for adding JavaScript responsibly. To do so, we need to add a library to the theme. This means adding the following code into emulsify.libraries.yml:

accordion:
  js:
    components/_patterns/02-molecules/accordion-item/accordion-item.js: {}

Once you’ve done that and rebuilt the cache, you can now use the following snippet in any Drupal Twig file to load that library [NB: read more about attach_library]:

{{ attach_library('emulsify/accordion') }}

So, once you’ve added that function to your node–accordion.html.twig file, you should have a working accordion item. Not only does this function load your accordion JavaScript, but it does so in a way that only loads it when that Twig file is used, and also takes advantage of Drupal’s JavaScript aggregation system. Win-win!

Accordion List

So, now that our individual accordion item works as it should, let’s build our accordion list. For this, I’ve created a view called Accordion (machine name: accordion) that shows “Content of Type: Accordion” and a page display that shows an unformatted list of full posts.

Now that the view has been created, let’s copy views-view-unformatted.html.twig from our parent Stable theme (/core/themes/stable/templates/views) and rename it views-view-unformatted--accordion.html.twig. Inside of that file, we will write our include statement for the accordion <dl> component. But before we do that, we need to make a key change to that component file. If you go back to the contents of that file, you’ll notice that it has a for loop built to pass in Pattern Lab data and nest the accordion items themselves:

<dl class="accordion-item">
  {% for listItem in listItems.four %}
    {% include "@molecules/accordion-item/accordion-item.twig"
      with {
        "accordion_term": listItem.headline.short,
        "accordion_def": listItem.excerpt.long
      }
    %}
  {% endfor %}
</dl>

In Drupal, we don’t want to iterate over this static list; all we need to do is provide a single variable for the  Views rows to be passed into. Let’s tweak our code a bit to allow for that:

<dl class="accordion-item">
  {% if drupal == true %}
    {{ accordion_items }}
  {% else %}
    {% for listItem in listItems.four %}
      {% include "@molecules/accordion-item/accordion-item.twig"
        with {
          "accordion_term": listItem.headline.short,
          "accordion_def": listItem.excerpt.long
        }
      %}
    {% endfor %}
  {% endif %}
</dl>

You’ll notice that we’ve added an if statement to check whether “drupal” is true—this variable can actually be anything Pattern Lab doesn’t recognize (see the next code snippet). Finally, in views-view-unformatted--accordion.html.twig let’s put the following:

{% set drupal = true %}
{% include "@organisms/accordion/accordion.twig"
  with {
    "accordion_items": rows,
  }
%}

At the view level, all we need is this outer <dl> wrapper and to just pass in our Views rows (which will contain our already component-ized nodes). Rebuild the cache, visit your view page and voila! You now have a fully working accordion!

Conclusion

We have now not only created a more complex nested component that uses JavaScript… we have done it in Drupal! Your HTML, CSS and JavaScript are where they belong (in the components themselves), and you are merely passing Drupal’s dynamic data into those files.

There’s definitely a lot more to learn; below is a list of posts and webinars to continue your education and get involved in the future of component-driven development and our tool, Emulsify.

Recommended Posts

  • Shared Principles There is no question that the frontend space has exploded in the past decade, having gone from the seemingly novice aspect of web development to a first-class specialization.…
  • Webinar presented by Brian Lewis and Evan Willhite 15-March-2017, 1pm-2pm CDT Modern web applications are not built of pages, but are better thought of as a collection of components, assembled…
  • Welcome to Part Three of our frontend miniseries on style guides! In this installment, we cover the bits and pieces of atomic design using Pattern Lab.
Evan Willhite
Evan Willhite

Evan Willhite is a frontend engineer at Four Kitchens who thrives on creating delightful digital experiences for users, clients, and fellow engineers. He enjoys running, hot chicken, playing music, and being a homebody with his family.

Development

Blog posts about backend engineering, frontend code work, programming tricks and tips, systems architecture, apps, APIs, microservices, and the technical side of Four Kitchens.

Read more Development
May 24 2017
May 24
May 24th, 2017

In the last post, we introduced Emulsify and spoke a little about the history that went into its creation. In this post, we will walk through the basics of Emulsify to get you building lovely, organized components automatically added to Pattern Lab.

Prototyping

Emulsify is at its most basic level a prototyping tool. Assuming you’ve met the requirements and have installed Emulsify, running the tool is as simple as navigating to the directory and running `npm start`. This task takes care of building your Pattern Lab website, compiling Sass to minified CSS, linting and minifying JavaScript.

Also, this single command will start a watch task and open your Pattern Lab instance automatically in a browser. So now when you save a file, it will run the appropriate task and refresh the browser to show your latest changes. In other words, it is an end-to-end prototyping tool meant to allow a developer to start creating components quickly with a solid backbone of automation.

Component-Based Theming

Emulsify, like Pattern Lab, expects the developer to use a component-based building approach. This approach is elegantly simple: write your DRY components, including your Sass and JavaScript, in a single directory. Automation takes care of the Sass compilation to a single CSS file and JavaScript to a single JavaScript file for viewing functionality in Pattern Lab.

Because Emulsify leverages the Twig templating engine, you can build each component HTML(Twig) file and then use the Twig functions include, embed and extends to combine components into full-scale layouts. Sound confusing? No need to worry—there are multiple examples pre-built in Emulsify. Let’s take a look at one below.

Simple Accordion

Below is a simple but common user experience—the accordion. Let’s look at the markup for a single FAQ accordion item component:

<dt class="accordion-item__term">What is Emulsify?</dt>
<dd class="accordion-item__def">A Pattern Lab prototyping tool and Drupal 8 base theme.</dd>

If you look in the components/_patterns/02-molecules/accordion-item directory, you’ll find this Twig file as well as the CSS and JavaScript files that provide the default styling and open/close functionality respectively. (You’ll also see a YAML file, which is used to provide data for the component in Pattern Lab.)

But an accordion typically has multiple items, and HTML definitions should have a dl wrapper, right? Let’s take a look at the emulsify/components/_patterns/03-organisms/accordion/accordion.twig markup:

<dl class="accordion-item">
  {% for listItem in listItems.four %}
    {% include "@molecules/accordion-item/accordion-item.twig"
      with {
        "accordion_term": listItem.headline.short,
        "accordion_def": listItem.excerpt.long
      }
    %}
  {% endfor %}
</dl>

Here you can see that the only HTML added is the dl wrapper. Inside of that, we have a Twig for loop that will loop through our list items and for each one include our single accordion item component above. The rest of the component syntax is Pattern Lab specific (e.g., listItems, headline.short, excerpt.long).

Conclusion

If you are following along in your own local Emulsify installation, you can view this accordion in action inside your Pattern Lab installation. With this example, we’ve introduced not only the basics of component-based theming, but we’ve also seen an example of inheriting templates using the Twig include function. Using this example as well as the other pre-built components in Emulsify, we have what we need to start prototyping!

In the next article, we’ll dive into how to implement Emulsify as a Drupal 8 theme and start building a component-based Drupal 8 project. You can also view a recording of a webinar we made in March. Until then, see you next week!

Recommended Posts

  • Webinar presented by Brian Lewis and Evan Willhite 15-March-2017, 1pm-2pm CDT Modern web applications are not built of pages, but are better thought of as a collection of components, assembled…
  • Welcome to the final post of our frontend miniseries on style guides! In this installment, the Web Chefs talk through how we chose Pattern Lab over KSS Node for Four…
  • Shared Principles There is no question that the frontend space has exploded in the past decade, having gone from the seemingly novice aspect of web development to a first-class specialization.…
Evan Willhite
Evan Willhite

Evan Willhite is a frontend engineer at Four Kitchens who thrives on creating delightful digital experiences for users, clients, and fellow engineers. He enjoys running, hot chicken, playing music, and being a homebody with his family.

Development

Blog posts about backend engineering, frontend code work, programming tricks and tips, systems architecture, apps, APIs, microservices, and the technical side of Four Kitchens.

Read more Development
May 17 2017
May 17
May 17th, 2017

Shared Principles

There is no question that the frontend space has exploded in the past decade, having gone from the seemingly novice aspect of web development to a first-class specialization. At the smaller agency level, being a frontend engineer typically involves a balancing act between a general knowledge of web development and keeping up with frontend best practices. This makes it all the more important for agency frontend teams to take a step back and determine some shared principles. We at Four Kitchens did this through late last summer and into fall, and here’s what we came up with. A system working from shared principles must be:

1. Backend Agnostic

Even within Four Kitchens, we build websites and applications using a variety of backend languages and database structures, and this is only a microcosm of the massive diversity in modern web development. Our frontend team strives to choose and build tools that are portable between backend systems. Not only is this a smart goal internally but it’s also an important deliverable for our clients as well.

2. Modular

It seems to me the frontend community has spent the past few years trying to find ways to incorporate best practices that have a rich history in backend programming languages. We’ve realized we, too, need to be able to build code structures that can scale without brittleness or bloat. For this reason, the Four Kitchens frontend team has rallied around component-based theming and approaches like BEM syntax. Put simply, we want the UI pieces we build to be as portable as the structure itself: flexible, removable, DRY.

3. Easy to Learn

Because we are aiming to build tools that aren’t married to backend systems and are modular, they should in turn be much more approachable. We want to build tools that help a frontend engineer working in any language to build logically organized, component-based prototypes quickly and with little ramp-up.

4. Open Source

Four Kitchens has been devoted to the culture of open-source software from the beginning, and we as a frontend team want to continue that commitment by leveraging and building tools that do the same.

Introducing Emulsify

Knowing all this, we are proud to introduce Emulsify—a Pattern Lab prototyping tool and Drupal 8 starterkit theme. Wait… Drupal 8 starterkit you say? What happened to backend agnostic? Well, we still build a lot in Drupal, and the overhead of it being a starterkit theme is tiny and unintrusive to the prototyping process. More on this in the next post.
[NB: Check back next week for our next Emulsify post!]

With these shared values, we knew we had enough of a foundation to build a tool that would both hold us accountable to these values and help instill them as we grow and onboard new developers. We also are excited about the flexibility that this opens up in our process by having a prototyping tool that allows any frontend engineer with knowledge in any backend system (or none) to focus on building a great UI for a project.

Next in the series, we’ll go through the basics of Emulsify and explain its out-of-the-box strengths that will get you prototyping in Pattern Lab and/or creating a Drupal 8 theme quickly.

Recommended Posts

Evan Willhite
Evan Willhite

Evan Willhite is a frontend engineer at Four Kitchens who thrives on creating delightful digital experiences for users, clients, and fellow engineers. He enjoys running, hot chicken, playing music, and being a homebody with his family.

Development

Blog posts about backend engineering, frontend code work, programming tricks and tips, systems architecture, apps, APIs, microservices, and the technical side of Four Kitchens.

Read more Development
Mar 07 2017
Mar 07

This weekend’s DrupalCamp London wasn’t my first Drupal event at all. I’ve been to three DrupalCons in Europe, four DrupalCamps in Dublin, a few other DrupalCamps in Ireland, and lots of meetups, but in this case I experienced a lot of ‘first times’ that I want to share.

This was the first time I’d attended a Drupal event representing a sponsor organisation, and as a result the way I experienced it was completely different.

Firstly, you focus more on your company’s goals than on your personal aims. In this case I was helping Capgemini UK engage and recruit people for our open positions, which meant I socialised more and tried to connect with people. We also had T-shirts, and it’s easier to attract people when you have something free to offer. I was also able to talk with the other sponsors about why they sponsored the event: some were also recruiting, but most were selling their solutions to prospective clients, Drupal developers and agencies.

The best part of this experience was the people I met at other companies, and the attendees approaching us for a T-shirt or a job opportunity.

New member of Capgemini UK perspective

As a new joiner on the Capgemini UK Drupal team, I attended this event when I wasn’t even a month into the company. I’m glad I could attend at such short notice in my new position; I think that says a lot about Capgemini’s focus on training and career development, and how much they care about Drupal.

As a new employee, this event allowed me to meet colleagues from different departments and teams in a non-working environment. Again, the best part of this experience was the people I met and the relationships I made.

I joined Capgemini from Ireland, so I was also new to the London Drupal community, and the DrupalCamp gave me the opportunity to connect and create relationships with other members of the Drupal community. Of course they were busy organising this great event, but I was able to contact some of the members, and I have to say they were very friendly when I approached any of the crew or other local members attending the event. I am very happy to have met some friendly people and I am committed to help and volunteer my time in future events, so this was a very good starting point. And again the best were the people I met.

Non-session perspective

As I had other duties I couldn’t attend every session, but I did make it to a few, as well as the keynotes. Special mention goes to the Saturday keynote from Matt Glaman: it was very motivational, and made me think that anyone can grow as a developer if they try and seek out the resources to gain the knowledge. The closing keynote from Danese Cooper was just as inspirational, about what open source is and what it should be, and how we, the developers, have the power to make it happen. We could also enjoy Malcolm Young’s presentation about code reviews.

Conclusion

Closing this article I would like to come back to the best part of DrupalCamp for me this year, which was the people. They are always the best part of social events. I was able to catch up with old friends from Ireland, engage with people considering a position at Capgemini and introduce myself to the London Drupal community, so overall I am very happy with this DrupalCamp London and I will be happy to return next year. In the meantime I will be attending some Drupal meetups and trying to get involved in the community, so don’t hesitate to contact me if you have any questions or need my help.

March 2nd, 2017

You might have heard about high availability before but didn’t think your site was large enough to handle the extra architecture or overhead. I would like to encourage you to think again and be creative.

Background

Digital Ocean has a concept they call Floating IPs. A Floating IP is an IP address that can be instantly moved from one Droplet to another Droplet in the same data center. This idea is great: it allows you to keep your site running in the event of a failure.

Credit

I have to give credit to BlackMesh for handling this process quite well. The only thing I had to do was create the tickets to change the architecture and BlackMesh implemented it.

Exact Problem

One of our support clients had the need for a complete site relaunch due to a major overhaul in the underlying architecture of their code. Specifically, they had the following changes:

  1. Change in the site docroot
  2. Migration from a single site architecture to a multisite architecture based on domain access
  3. Upgrade of the PHP version, which required a server replacement and an upgrade of the Linux distribution version

Any of these individually could have benefited from this approach. We just bundled all of the changes together to deliver minimal downtime to the site’s users.

Solution

So, what is the right solution for a data migration that takes well over three hours to run? Site downtime for hours during peak traffic is unacceptable. The answer we came up with was to use a floating IP that can easily change the backend server when we are ready to flip the switch. This allowed us to migrate our data on a new, separate server using its own database (essentially having two live servers at the same time).

Benefits

Notice that we didn’t need to change the DNS records here, which meant we didn’t have to wait for them to propagate all over the internet. The new site was live instantly.

Additional Details

Some other notes during the transition that may lead to separate blog posts:

  1. We created a shell script to handle the actual deployment and tested it before the actual “go live” date to minimize surprises.
  2. A private network was created to allow the servers to communicate to each other directly and behind the scenes.
  3. To complicate this process, during development (prelaunch) the user base grew so much that we had to offload the Solr server onto another machine to reduce server CPU usage. This means that additional backend servers were also involved in this transition.

Go-Live (Migration Complete)

After you have completed your deployment process, you are ready to switch the floating IP to the new server. In our case we were using “keepalived”, which responds to a health check on the server. Our health check was a simple PHP file that responded with the text true or false. So, when we were ready to switch, we just changed the health check’s response to false. That gave us an instant switch from the old server to the new server with minimal interruption.
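For reference, a keepalived set-up along these lines might look like the sketch below. The paths, interface and script names are illustrative rather than taken from the actual deployment; on Digital Ocean the failover action itself is typically a notify script that calls their API to reassign the Floating IP.

```
# Sketch of /etc/keepalived/keepalived.conf (names and paths illustrative).
vrrp_script chk_web {
    # Passes (exit 0) while the health-check file returns "true", e.g.:
    #   curl -fs http://localhost/health.php | grep -q true
    script "/usr/local/bin/check-health.sh"
    interval 2
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    track_script {
        chk_web
    }
    # On becoming master, reassign the Floating IP via the provider API.
    notify_master "/usr/local/bin/assign-floating-ip.sh"
}
```

Flipping the health check to return false makes chk_web fail, keepalived demotes the old master, and the backup node’s notify script points the Floating IP at the new server.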

Acceptable Losses

There were a few things we couldn’t get around:

  1. The need for a content freeze
  2. The need for a user registration freeze

The reason for this was that the database updates required the site to be in maintenance mode while they were performed.

A problem worth mentioning:

  1. The database did have a few tables with acceptable losses. The users’ sessions table and the cache_form table were both out of sync when we switched over, so any new sessions and saved forms were unfortunately lost during this process. The result was that users had to log in again and fill out forms that weren’t submitted. In the rare event that a user had changed their name or other fields on their preferences page, those changes were lost.

Additional Considerations

  1. Our mail preferences are handled by third parties
  2. Comments aren’t allowed on this site

Chris Martin

Chris Martin is a junior engineer at Four Kitchens. When not maintaining websites he can be found building drones, computers, robots, and occasionally traveling to China.

Dec 01 2016

Startups and products can move faster than agencies that serve clients, as there are no feedback loops and manual QA steps by an external authority that can halt a build going live.

One of the roundtable discussions that popped up this week while we’re all in Minsk is that agencies which practise Agile transparently, as SystemSeed does, see a common trade-off: CI/CD (Continuous Integration / Continuous Deployment) isn’t quite possible as long as you have manual QA and that lead time baked in.

Non-Agile (or “Waterfall”) agencies can potentially supply work faster, but without any insight by the client, inevitably leading to change requests. I’ve always visualised this as the false economy of Waterfall, as demonstrated here:

image

Would the client prefer Waterfall plus change requests, being kept in the dark throughout development but with all work potentially delivered faster (and never in the final state)? Or would they prefer full transparency, with having to check all story details, QA and sign-off, as well as multi-stakeholder oversight? In short, it can get complicated.

CI and CD aren’t truly possible when a manual review step is mandatory. Today we maintain thorough manual QA by ourselves and our clients before deploying, using a “standard” (feature branch -> dev -> stage -> production) devops process, where manual QA and automated test suites run both at the feature branch level and just before deployment (Stage). Pantheon provides this hosting infrastructure and makes it simple, as visualised below:

image

This week we brainstormed Blue & Green live environments which may allow for full Continuous Integration whereby deploys are automated whenever scripted tests pass, specifically without manual client sign off. What this does is add a fully live clone of the Production environment to the chain whereby new changes are always deployed out to the clone of live and at any time the system can be switched from pointing at the “Green” production environment, to the “Blue” clone or back again.

image

Assuming typical rollbacks are simple and databases are either in sync or both Green and Blue codebases link to a single DB, this theory is well supported and could well be the future of devops, especially when deploys are best made “immediately” rather than the next morning or at times of low traffic.

In this case clients would be approving work already deployed to a production-ready environment which will be switched to as soon as their manual QA step is completed.

One argument made was that our Pantheon standard model allows for this in Stage already, we just need an automated process to push from Stage to Live once QA is passed. We’ll write more on this if our own processes move in this direction.

November 16th, 2016

Speed Up Migration Development

One of the things that Drupal developers do for clients is content migration. This process uses hours of development time and often has one developer dedicated to it for the first half of the project. In the end, the completeness of the migration depends partly on how much time your client is willing to spend on building out migration for each piece of their content and settings. If you’ve come here, you probably want to learn how to speed up your migration development so you can move on to more fun aspects of a project.

The Challenge

Our client, the NYU Wagner Graduate School of Public Service, was no exception when they decided to move to Drupal 8. Since our client had 65 content types and 84 vocabularies to weed through, our challenge was to build all those migrations into their budget and schedule.

The Proposed Solution

Since this was one of our first Drupal 8 sites, I was the first to dig my hands into the migration system. I was particularly stoked that everything in Drupal 8 is considered to be an entity, which opened up a bunch of possibilities. The new automated migration system that came with core, Migrate Drupal, was also particularly intriguing. In fact, our client had started down the path of using Migrate Drupal to upgrade their site to D8. Given that they had field collections and entity references, and that the Migrate Drupal module was still very much experimental for Drupal 7 upgrades, this didn’t pan out with a complete migration of their data.

The proposed solution was to use the --configure-only flag of the drush migrate-upgrade command. Doing so builds out templated upgrade configurations that would move all data from Drupal 7 or Drupal 6 to Drupal 8, with the added bonus that you can use them as a starting point and modify them from there.

Migration in Drupal 7 vs Drupal 8

Since we now have the hundred-mile-high view of the end game, let’s talk a little about why and how this works. In Drupal 7, migrations are strictly class-based. You can see an example of a Drupal 7 migration in the Migrate Example module. The structure of the migration tends to be one big blob of logic (broken up by class functions, of course) around a single migration. Here are the parts:

  • Class Constructor: where you define your source, destination, and field mapping
  • Prepare: a function where you do all your data processing

In Drupal 8, the concept of a migration has been abstracted out into various parts, which makes them reusable and feel more like a “building with blocks” approach. You can find an example inside the Migrate Plus module. Here are the parts:

  • Source Plugins: a class defining the query, initial data alteration, keys, and fields provided by the source
  • Destination Plugins: a class defining how to store the data received in Drupal 8
  • Process Plugins: a class defining how to transform data from the source into something that can be used by the destination or other process plugins; you can find a full list of what comes with core in Migrate’s documentation
  • Migration Configuration: a configuration file that brings the configuration of all the source, destination, and process plugins to make a migration

Now y’all might have noticed I left out hook_prepare_row. Granted, this is still available, and it was a key way many people used to manipulate data across several source fields that behaved the same. With process plugins, you can now abstract out that functionality and reuse it in your field mappings.
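As a sketch of that idea, clean-up logic that might once have lived in prepare() can become a chain of process plugins in the field mapping. The field names below are made up for illustration; callback and default_value are process plugins that ship with core:

```yaml
process:
  # Trim whitespace from the source value, then fall back to a default
  # when the trimmed value comes out empty.
  field_subtitle:
    - plugin: callback
      callable: trim
      source: field_d7_subtitle
    - plugin: default_value
      default_value: 'Untitled'
```

Because the chain is just configuration, the same clean-up can be reused on any field that behaves the same way.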

How “Migrate Drupal” Makes the Process Better

There are tons of reasons to use Migrate Drupal to start your migration.

It builds a migration from your Drupal site

You might have seen above that I mentioned that Migrate Drupal provides a templated set of configurations. This is a product of some very elaborate migration detection classes. This means you will get all the configurations for:

  • content types
  • field setup
  • field configuration
  • various site settings
  • taxonomy vocabularies
  • custom blocks
  • content and their revisions
  • etc…

These will be built specifically for the site you are migrating from. This results in tons of configuration files—my first attempt created over 140 migration YAML files.

It’s hookable

Hookable means that it isn’t just a core-only thing: it’s expandable. Contributed modules can provide their own templates for their entities and field types, allowing Migrate Drupal to move over that data too. For example, it is completely possible (and in progress) for the Field Collection module to provide migration templates so that the migration will know how to import a field collection field. Not only that, the plugins provided by contributed modules can be used in your custom migrations as well.

No initial need for redirection of content

Here’s an interesting one: everything comes over pretty much verbatim. Node IDs, term IDs, etc. are exactly the same. URL aliases come over too, by default. Theoretically, you could have the exact same site from D7 on D8 if you ported over the theme.

More time to do the alterations the client wants

Since you aren’t spending your time building all the initial source plugins, process plugins, destination plugins, and configurations, you now have more time to alter the migrations to fit the new content model, or to work with a spiffy new module like Paragraphs.

How-To: Start a Migration with “Migrate Drupal”

Ok so here is the technical part. From here on is a quick How-To that gets you up and going. Things you will need are:

  • a Drupal 6 or 7 site
  • your brand new Drupal 8 site
  • a text editor
  • drush

1. Do a little research and install contrib modules.

First we need to find out whether the contrib modules installed on our Drupal 6/7 site are available in Drupal 8 and have a migration component. Once we identify the ones we can use, go ahead and install them in Drupal 8 so they can help you do the migration. Here are a few things to consider:

Is the field structure the same as in Drupal 6/7? The entity destination plugin is a glorified way to say $entity->create($data); $entity->save();. Given this, if you know that on Drupal 6/7 the field value was, for example…

[
  'value' => 'This is my value.',
  'format' => 'this is my format'
]

…and that it’s the same on Drupal 8, then you can rest easy. The custom field will be migrated over perfectly.

Is there a cckfield process plugin in the Drupal 8 module for the custom field type? When porting fields, there is an automated process for detecting field types. If the field type you are pulling from matches one of the field types known to a cckfield migration plugin, that plugin will be used. You can find these in src/Plugin/migrate/cckfield of any given module. The Text core module has an example.

Is there a migration template for your entity or field in the Drupal 8 module? A migration template tells the Migrate Drupal module that there are other migrations that need to be created. In the case of the Text module, you will see one for the teaser length configuration. There can be multiple templates; they look like migrations themselves, but are amended in such a way as to make them specific to your site. You can find these in the migration_templates directory of the module.

Are there source, process, or destination plugins in the Drupal 8 module? These all help you (or the Migrate Drupal module) move content from your old site to your new one. It’s very possible that there are plugins not wired up to be used in an automated way yet, but that doesn’t keep you from using them! Look for them in src/Plugin/migrate.

2. Install the contrib migrate modules.

First you must install the various contributed modules that help you build these configurations and test your migrations. Using your favorite build method, add the following modules to your project:

  • Migrate Plus (migrate_plus)
  • Migrate Tools (migrate_tools)
  • Migrate Upgrade (migrate_upgrade)

NOTE: Keep in mind that you will need to be mindful of the version that goes with what version of Drupal Core. Example 8.x-1.x goes with Drupal 8.0.*, 8.x-2.x goes with Drupal 8.1.*, and 8.x-3.x goes with Drupal 8.2.*.

3. Set up the upgrade/migrate databases.

Be sure to give your database a key. The default is ‘upgrade’ for drush migrate-upgrade and ‘migrate’ for drush migrate-import. I personally stick with ‘migrate’ and just make sure to pass the custom setting to migrate-upgrade, since I use drush migrate-import a ton more than drush migrate-upgrade.

$databases = [
  'default' => [
    'default' => [
      'database' => 'drupal',
      'username' => 'user',
      'password' => 'pass',
      'host' => 'localhost',
      'port' => '',
      'driver' => 'mysql',
      'prefix' => '',
    ],
  ],
  'migrate' => [
    'default' => [
      'database' => 'migrate',
      'username' => 'user',
      'password' => 'pass',
      'host' => 'localhost',
      'port' => '',
      'driver' => 'mysql',
      'prefix' => '',
    ],
  ],
];

4. Export the migration configuration.

First I want to give credit to Mike Ryan for originally documenting this process. Without it, or his help in IRC, you wouldn’t have gotten this article today.

If you aren’t connecting to a live instance in your database settings, go ahead and import your Drupal 6/7 database using your preferred method. Take your pick:

  • drush sql-sync
  • drush sql-drop --database=migrate; gunzip -c /path/to/migrate.sql.gz | drush sqlc --database=migrate

Next run Migrate Upgrade to get your configuration built and stored in the Drupal 8 site.

drush migrate-upgrade --legacy-db-key=migrate --configure-only

Finally store your configuration. I prefer just to stick it in the sync directory created by Drupal 8 (or in my case configure for checking into Git).

drush config-export sync -y

I’m verbose about the directory because we usually have one for local development stored in the local key also. You can leave off the word sync if you only have a single sync directory.

5. Update your migration group with the info for the migration.

This is a quick and simple step. Find migrate_plus.migration_group.migrate_drupal_7.yml or migrate_plus.migration_group.migrate_drupal_6.yml and set the shared configuration. I usually make mine look like this:

langcode: en
status: true
dependencies: {  }
id: migrate_drupal_7
label: 'Import from Drupal 7'
description: 'Migrations originally generated from drush migrate-upgrade --configure-only'
source_type: 'Drupal 7'
module: null
shared_configuration:
  source:
    key: migrate

6. Alter the configuration.

Ok here comes the fun part. You should now have all the configurations to import everything. You could in fact now run drush mi --all and in theory get a complete migration of your old site to your new site in the data sense.

With that said, you will most likely need to make alterations. For example, in my migration we didn’t want all of the filters migrated over. Instead, we wanted to define the filters first, and then use a map to map filters from one type to another. So I did a global find across all the migration files for:

    plugin: migration
    migration: upgrade_d7_filter_format
    source: format

And replaced it with the following:

    plugin: static_map
    source: format
    map:
      php_code: filter_null
      filtered_html: basic_html

Another example of a change you can make is swapping the source plugin, which allows you to change the data you pull. For example, I extended the node source plugin to add a where clause so that I would only get data created after a certain time.

namespace Drupal\wg_drupal7_migrate\Plugin\migrate\source;

use Drupal\node\Plugin\migrate\source\d7\Node as MigrateD7Node;
use Drupal\migrate\Row;

/**
 * Drupal 7 nodes source from database.
 *
 * @MigrateSource(
 *   id = "wg_d7_node",
 *   source_provider = "node"
 * )
 */
class Node extends MigrateD7Node {

  /**
   * {@inheritdoc}
   */
  public function query() {
    $query = parent::query();
    // If we pass in a timestamp... only get things created since then.
    if (isset($this->configuration['highwater'])) {
      $query->condition('n.created', $this->configuration['highwater'], '>=');
    }
    return $query;
  }

}

Lastly, you may want to change the destination configuration. By default, the configuration of the migration will go to a content type with the same name. It may be possible that you changed the name of the content type or are merging several content types together. Simply altering…

destination:
  plugin: 'entity:node'
  default_bundle: page

…to be…

destination:
  plugin: 'entity:node'
  default_bundle: landing_page

…may be something you need to do.

Once you are done altering the migration, save the configuration files. You can use the sync directory, or if you plan on distributing it in a module, you can use the config/install folder of your module.

Rebuild your site with the new configuration via your preferred method, or simply run drush config-import sync -y.

7. Migrate the data.

This is the last step. When you are ready, migrate the data: either run each of the migrations individually (using --force to run a migration even though the migrations it depends on haven’t run), use --execute-dependencies to run its dependencies first, or just go ahead and go for the gold with drush migrate-import --all.

Caveats

Finally, after all the good news, there are a few valid points to be made about the limitations of this method.

IDs are verbatim due to the complexity of dependencies

This means that the migrations currently expect all the nids, tids, fids, and other IDs to be exactly what they were on Drupal 6 or 7, which causes issues when your client is building new staged data. You have three options in this case:

  1. Alter node, node_revision, file_managed, taxonomy_term_data, users, and probably some others I’m missing here that house the main entities entity reference fields will need, so that their keys start at something your client will not reach on their current production site while you are developing.
  2. Do not start adding or altering content on Drupal 8 until all migrations are done.
  3. Go through all the migrations and add migration process plugins where an entity is referenced, and then remove the main id from the migration of that entity.
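For the third option, the process section of a node migration would use the migration process plugin, the same one the generated configurations already use for filter formats, so the reference picks up the new ID instead of the old one. The field and migration names here are hypothetical:

```yaml
process:
  # Look up the node ID created by the article migration rather than
  # copying the Drupal 7 nid verbatim.
  field_related_article:
    plugin: migration
    migration: upgrade_d7_node_article
    source: field_related_article
```

You would then also remove the nid mapping from the referenced migration itself so Drupal assigns fresh IDs.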

In my case, I went with the first solution because this realization hit me kinda late. Our plan was to migrate now for data so our client would have something to show their stakeholders, and then migrate again later to get the latest data before going live.

There are superfluous migrations

You will always find that you don’t want to keep the settings verbatim from the Drupal 6 or 7 site. This means you will have to remove that migration and remove its dependency from all the other migrations that depend on it. Afterwards, you will need to make sure that that case is covered. I shared an example in this article where we decided to go ahead and configure new filter formats. Another example may be that you don’t even give a crap about the dblog settings from your old Drupal site.

Final Thoughts

For NYU Wagner, we were able to save a ton of time by having the migrations built out for us to start with. Just the hours saved on building the field configurations for the majority of the content types that were to stay the same was worth it. It was also a great bridge into “How do migrations work?” Once our feature set was nailed down, we had a more complete custom migration for our client in a fraction of the time it would have taken to build the migrations out one at a time. Happy migrating.

Allan Chappell

Allan brings technological know-how and grounds it with some simple country living. His interests include DevOps, animal husbandry (raising rabbits and chickens), hiking, and automated testing.

Oct 27 2016

In a previous article on this blog, I talked about why code review is a good idea, and some aspects of how to conduct them. This time I want to dig deeper into the practicalities of reviewing code, and mention a few things to watch out for.

Code review is the first line of defence against hackers and bugs. When you approve a pull request, you’re putting your name to it - taking a share of responsibility for the change.

Once bad code has got into a system, it can be difficult to remove. Trying to find problems in an existing codebase is like looking for an unknown number of needles in a haystack, but when you’re reviewing a pull request it’s more like looking in a handful of hay. The difficult part is recognising a needle when you see one. Hopefully this article will help you with that.

Code review shouldn’t be a box-ticking exercise, but it can be helpful to have a list of common issues to watch out for. As well as the important question of whether the change will actually work, the main areas to consider are:

  • Security
  • Performance
  • Accessibility
  • Maintainability

I’ll touch on these areas in more detail - I’ll be talking about Drupal and PHP in particular, but a lot of the points I’ll make are relevant to other languages and frameworks.

Security

I don’t claim to be an expert on security, and often count myself lucky that I work in what my colleague Andrew Harmel-Law calls “a creative-inventive market, not a safety-critical one”.

Having said that, there are a few common things to keep an eye out for, and developers should be aware of the OWASP top ten list of vulnerabilities. When working with Drupal, you should bear in mind the Drupal security team’s advice for writing secure code. For me, the most important points to consider are:

Does the code accept user input without proper sanitisation?

In short - don’t trust user input. The big attack vectors like XSS and SQL injection are based on malicious text strings. Drupal provides several types of text filtering - the appropriate filter depends on what you’re going to do with the data, but you should always run user input through some kind of sanitisation.

Are we storing sensitive data anywhere we shouldn’t be?

Security isn’t just about stopping bad guys getting in where they shouldn’t. Think about what kind of data you have, and what you’re doing with it. Make sure that you’re not logging people’s private data inappropriately, or passing it across the network in a way you shouldn’t. Even if the site you’re working on doesn’t have anything as sensitive as the Panama papers, you have a legal, professional, and personal responsibility to make sure that you’re handling data properly.

Performance

When we’re considering code changes, we should always think about what impact they will have on the end user, not least in terms of how quickly a site will load. As Google recently reminded us, page load speed is vital for user engagement. Slow, bloated websites cost money, both in terms of mobile data charges and lost revenue.

Does the change break caching?

Most Drupal performance strategies will talk about the value of caching. The aim of the game is to reduce the amount of work that your web server does. Ideally, the web server won’t do any work for a page request from an anonymous user - the whole thing will be handled by a reverse proxy cache, such as Varnish. If the request needs to go to the web server, we want as much of the page as possible to be served from an object cache such as Redis or Memcached, to minimise the number of database queries needed to render the page.

Are there any unnecessary uses of $_SESSION?

Typically, reverse proxy servers like Varnish will not cache pages for authenticated users. If the browser has a session, the request won’t be served by Varnish, but by the web server.

Here’s an illustration of why this is so important. This graph shows the difference in response time on a load test environment following a deployment that included some code to create sessions. There were some other changes that impacted performance, but this was the big one. As you can see, overall response time increased six-fold, with the biggest increase in the time spent by the web server processing PHP (the blue sections on the graphs), mainly because a few lines of code creating sessions had slipped through the net.

Graph showing dramatic increase in PHP evaluation time

Are there any inefficient loops?

The developers’ maxims “Don’t Repeat Yourself” and “Keep It Simple Stupid” apply to servers as well. If the server is doing work to render a page, we don’t want that work to be repeated or overly complex.

What’s the front end performance impact?

There’s no substitute for actually testing, but there are a few things that you can keep an eye out for when reviewing changes. Does the change introduce any additional HTTP requests? Perhaps they could be avoided by using sprites or icon fonts. Have any images been optimised? Are you making any repeated DOM queries?

Accessibility

Even if you’re not an expert on accessibility, and don’t know ARIA roles, you can at least bear in mind a few general pointers. When it comes to testing, there’s a good checklist from the Accessibility Project, but here are some things I always try to think about when reviewing a pull request.

Will it work on a keyboard / screen reader / other input or output device?

Doing proper accessibility testing is difficult, and you may not have access to assistive technology, but a good rule of thumb is that if you can navigate using only a keyboard, it will probably work for someone using one of the myriad input devices. Testing is the only way to be certain, but here are a couple of simple things to remember when reviewing CSS changes: hover and focus should usually go together, and you should almost never use outline: none;.
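As an example, those two CSS rules of thumb look like this in practice (the selectors are illustrative):

```css
/* Keyboard users (focus) should get the same cue as mouse users (hover). */
a:hover,
a:focus {
  text-decoration: underline;
}

/* If the default focus ring has to change, restyle it rather than
   removing it with outline: none. */
button:focus {
  outline: 2px solid currentColor;
  outline-offset: 2px;
}
```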

Are you hiding content appropriately?

One piece of low-hanging fruit is to make sure that text is available to screen readers and other assistive technology. Any time I see display: none; in a pull request, alarm bells start ringing. It’s usually not the right way to hide content.
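When text should be invisible on screen but still announced, a common alternative is a “visually hidden” utility class; Drupal 8 core ships one much like this sketch. Unlike display: none, it keeps the element in the accessibility tree:

```css
.visually-hidden {
  position: absolute;
  width: 1px;
  height: 1px;
  margin: -1px;
  padding: 0;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  border: 0;
}
```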

Maintainability

Hopefully the system you’re working on will last for a long time. People will have to work on it in the future. You should try to make life easier for those people, not least because you’ll probably be one of them.

Reinventing the wheel

Are you writing more code than you need to? It may well be that the problem you’re looking at has already been solved, and one of the great things about open source is that you’re able to recruit an army of developers and testers you may never meet. Is there already a module for that?

On the other hand, even if there is an existing module, it might not always make sense to use it. Perhaps the contributed module provides more flexibility than our project will ever need, at a performance cost. Maybe it gives us 90% of what we want, but would force us to do things in a certain way that would make it difficult to get the final 10%. Perhaps it isn’t in a very healthy state - if so, perhaps you could fix it up and contribute your fixes back to the community, as I did on a recent project.

If you’re writing a custom module to solve a very specific problem, could it be made more generic and contributed to the community? A couple of examples of this from the Capgemini team are Stomp and Route.

One of the jobs of the code reviewer is to help draw the appropriate line between the generic and the specific. If you’re reviewing custom code, think about whether there’s prior art. If the pull request includes community-contributed code, you should still review it. Don’t assume that it’s perfect, just because someone’s given it away for nothing.

Appropriate API usage

Is your team using your chosen frameworks as they were intended? If you see someone writing a custom function to solve a problem that’s already been solved, maybe you need to share a link to the API docs for the existing solution.

Introducing notices and errors

If your logs are littered with notices about undefined variables or array indexes, not only are you likely to be suffering a performance hit from the logging, but it’s much harder to separate the signal from the noise when you’re trying to investigate something.

Browser support

Remember that sometimes, it’s good to be boring. As a reviewer, one of your jobs is to stop your colleagues from getting carried away with shiny new features like ES6, or CSS variables. Tools like Can I Use are really useful in being able to check what’s going to work in the browsers that you care about.

Code smells

Sometimes, code seems wrong. As I learned from Larry Garfield’s excellent presentation on code smells at the first Drupalcon I went to, code smells are indications of things that might be a deeper problem. Rather than re-hash the points Larry made, I’d recommend reading his slides, but it is worth highlighting some of the anti-patterns he discusses.

Functions or objects that do more than one thing

A function should have a function. Not two functions, or three. If an appropriate comment or function name includes “and”, it’s a sign you should be splitting the function up.
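As a hedged sketch of what that split might look like (the function and field names here are made up for illustration):

```php
<?php

// Smell: the "and" in the name betrays two responsibilities.
// function validate_and_save_order(array $order) { ... }

// Better: one job per function, composed by the caller.
function validate_order(array $order) {
  // Validation only: return a list of error strings.
  $errors = [];
  if (empty($order['items'])) {
    $errors[] = 'Order has no items.';
  }
  return $errors;
}

function save_order(array $order) {
  // Persistence only (stubbed here).
  return $order + ['saved' => TRUE];
}

$order = ['items' => ['widget']];
if (!validate_order($order)) {
  $order = save_order($order);
}
```

Each piece is now short, singly purposed, and independently testable.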

Functions that sometimes do different things

Another bad sign is the word “or” in the comment. Functions should always do the same thing.

Excessive complexity

Long functions are usually a sign that you might want to think about refactoring. They tend to be an indicator that the code is more complex than it needs to be. The level of complexity can be measured, but you don’t need a tool to tell you that if a function doesn’t fit on a screen, it’ll be difficult to debug.

Not being testable

Even if functions are simple enough to write tests for, do they depend on a whole system? In other words, can they be genuinely unit tested?

Lack of documentation

There’s more to be said on the subject of code comments than I can go into here, but suffice to say code should have useful, meaningful comments to help future maintainers understand it.

Tight coupling

Modules should be modular. If two parts of a system need to interact, they should have a clearly defined and documented interface.

Impurity

Side effects and global variables should generally be avoided.

Sensible naming

Is the purpose of a function or variable obvious from the name? I don’t want to rehash old jokes, but naming things is difficult, and it is important.

Commented-out code

Why would you comment out lines of code? If you don’t need it, delete it. The beauty of version control is that you can go back in time to see what code used to be there. As long as you write a good commit message, it’ll be easy enough to find. If you think that you might need it later, put it behind a feature toggle so that the functionality can be enabled without a code release.
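In Drupal 7 terms, a simple toggle might use a variable rather than a comment (the module and variable names here are hypothetical; variable_get() is the real Drupal 7 API):

```php
// Instead of commenting out the call, guard it with a toggle that can
// be flipped without a code release, e.g. via drush vset.
if (variable_get('mymodule_enable_legacy_banner', FALSE)) {
  mymodule_render_legacy_banner();
}
```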

Specificity

In CSS, IDs and !important are the big code smells for me. They’re a bad sign that a specificity arms race has begun. Even if you aren’t going to go all the way with a system like BEM or SMACSS, it’s a good idea to keep specificity as low as possible. The excellent articles on CSS specificity by Harry Roberts and Chris Coyier are good starting points for learning more.
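A quick before-and-after sketch (selectors invented for illustration):

```css
/* High specificity: an ID plus qualified classes. The only way to
   override this later is an even longer selector or !important. */
#sidebar div.promo a.button {
  color: red;
}

/* Low specificity: a single class does the same job and stays easy
   to override. */
.promo-button {
  color: red;
}
```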

Standards

It’s important to follow coding standards. The point of this isn’t to get some imaginary Scout badge - code that follows standards is easier to read, which makes it easier to understand, and by extension easier to maintain. In addition, if you have your IDE set up right, it can warn you of possible problems, but those warnings will only be manageable if you keep your code clean.

Deployability

Will your changes be available in environments built by Continuous Integration? Do you need to set default values of variables which may need overriding for different environments? Just as your functions should be testable, so should your configuration changes. As far as possible, aim to make everything repeatable and automatable - if a release needs any manual changes it’s a sign that your team may need to be thinking with more of a DevOps mindset.
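In Drupal 7 terms, that might mean shipping configuration changes in an update hook so every environment picks them up automatically on deploy (the module, variable name and update number are illustrative; variable_set() and hook_update_N() are the real APIs):

```php
/**
 * Enable the new welcome banner by default.
 */
function mymodule_update_7101() {
  variable_set('mymodule_show_welcome_banner', TRUE);
}
```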

Keep Your Eyes On The Prize

With all this talk of coding style and standards, don’t get distracted by trivialities. It is worth caring about things like whitespace and variable naming, but remember that it’s much more important to think about whether the code actually does what it is supposed to. The trouble is that our eyes tend to fixate on those sorts of things, and they cause unnecessary cognitive load.

Pre-commit hooks can help to catch coding standards violations so that reviewers don’t need to waste their time commenting on them. If you’re on a big project, it will almost certainly be worth investing some time in integrating your CI server and your code review tool, and automating checks for issues like code style, unit tests, mess detection - in short, all the things that a computer is better at spotting than humans are.

Does the code actually solve the problem you want it to? Rather than just looking at the code, spend a couple of minutes reading the ticket that it is associated with - has the developer understood the requirements properly? Have they approached the issue appropriately? If you’re not sure about the change, check out the branch locally and test it in your development environment.

Even if there’s nothing wrong with the suggested change, maybe there’s a better way of doing it. The whole point of code review is to share the benefit of the team’s various experiences, get extra eyes on the problem, and hopefully make the end product better.

I hope that this has been useful for you, and if there’s anything you think I’ve missed, please let me know via the comments.

Oct 26 2016

One of my biggest pet peeves in Drupal 7 is that there’s no out-of-the-box way to create menu links with empty titles. As a result, it can be difficult to create stylized links, such as icons or background images. After many frustrating sessions, I finally sat down to find a way to make this happen. I had begun to think it was impossible, since I couldn’t find an existing solution that did exactly what I needed. However, empty menu link titles in Drupal 7 are absolutely possible with just one little snippet. Have no fear, theme_menu_link to the rescue!

Using <none> to Create Drupal 7 Empty Menu Link Titles

To render a menu link title empty, enter <none> as the link title and add the snippet below to your theme’s template.php file:

/**
 * Implements theme_menu_link().
 *
 * @link https://api.drupal.org/api/drupal/includes!menu.inc/function/theme_menu_link/7.x
 */
function your_theme_menu_link($vars) {
  $element = $vars['element'];
  $sub_menu = '';

  if ($element['#below']) {
    $sub_menu = drupal_render($element['#below']);
  }

  if ('<none>' === $element['#title']) {
    $element['#title'] = '';
  }

  $output = l($element['#title'], $element['#href'], $element['#localized_options']);

  return '<li' . drupal_attributes($element['#attributes']) . '>' . $output . $sub_menu . "</li>\n";
}

theme_menu_link() returns the HTML for menu and submenu links, so overriding it lets us alter the menu’s output. In the snippet above, we check whether the menu link title has been set to <none>; if it has, we remove the title text while leaving the link itself intact.

Have you added the snippet and yet not seen anything change? It’s probably because Drupal hasn’t registered the new hook in your code yet. To fix this, clear your cache so that Drupal picks up the changes you’ve made. Most of the time, this resolves the issue quickly, rather than you spending hours trying to debug something that truly isn’t broken.

You can also use this hook to alter other menu link attributes. For instance, if you wanted to avoid using the Menu attributes module you’re able to use this hook to add or remove classes.
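For instance, inside the same override and before the call to l(), something like this (the class names are made up) adds a class to both the <li> wrapper and the <a> tag:

```php
// Class on the <li> wrapper.
$element['#attributes']['class'][] = 'menu-item--themed';

// Class on the <a> tag itself, via its localized options.
$element['#localized_options']['attributes']['class'][] = 'menu-link--themed';

$output = l($element['#title'], $element['#href'], $element['#localized_options']);
```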

For more information on theme_menu_link, see https://api.drupal.org/api/drupal/includes!menu.inc/function/theme_menu_link/7.x.

Is there a Drupal module that can handle this?

Of course there is! Icon API accomplishes this, along with some extra features you may find useful. I haven’t used it personally, though, and I prefer to stay away from modules that don’t produce production releases. If you have created or know of a module that can handle this, feel free to shoot me a line in the comments below and I’ll gladly take a look at it to potentially include in this post as a footnote. In conclusion, don’t ever believe something is impossible, because you’re often just a few days of banging your head against the wall away from a breakthrough!


Author: Ben Marshall

Red Bull Addict, Self-Proclaimed Grill Master, Entrepreneur, Workaholic, Front End Engineer, SEO/SM Strategist, Web Developer, Blogger

Sep 18 2016

If you’re migrating from a different CMS platform, the advantages of Drupal 8 seem fairly clear. But what if you’re already on Drupal? There has been a lot of discussion in the Drupal community lately about upgrading to Drupal 8. When is the right time? Now that the contributed module landscape is looking pretty healthy, there aren’t many cases where I’d recommend going with Drupal 7 for a new project. However, as I’ve previously discussed on this blog, greenfield projects are fairly rare.

Future proofing

One of the strengths of an open source project like Drupal is the level of support from the community. Other people are testing your software, and helping to fix bugs that you might not have noticed. Drupal 7 will continue to be supported until Drupal 9 is released, which should be a while away yet. However, if your site is on Drupal 6, there are security implications of remaining on an unsupported version, and it would be wise to make plans to upgrade sooner rather than later, even with the option of long term support. While the level of support from the community will no longer be the same, sites built on older versions of Drupal won’t suddenly stop working, and there are still some Drupal 5 sites out there in the wild.

Technical debt

Most big systems could do with some refactoring. There’s always some code that people aren’t proud of, some decisions that were made under the pressure of a tight deadline, or just more modern ways of doing things.

An upgrade is a great opportunity to start with a blank piece of paper. Architectural decisions can be revisited, and Drupal 8’s improved APIs are ideal if you’re hoping to take a more microservices-oriented approach, rather than ending up with another MySQL monolith.

Drupal’s policy of backward incompatibility means that while you’re upgrading the CMS, you have the chance to refactor and improve the existing custom codebase (but don’t be suckered in by the tempting fallacy that you’ll be able to do a perfect refactoring).

There are no small changes

Don’t underestimate how big a job upgrading will be. At the very least, every custom module in the codebase will need to be rewritten for Drupal 8, and custom themes will need to be rebuilt using the Twig templating system. In a few cases, this will be a relatively trivial job, but the changes in Drupal 8 may mean that some modules will need to be rebuilt from the ground up. It isn’t just about development - you’ll need to factor in the time it will take to define requirements, not to mention testing and deployment. If it’s a big project, you may also need to juggle the maintenance of the existing codebase for some time, while working on the new version.

The sites that we tend to deal with at Capgemini are big. We work with large companies with complex requirements, a lot of third party integrations, and high traffic. In other words, it’s not just your standard brochureware, so we tend to have a lot of custom modules.

If it ain’t broke, don’t fix it

Given the fact that an upgrade is non-trivial, the question has to be asked - what business value will an upgrade bring? If all you’re doing is replacing a Drupal 7 site with a similar Drupal 8 site, is it really a good idea to spend a lot of time and money to build something that is identical, as far as the average end user can tell?

If the development team is focused on upgrading, will there be any bandwidth for bug fixes and improvements? An upgrade will almost certainly be a big investment - maybe that time, energy and money would be better spent on new features or incremental improvements that will bring tangible business value and can be delivered relatively quickly. Besides, some of the improvements in Drupal 8 core, such as improved authoring experience, are also available in the Drupal 7 contrib ecosystem.

On the other hand, it might make more sense to get the upgrade done now, and build those improvements on top of Drupal 8, especially if your existing codebase needs some TLC.

Another option (which we’ve done in the past for an upgrade from Drupal 6 to 7) is to incrementally upgrade the site, releasing parts of the new site as and when they’re ready.

The right approach depends on a range of factors, including how valuable your proposed improvements will be, how urgent they are, and how long an upgrade will take, which depends on how complex the site is.

The upside of an upgrade

Having said all of that, the reasons to upgrade to Drupal 8 are compelling. One big plus for Drupal 8 is the possibility of improved performance, especially for authenticated users, thanks to modern features like BigPipe. The improved authoring experience, accessibility and multilingual features that Drupal 8 brings will be especially valuable for larger organisations.

Not only that, but improving the developer experience (DX) was a big part of the community initiatives in building Drupal 8. Adopting Symfony components, migrating code to object-oriented structures, improving the APIs and introducing a brand new configuration management system are all designed to improve developer productivity and code quality, after the initial learning curve. These improvements will encourage more of an engineering mindset, and drive modern development approaches. The net benefit will be more testable (and therefore more reliable) features, easier deployment and maintenance, and increased speed of future change.

Decision time

There is no one-size-fits-all answer. Your organisation will need to consider its own situation and needs.

Where does upgrading the CMS version fit into the organisation’s wider digital roadmap? Is there a site redesign on the cards any time soon? What improvements are you hoping to make? What functionality are you looking to add? Does your site’s existing content strategy meet your needs? Is the solution architecture fit for your current and future purposes, or would it make sense to think about going headless?

In summary, while an upgrade will be a big investment, it may well be one that is worth making, especially if you’re planning major changes to your site in the near future.

If the requirements for your upgrade project are “build us the same as what we’ve got already, but with more modern technology” then it’s probably not going to be worth doing. Don’t upgrade to Drupal 8 just because it’s new and shiny. However, if you’re looking further forward and planning to build a solid foundation for future improvements then an upgrade could be a very valuable investment.

Aug 09 2016
Screenshot of Behat tests results

If automated testing is not already part of your development workflow, then it’s time to get started. Testing helps reduce uncertainty by ensuring that new features you add to your application do not break older features. Being confident that you’re not breaking existing functionality means bugs are caught earlier, reducing time spent hunting them down or fielding reports from clients.

Unfortunately, testing still does not get the time and attention it needs when you’re under pressure to make a deadline or release a feature your clients have been asking for. But—like using a version control system and having proper development, staging, and production environments—it should be a routine part of how you do your work. We are professionals, after all. After reading all the theory, I only recently took the plunge myself. In this post, I’ll show you how to use Behat to test that your Drupal site is working properly.

Before we dive in, the Behat documentation describes the project as:

[…] an open source Behavior Driven Development framework for PHP 5.3+. What’s behavior driven development, you ask? It’s a way to develop software through a constant communication with stakeholders in form of examples; examples of how this software should help them, and you, to achieve your goals.

Basically, it helps developers, clients, and others communicate and document how an application should behave. We’ll see shortly how Behat tests are very easy to read and how you can extend them for your own needs.

Mink is an extension that allows testing a web site by simulating interacting with it through a browser to fill out form fields, click on links, and so forth. Mink lets you test via Goutte, which makes requests and parses the contents but can’t execute JavaScript. It can also use Selenium, which controls a real browser and can thus test JS and Ajax interactions, but Selenium requires more configuration.

Requirements

To get started, you’ll need to have Composer on your machine. If you don’t already, head over to the Composer Website. Once installed, you can add Behat, Mink, and Mink drivers to your project by running the following in your project root:

composer require behat/behat
composer require behat/mink
composer require behat/mink-selenium2-driver
composer require behat/mink-extension

Once everything runs, your composer.json file will contain:

    "require": {
        "behat/behat": "^3.1",
        "behat/mink": "^1.7",
        "behat/mink-selenium2-driver": "^1.3",
        "behat/mink-extension": "^2.2"
    },

This will download Behat and its dependencies into your vendor/ folder. To check that it works, run:

vendor/bin/behat -V

There are other ways to install Behat, outlined in the quick introduction.

The Drupal community has a contrib project, Behat Drupal Extension, that integrates Behat, Mink, and Drupal. You can install it with the require command below. I had to specify the ~3.0 version, otherwise Composer couldn’t satisfy the dependencies.

composer require drupal/drupal-extension:~3.0

And you’ll have the following in your composer.json:

       "drupal/drupal-extension": "~3.0",

Configuring Behat

When you run Behat, it looks for a file named behat.yml. Like Drupal 8, Behat uses YAML for configuration. The file tells Behat which contexts to use and configures the web drivers for Mink. You can also configure a region_map, which the Drupal extension uses to map identifiers (left of the colon) to CSS selectors for theme regions. These come in very handy when testing Drupal theme output.

The one I use looks like:

default:
  suites:
    default:
      contexts:
        - Drupal\DrupalExtension\Context\DrupalContext
        - Drupal\DrupalExtension\Context\MarkupContext
        - Drupal\DrupalExtension\Context\MessageContext
        - FeatureContext
  extensions:
    Behat\MinkExtension:
      goutte: ~
      javascript_session: selenium2
      selenium2:
        wd_host: http://local.dev:4444/wd/hub
        capabilities: {"browser": "firefox", "version": "44"}
      base_url: http://local.dev
    Drupal\DrupalExtension:
      blackbox: ~
      region_map:
        breadcrumb: '#breadcrumb'
        branding: '#region-branding'
        branding_second: '#region-branding-second'
        content: '#region-content'
        content_zone: '#zone-content'
        footer_first: '#region-footer-first'
        footer_second: '#region-footer-second'
        footer_fourth: '#region-footer-fourth'
        menu: '#region-menu'
        page_bottom: '#region-page-bottom'
        page_top: '#region-page-top'
        sidebar_first: '#region-sidebar-first'
        sidebar_second: '#region-sidebar-second'

Writing a Simple Feature

Now comes the fun part. Let’s look at writing a feature and how to test that what we expect is on the page. The first time we run it, we need to initialize Behat to generate a FeatureContext class. Do so with:

vendor/bin/behat --init

That should also create a features/ directory, where we will save the features that we write. To Behat, a feature is a test suite. Each test in a feature evaluates specific functionality on your site. A feature is a text file that ends in .feature. You can have more than one: for example, you might have a blog.feature, members.feature, and resources.feature if your site has those areas available.

Of course, don’t confuse what Behat calls a feature—a set of tests—with the Features module that bundles and exports related functionality into a Drupal module.

For my current project, I created a global.feature file that checks if the blocks I expect to have in my header and footer are present. The contents of that file are:

Feature: Global Elements

  Scenario: Homepage Contact Us Link
    Given I am on the homepage
    Then I should see the link "Contact Us" in the "branding_second" region
    Then I should see the "Search" button in the "branding_second" region
    Then I should see the "div#block-system-main-menu" element in the "menu" region

As you can see, the tests are very readable, even though Behat isn’t parsing pure natural language. Indents help organize scenarios (groups of tests) and the conditions needed for each scenario to pass.

You can set up some conditions for the test, starting with “Given”. In this case, given that we’re on the homepage. The Drupal Extension adds ways to specify that you are a specific user, or have a specific role, and more.

Next, we list what we expect to see on the web page. You can also tell Behat to interact with the page by specifying a link to click, a form field to fill out, or a button to press. Again, the Drupal extension (by extending the MinkExtension) provides ways to test whether a link or button is in one of our configured regions. The third test above uses a CSS selector, as in jQuery, to check that the main menu block is in the menu region.
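For example, MinkContext also ships with form-interaction steps, so a scenario like this works out of the box (the path, field label and messages here are invented):

```gherkin
  Scenario: Newsletter signup
    Given I am on "/newsletter"
    When I fill in "Email" with "test@example.com"
    And I press "Subscribe"
    Then I should see "Thank you for subscribing"
```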

Testing user authentication

If you’re testing a site that is not local, you can use the Drush API driver to test user authentication, node creation, and more. First, set up a Drush alias for your site (in this example, I’m using local.dev). Then add the following to your behat.yml:

      api_driver: 'drush'
      drush:
        alias: "local.dev"

You can then create a scenario to test that user logins work, without having to specify a test username or password, by tagging it with @api:

  @api
  Scenario: Admin login
    Given I am on the homepage
    Given I am logged in as a user with the "admin" role
    Then I should see the heading "Welcome" in the "content" region

If you’ve customized the username text for login, your test will fail. Don’t worry! Just add the following to your behat.yml file so that the test knows what text to look for. In this case, the username field label is just E-mail.

      text:
        username_field: "E-mail"

Custom Testing by Extending Contexts

When you initialized Behat, it created a features/bootstraps/FeatureContext.php file. This can be a handy class for writing custom tests for unique features on your site. You can add custom tests by using the Drupal Extension’s own sub-contexts. I changed my Feature Context to extend the Mink Context like this:

class FeatureContext extends MinkContext implements SnippetAcceptingContext {

Note that if you do that, you’ll need to remove MinkContext from the explicit list of default contexts in behat.yml.

No matter how you organize them, you can then write custom tests as methods. For example, the following will test that a link appears in the breadcrumb trail of a page. You can use CSS selectors to find items on the page, such as the ‘#breadcrumb’ div in a theme. You can also re-use other tests defined by the MinkContext like findLink.

/**
 * @Then I should see the breadcrumb link :arg1
 */
public function iShouldSeeTheBreadcrumbLink($arg1)
{
  // Get the breadcrumb region of the page.
  /** @var \Behat\Mink\Element\NodeElement $breadcrumb */
  $breadcrumb = $this->getSession()->getPage()->find('css', 'div#breadcrumb');

  // findLink() matches link text, but not URLs.
  $link = $breadcrumb->findLink($arg1);
  if ($link) {
    return;
  }

  // Fall back to matching the link by its URL.
  $link = $breadcrumb->findAll('css', "a[href=\"{$arg1}\"]");
  if ($link) {
    return;
  }

  throw new \Exception(
    sprintf('Expected link %s not found in breadcrumb on page %s',
      $arg1,
      $this->getSession()->getCurrentUrl()
    )
  );
}

If your context implements SnippetAcceptingContext, Behat will generate the docblock and method signature when it encounters an unknown test. If your feature has the following:

    Then I should see "foo-logo.png" as the header logo.

When you run your tests, Behat will output the snippet below, which you can copy and paste into your context. Anything in quotes becomes a parameter. The docblock contains the annotation Behat uses to find your test when it’s used in a scenario.

/**
* @Then I should see :arg1 as the header logo.
*/
public function iShouldSeeAsTheHeaderLogo($arg1)
{
   throw new PendingException();
}

Selenium

Follow the Mink documentation to install Selenium: http://mink.behat.org/en/latest/drivers/selenium2.html. When you’re testing, you’ll need to have it running via:

java -jar /path/to/selenium-server-standalone-2.53.0.jar 

To tell Behat how to use selenium your behat.yml file should have:

      selenium2:
        wd_host: http://local.dev:4444/wd/hub
        capabilities: {"browser": "firefox"}

You’ll also need to have Firefox installed. Of course, at the time of this writing, Firefox is asking people to transition from WebDriver to Marionette for automating browser usage. I have Firefox 47 and it’s still working with WebDriver as far as I can tell, but I have not found clear, concise instructions for using Marionette with Selenium. Another option is to use PhantomJS instead of Selenium for features that need JavaScript support.

Once everything is working—you’ll know it locally because a Firefox instance will pop up—you can create a scenario like the following one. Use the @javascript tag to tell Behat to use Selenium to test it.

  @javascript
  Scenario: Modal Popup on click
    Given I am at "/some/page"
    When I click "View More Details"
    Then I wait for AJAX to finish
    Then I should see an "#modal-content" element
    Then I should see text matching "This is a Modal"

Conclusion

If you don’t have tests for your site, I urge you to push for adding them as part of your ongoing work. I’ve slowly added them to my main Drupal client project over the last few months and it’s really started to pay off. For one, I’ve captured many requirements and expectations about how pages on the site work that were previously only in my head or the project manager’s, if not lost in a closed ticket somewhere. Second, I can run the tests whenever I merge new work in, and before any deploy. If they are all green, I can be confident that new code and bug fixes haven’t caused a regression. At the same time, I now have a way to test the site that makes it less risky to refactor or reorganize code. I didn’t spend a lot of time building tests; as I work on a new feature or fix a bug, writing a test is now just part of confirming that everything works as expected. For complicated features, it’s also become a time saver to have a test that automates complicated interactions, like testing a three-page web form, since Behat can run that scenario much faster than I can manually.

The benefits from investing in automated testing outweigh any initial cost in time and effort to set them up. What are you waiting for?

Jun 23 2016

These days, it’s pretty rare that we build websites that aren’t some kind of redesign. Unless it’s a brand new company or project, the client usually has some sort of web presence already, and for one reason or another, they’ve decided to replace it with something shiny and new.

In an ideal world, the existing system has been built in a sensible way, with a sound content strategy and good separation of concerns, so all you need to do is re-skin it. In the Drupal world, this would normally mean a new theme, or if we’re still in our dream Zen Garden scenario, just some new CSS.

However, the reality is usually different. In my experience, redesigns are hardly ever just redesigns. When a business is considering significant changes to the website like some form of re-branding or refresh, it’s also an opportunity to think about changing the content, or the information architecture, or some aspects of the website functionality. After all, if you’re spending time and money changing how your website looks, you might as well try to improve the way it works while you’re at it.

So the chances are that your redesign project will need to change more than just the theme, but if you’re unlucky, someone somewhere further back along the chain has decided that it’s ‘just a re-skinning’, and therefore it should be a trivial job, which shouldn’t take long. In the worst case scenario, someone has given the client the impression that the site just needs a new coat of paint, but you’re actually inheriting some kind of nasty mess with unstable foundations that should really be fixed before you even think about changing how it looks. Incidentally, this is one reason why sales people should always consult with technical people who’ve seen under the bonnet of the system in question before agreeing prices on anything.

Even if the redesign is relatively straightforward from a technical point of view, perhaps it’s part of a wider rebranding, and there are associated campaigns whose dates are already expensively fixed, but thinking about the size of the website redesign project happened too late.

In other words, for whatever reason, it’s not unlikely that redesign projects will find themselves behind schedule, or over budget - what should you do in this situation? The received agile wisdom is that time and resources are fixed, so you need to flex on scope. But what’s the minimum viable product for a redesign? When you’ve got an existing product, how much of it do you need to rework before you put the new design live?

This is a question that I’m currently considering from a couple of angles. In the case of one of my personal projects, I’m upgrading an art gallery listings site from Drupal 6 to Drupal 8. The old site is the first big Drupal site I built, and is looking a little creaky in places. The design isn’t responsive, and the content editing experience leaves something to be desired. However, some of the contributed modules don’t have Drupal 8 versions yet, and I won’t have time to do the work involved to help get those modules ready, on top of the content migration, the new theme, having a full-time job and a family life, and all the rest of it.

In my day job, I’m working with a large multinational client on a set of sites where there’s no Drupal upgrade involved, but the suggested design does include some functional changes, so it isn’t just a re-theming. The difficulty here is that the client wants a broader scope of change than the timescales and budget allow.

When you’re in this situation, what can you do? As usual with interesting questions, the answer is ‘it depends’. Processes like impact mapping can help you to figure out the benefits that you get from your redesign. If you’ve looked at your burndown rates, and know that you’re not going to hit the deadline, what can you drop? Is the value that you gain from your redesign worth ditching any of the features that won’t be ready? To put it another way, how many of your existing features are worth keeping? A redesign can (and should) be an opportunity for a business to look at their content strategy and consider rationalising the site. If you’ve got a section on your site that isn’t adding any value, or isn’t getting any traffic, and the development team will need to spend time making it work in the new design, perhaps that’s a candidate for the chop?

We should also consider the Pareto principle when we’re structuring our development work, and start by working on the things that will get us most of the way there. This fits in with an important point made by scrum, which can sometimes get forgotten about: that each sprint should deliver “a potentially shippable increment”. In this context, I would interpret this to mean that we should make sure that the site as a whole doesn’t look broken, and then we can layer on the fancy bits afterwards, similar to a progressive enhancement approach to dealing with older browsers. If you aren’t sure whether you’ll have time to get everything done, don’t spend an excessive amount of time polishing one section of the site to the detriment of basic layout and styling that will make the whole site look reasonably good.

Starting with a style guide can help give you a solid foundation to build upon, by enabling you to make sure that all the components on the site look presentable. You can then test those components in their real contexts. If you’ve done any kind of content audit (and somebody really should have done), you should have a good idea of the variety of pages you’ve got. At the very least, your CMS should help you to know what types of content you have, so that you can take a sample set of pages of each content type or layout type, and you’ll be able to validate that they look good enough, whatever that means in your context.

There is another option, though. You don’t have to deliver all the change at once. Can you (and should you) do a partial go-live with a redesign? Depending on how radical the redesign is, the attitudes to change and continuous delivery within your organisation and client, and the technology stack involved, it may make sense to deliver changes incrementally. In other words, put the new sections of the site live as they’re ready, and keep serving the old bits from the existing system. There may be brand consistency, user experience, and content management process reasons why you might not want to do this, but it is an option to consider, and it can work.

On one previous project, we were carrying out a simultaneous redesign and Drupal 6 to 7 upgrade, and we were able to split traffic between the old site and the new one. It made things a little bit more complicated in terms of handling user sessions, but it did give the client the freedom to decide when they thought we had enough of the new site for them to put it live. In the end, they decided that the answer was ‘almost all of it’.

So what’s the way forward?

In the case of my art gallery listings site, the redesign itself has a clear value, and with Drupal 6 being unsupported, I need to get the site onto Drupal 8 sooner rather than later. There’s definitely a point that will come fairly soon, even if I don’t get to spend as long as I’d like working on it, where the user experience will be improved by the new site, even though some of the functionality from the old site isn’t there, and isn’t likely to be ready for a while. I’m my own client on that project, so I’m tempted to just put the redesign live anyway.

In the case of my client, there are decisions to be made about which of the new features need to be included in the redesign. De-scoping some of the more complex changes will bring the project back into the realm of being a re-theming, the functional changes can go into subsequent releases, and hopefully we’ll hit the deadline.

A final point that I’d like to make is that we shouldn’t fall into the trap of thinking of redesigns as big-bang events that sit outside the day-to-day running of a site. Similarly, if you’re thinking about painting your house, you should think about whether you also need to fix the roof, and when you’re going to schedule the cleaning. Once the painting is done, you’ll still be living there, and you’ll have the opportunity to do other jobs if and when you have the time, energy, and money to do so.

Along with software upgrades, redesigns should be considered as part of a business’s long-term strategy, and they should be just one part of a plan to keep making improvements through continuous delivery.

May 16 2016

With the release of Drupal 8 comes a new CLI for generating boilerplate code and for interacting with and debugging Drupal (for earlier versions of Drupal, see drush-related coder module tools). The transition from drush to drupal can be a little frustrating while you learn the new commands. To help ease the stress, I’ve put together a list of the most common Drupal 8 Console commands.

Drupal 8 Console Commands

Drupal 8 Console has been designed to facilitate Drupal 8 adoption while making development and interaction more efficient and enjoyable. The cheat sheet below will help you hit the ground running with Drupal Console.

Drupal 8 Console: Cache Commands

# Rebuild and clear all site caches.
$ drupal cache:rebuild

# Rebuilds all caches
$ drupal cr all

# Rebuild discovery cache
$ drupal cr discovery

Drupal 8 Console: Module Commands

# Display current modules available for application
$ drupal module:debug

# Download module or modules in application
$ drupal module:download

# Install module or modules in the application
$ drupal module:install

# Uninstall module or modules in the application
$ drupal module:uninstall

Drupal 8 Console: Image Commands

# List image styles on the site
$ drupal image:styles:debug

# Execute flush function by image style or execute all flush images styles
$ drupal image:styles:flush

Drupal 8 Console: User Commands

# Displays current users for the application
$ drupal user:debug

# Delete users for the application
$ drupal user:delete

# Clear failed login attempts for an account.
$ drupal user:login:clear:attempts

# Returns a one-time user login url.
$ drupal user:login:url

# Generate a hash from a plaintext password.
$ drupal user:password:hash

# Reset password for a specific user.
$ drupal user:password:reset

Drupal 8 Console: Generate Commands

# Generate an Authentication Provider
$ drupal generate:authentication:provider

# Generate commands for the console.
$ drupal generate:command

# Generate & Register a controller
$ drupal generate:controller

# Generate the DrupalConsole.docset package for Dash
$ drupal generate:doc:dash

# Generate documentation for commands in data format
$ drupal generate:doc:data

# Generate documentations for Commands
$ drupal generate:doc:gitbook

# Generate a new content type (node / entity bundle)
$ drupal generate:entity:bundle

# Generate a new config entity
$ drupal generate:entity:config

# Generate a new content entity
$ drupal generate:entity:content

# Generate an event subscriber
$ drupal generate:event:subscriber

# Generate a new "FormBase"
$ drupal generate:form

# Generate an implementation of hook_form_alter() or hook_form_FORM_ID_alter
$ drupal generate:form:alter

# Generate a new "ConfigFormBase"
$ drupal generate:form:config

# Generate a module.
$ drupal generate:module

# Generate module permissions
$ drupal generate:permissions

# Generate a plugin block
$ drupal generate:plugin:block

# Generate CKEditor button plugin.
$ drupal generate:plugin:ckeditorbutton

# Generate a plugin condition.
$ drupal generate:plugin:condition

# Generate field type, widget and formatter plugins.
$ drupal generate:plugin:field

# Generate field formatter plugin.
$ drupal generate:plugin:fieldformatter

# Generate field type plugin.
$ drupal generate:plugin:fieldtype

# Generate field widget plugin.
$ drupal generate:plugin:fieldwidget

# Generate image effect plugin.
$ drupal generate:plugin:imageeffect

# Generate image formatter plugin.
$ drupal generate:plugin:imageformatter

# Generate a plugin mail
$ drupal generate:plugin:mail

# Generate plugin rest resource
$ drupal generate:plugin:rest:resource

# Generate a plugin rule action
$ drupal generate:plugin:rulesaction

# Generate a plugin type with annotation discovery
$ drupal generate:plugin:type:annotation

# Generate a plugin type with Yaml discovery
$ drupal generate:plugin:type:yaml

# Generate a custom plugin view field.
$ drupal generate:plugin:views:field

# Generate a profile.
$ drupal generate:profile

# Generate a RouteSubscriber
$ drupal generate:routesubscriber

# Generate service
$ drupal generate:service

# Generate a theme.
$ drupal generate:theme

Drupal 8 Console: Cron Commands

# List of modules implementing a cron
$ drupal cron:debug

# Execute cron implementations by module or execute all crons
$ drupal cron:execute

# Release cron system lock to run cron again
$ drupal cron:release

Drupal 8 Console: Theme Commands

# Displays current themes for the application
$ drupal theme:debug

# Download theme in application
$ drupal theme:download

# Install theme or themes in the application
$ drupal theme:install

# Uninstall theme or themes in the application
$ drupal theme:uninstall

Drupal 8 Console: Views Commands

# Display current views resources for the application
$ drupal views:debug

# Disable a View
$ drupal views:disable

# Enable a View
$ drupal views:enable

# Display current views plugins for the application
$ drupal views:plugins:debug

Drupal 8 Console: Translation Commands

# Clean up translation files
$ drupal translation:cleanup

# Determine pending translation string in a language or a specific file in a language
$ drupal translation:pending

# Generate translate stats
$ drupal translation:stats

# Sync translation files
$ drupal translation:sync

Drupal 8 Console: Site Commands

# List all known local and remote sites.
$ drupal site:debug

# Import/Configure an existing local Drupal project
$ drupal site:import:local

# Install a Drupal project
$ drupal site:install

# Switch site into maintenance mode
$ drupal site:maintenance

# Switch system performance configuration
$ drupal site:mode

# Create a new Drupal project
$ drupal site:new

# Show the current statistics of website.
$ drupal site:statistics

# View current Drupal Installation status
$ drupal site:status

Drupal 8 Console: Database Commands

# Launch a DB client if it's available
$ drupal database:client

# Shows DB connection
$ drupal database:connect

# Drop all tables in a given database.
$ drupal database:drop

# Dump structure and contents of a database
$ drupal database:dump

# Remove events from DBLog table, filters are available
$ drupal database:log:clear

# Display current log events for the application
$ drupal database:log:debug

# Restore structure and contents of a database.
$ drupal database:restore

# Show all tables in a given database.
$ drupal database:table:debug

Drupal 8 Console: Create Commands

# Create dummy comments for your Drupal 8 application.
$ drupal create:comments

# Create dummy nodes for your Drupal 8 application.
$ drupal create:nodes

# Create dummy terms for your Drupal 8 application.
$ drupal create:terms

# Create dummy users for your Drupal 8 application.
$ drupal create:users

# Create dummy vocabularies for your Drupal 8 application.
$ drupal create:vocabularies

Drupal 8 Console: Miscellaneous Commands

# Display basic information about Drupal Console project
$ drupal about

# Chain command execution
$ drupal chain

# System requirement checker
$ drupal check

# Displays help for a command
$ drupal help

# Copy configuration files to user home directory.
$ drupal init

# Lists all available commands
$ drupal list

# Update project to the latest version.
$ drupal self-update

# Runs PHP built-in web server
$ drupal server


Author: Ben Marshall


Apr 18 2016

In our last post we talked about how the Drupal Community is supporting Drupal 6 after its end-of-life and what that means for your Drupal 6 site.  In this post we’ll get a bit more technical and talk about what exactly you need to do to keep your website up to date.

Step #1: Getting an accurate report of your site’s modules and themes

Ever since the Drupal 6 sunset date, your website’s report of Available Updates at /admin/reports/updates has been telling you that everything is unsupported and should be uninstalled.  That’s not very helpful :-)

Drupal 6 Update module report

So the first step is to get the myDropWizard module.  This is provided free (libre) to the Drupal Community by one of the Drupal LTS vendors. The main purpose of this module is to show whether a module is supported by the Drupal 6 Long-Term Support (LTS) vendors, so you’ll know if it’ll be getting security updates going forward.

Drupal 6 MyDropWizard report

Ahhh, that’s better.

Now you can continue to manage security updates for modules and themes just like you always have.  But there are some significant gotchas.

How problems are found and fixed

When vulnerabilities are found they will be posted as issues in the Drupal 6 LTS project. Once an issue is fixed, and if the module maintainer is still maintaining the D6 version, the maintainer will simply release a new version, just like normal. But if the maintainer is no longer maintaining it, then new releases are made on GitHub. Drupal core is no longer receiving any new commits on the 6.x branch, so new releases will be made to the Pressflow project, also on GitHub.

How to obtain new versions

If new releases are scattered across Drupal.org and GitHub, the question becomes: how do we easily obtain new versions? Assuming that you run security updates with Drush, you can use the --update-backend=mydropwizard flag when calling any of Drush’s project management commands (pm-download, pm-refresh, pm-updatestatus). The MyDropWizard backend will automatically obtain the project from GitHub or Drupal.org.

New Drush

But this brings a new problem: the --update-backend flag only works with Drush 7 or later. On a server that’s running Drupal 6, chances are high that it’s using Drush 6, and also that it’s running an old version of PHP (like the 5.3.3 bundled with Debian/Ubuntu). That old PHP version means you can’t upgrade to the latest Drush 7 (7.2.0), but there’s a fix on the way for that.

Switching to Pressflow?

Since new releases are no longer made to Drupal core, only to Pressflow, this creates a bit of a dilemma for sites that are not running Pressflow. In those cases it’s probably more cost effective to manually apply patches than to move to Pressflow. There are a couple of ways to be notified of these commits.

Add-on modules

The final tricky area that we’ve found is with modules that extend the Update Status module. There are two in particular that we use often: Update Exclude Unsupported and Update Status Advanced Settings — neither of these will work anymore. Instead you’ll need to implement hook_mydropwizard_status_alter().

/**
 * Implements hook_mydropwizard_status_alter().
 */
function mymodule_security_mydropwizard_status_alter(&$projects) {

  // Projects determined to be okay for this particular site.
  $projects_deemed_okay = array(
    // This is okay because of the finglewabble.
    'foo',
    // This is okay because of hard-coded site configuration.
    'bar',
  );
  foreach ($projects_deemed_okay as $module) {
    if (isset($projects[$module])) {
      $projects[$module]['status'] = 'deemed-okay';
      $projects[$module]['extra'][] = array(
        'class' => 'deemed-okay',
        'label' => t('Deemed okay'),
        'data' => t('This project was analyzed and determined to be acceptable to run on your site.'),
      );
    }
  }
}

Conclusion

I’m sure there will be a few more hiccups along the way, but this should get you started.

Image by: Henri Bergius

Apr 18 2016

What would a website be if it couldn’t send emails, even if just for password resets? Running your own mail server is a huge hassle, so many developers instead use a third party service to send transactional emails like password resets, new user welcome messages, and order summaries. One of the most popular services, in part because of their generous free tier, is Mandrill, owned by MailChimp.

In case you missed the announcement, MailChimp is changing Mandrill into an add-on for paid MailChimp accounts, thus eliminating the generous free tier. We’re big fans of MailChimp and use its mailing list service for our own announcements (hey, why not join that list if you’re not already subscribed?), but a full MailChimp account isn’t going to be for everybody. They’ve already shut off new subscriptions, but if you’re a PHP developer who does things like put off your taxes until the last minute (American customers have three extra days this year, but that’s today), you’re probably sweating the April 27th deadline.

Many people also know Mandrill by reputation and will need options in the future. For you, we’ve put together this list of viable transactional email alternatives with PHP and major PHP application support. Joomla! and MODX support SMTP integration natively, so you’ll just need the SMTP configuration options from your chosen provider. If you want to use a provider’s web API, see the PHP options below.
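Whichever provider you pick, the SMTP route looks much the same from PHP. As a rough sketch — my own example using the PHPMailer library, which isn’t one of the services covered below; the host and credentials are placeholders you’d replace with your chosen provider’s settings:

```php
<?php

use PHPMailer\PHPMailer\PHPMailer;

require 'vendor/autoload.php';

$mail = new PHPMailer(true);

// SMTP settings come straight from your provider's dashboard.
// All values below are placeholders.
$mail->isSMTP();
$mail->Host = 'smtp.example.com';
$mail->Port = 587;
$mail->SMTPSecure = 'tls';
$mail->SMTPAuth = true;
$mail->Username = 'your-smtp-username';
$mail->Password = 'your-smtp-password';

// A typical transactional message.
$mail->setFrom('noreply@example.com', 'Example Site');
$mail->addAddress('user@example.com');
$mail->Subject = 'Password reset';
$mail->Body = 'Follow this link to reset your password: [link]';

$mail->send();
```

The point is that switching providers in this setup is just a matter of swapping out the Host, Username, and Password values; the sending code itself doesn’t change.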

Cal Evans did an unscientific Twitter survey to see what options people were migrating to:

If you are moving off of @mandrillapp, what are you moving to?

— Cal Evans (@CalEvans) March 29, 2016

SparkPost

MailChimp’s announcement notes that SparkPost has agreed to take on existing Mandrill users and honor Mandrill’s pricing for them. Fortunately, SparkPost has PHP users covered: there is an official PHP API library. There is also a Drupal module, but unfortunately it seems to be 7.x only at this writing and is only a sandbox project—you’ll have to install it via git. Drupal 8 users should be able to use the official API library with Composer. WordPress developers are in more luck: there is an official WordPress plugin. SparkPost provides a guide for Magento devs using the SMTP Pro extension. SparkPost also has one of the most generous plans we’ve seen, with 100,000 free emails per month, though you cannot exceed that limit without upgrading ahead of time.

SendGrid

A long-time option for PHP users has been SendGrid. (Full disclosure: SendGrid has sponsored our php[tek] conference in the past, but is not a current sponsor.) They have an official PHP API, installable via Composer. While there is a 7.x-only Drupal module, SendGrid recommends in its documentation that Drupal users use the SMTP Authentication Support or Swift Mailer modules. Both officially-recommended modules support Drupal 8, at least in their development releases. Magento is also supported through the SMTP Pro extension. WordPress devs can install the official plugin. SendGrid doesn’t list a free tier on their pricing page; their “Essentials” plans start at $9.95 for 40,000 emails per month.

SendinBlue

Many devs I know have spoken highly of SendinBlue. They offer a WordPress plugin, a (7.x-only) Drupal module, and a Magento extension. They also have an official PHP library. Their free tier is limited to 9,000 emails per month with no daily limits; however, the messages will include SendinBlue branding.

Amazon SES

Amazon’s transactional email service is affordable but not as easy for newcomers to install and configure. They have an official PHP library through the AWS PHP SDK. There is a third-party Drupal module for 7.x users. Similarly, there’s an independent WordPress plugin. There is a US$99 paid extension for Magento.

Mailjet

Mailjet offers a PHP API wrapper, a WordPress plugin, a 7.x-only Drupal module, a Joomla! extension, and a Magento plugin. The free tier is capped at 6,000 emails per month and 200 emails per day. The first 30 days include a premium trial which allows users to explore segmentation and testing, and to compare campaign performance.

Mailgun

Mailgun has a PHP SDK installable via Composer. There is also a WordPress plugin, a 7.x-only Drupal module, and a Magento extension. The first 10,000 emails each month are free, after which you pay a tiered price based on monthly volume.

Postmark

Postmark offers a PHP API library, installable via Composer and available on Packagist. There is also an official WordPress plugin. There is a community-supported Drupal module (you guessed it, 7.x only) and Magento extension. There are also many other community modules for PHP frameworks. If you sign up to try it, the first 25,000 emails are free. After that, you can buy credits to send emails starting at $1.50 per thousand emails.

Conclusion

Which of these services you use depends on your needs, price sensitivity, and how much specific support you want for your platform. If I’ve missed any services with good PHP support, please let us know in the comments!

Image Credit: RaHuL Rodriguez on Flickr

Apr 08 2016

Those of you who still have a Drupal 6 site are by now aware that you need to do something with it since this version is no longer supported.  Your options in short are:

  • Upgrade to Drupal 7
  • Upgrade to Drupal 8
  • Choose one of several options to limit your vulnerability (e.g. convert the site into a static HTML website, or close logins to all but a handful of trusted people and harden the security of the login form)

But that’s a big decision.  What do you do until you’ve decided which path to choose?  Now that Drupal 6 is past its sunset date, is your site suddenly vulnerable to having its data stolen and being turned into a spam factory?

The short answer

As long as you have someone keeping an eye on the security of your site, you’re just fine.  Take some time to make your decision — just don’t wait too long.

Interested in our Drupal security services?  Contact us to find out more.

The long answer

The long answer is a bit more nuanced.  When the Drupal 6 end-of-life was approaching, the Drupal Security Team asked for vendors to apply to become recognized as official Drupal 6 Long Term Service Vendors.  These LTS vendors have clients running Drupal 6 websites.  As security vulnerabilities are found and fixed in Drupal 7 (and Drupal 7 modules) the vendors are committing to make those same fixes to the Drupal 6 versions, but only for the modules that their clients are using.  

That exception has significance for other Drupal 6 sites and it all boils down to the question of:

How much security is enough?  

Low risk websites

Many (most?) websites only need to be worried about automated security attacks: Villains and mischief makers will try to attack every website on the Internet using every known vulnerability.  A tiny fraction  of the time they’ll be successful and turn a website into a spam factory, or virus spreader.  If they do their work well the site owner won’t even notice.  There’s a very low success rate, but there’s a billion websites out there.  You do the math.

High risk websites

Other websites have to worry about someone actively trying to hack them. There are usually three possible reasons for this:

  • Your website has information worth stealing — Maybe your site has an e-commerce component, or a database of hundreds of thousands of membership records (with full names and e-mail addresses).
  • Your website has a lot of visitors — This is really just a subset of the first point.  If someone could infect all those visitors with a virus they could make a lot of money.
  • Someone wants to shut your website down — Maybe your organization has a political bent that some people strongly disagree with.

So what does this mean for my Drupal 6 site?

If your site is in the low risk category, then nefarious individuals will be using the vulnerabilities fixed by the LTS vendors in their automated attacks.  As long as your site continues to be updated with these fixes you are probably fine.  

There is still some risk if:

  • your site runs a module that the LTS vendors do not support,
  • and a vulnerability is found in the Drupal 7 version of that module,
  • and that vulnerability exists identically on the Drupal 6 version.

That’s possibly enough “ifs” to keep the risk at an acceptable level.

Also be aware that this support won’t last forever.  As more sites get off of Drupal 6, the LTS Vendors will have fewer clients paying for those services, and the number of supported modules will diminish.  Eventually your Drupal 6 site could be the last one standing with no one looking out for it.  How long this support is “good enough” is impossible to say.

If your site is in the high risk category, then you need to take a more active role in preventing successful attacks.  You could:

  • Move the information worth stealing somewhere else.
  • Move your site off of Drupal 6 faster.
  • Become a client of a Drupal 6 LTS Vendor to ensure that all of your modules are supported (not just the ones that other LTS Vendor clients happen to be using).

If you need help figuring out what you need, just contact us.

Photo by Billie Grace Ward

Apr 05 2016

Continuous Integration is often sold as overall process improvement through continued learning. It serves to end error-prone manual processes and achieve the long-standing DevOps goal of consistency and automation.

“Don’t solve the same problem more than once.”

The practice of Continuous Integration has already yielded technical breakthroughs, and can be summed up as “don’t solve the same problem more than once”. Tools like Jenkins provide a common platform for development teams to automate their deployments, run code quality checks, facilitate security scans, or analyze the performance of systems. Free and open source tools are shared through platforms like GitHub to build community, create robust tooling, and evolve the entire DevOps movement. Innovations in collaborative problem solving make Continuous Integration a reality for any development team.

But, this innovation is often only accessible by technical audiences to solve technical problems. There still is a significant missing piece to this practice. So, what’s next?

It’s time to evolve Continuous Integration beyond technical problem solving into a practice based on serving those we build the tools for. Continuous Integration is ready to undergo the transformation needed for broader adoption, that it may reach its full potential.

Accessible Continuous Integration

“Accessible” means usable by people with the widest possible range of abilities, operating within the widest possible range of situations.

What exactly do we mean when we talk about “Accessible Continuous Integration?” It is the adoption of any tool, innovation, or practice that eliminates silos between teams and knocks down barriers to entry.

We believe (Accessible) Continuous Integration can be for everyone — and we’d like to see a mind shift in how this practice is promoted and pursued. Bridges must not only be built between the systems we use, but also between technical and non-technical team members. We need to study this problem space, standardize conventions, and help communities recognize the influence of Accessible Continuous Integration. Technology must mature to a point where it is capable of serving a broad population in order to become truly powerful and effective.

There is evidence that these ideas are catching on in the wild. Conceptually, Accessible Continuous Integration seems to be exemplified by the following tenets:

  1. Simple – There is an emphasis on making tools and practices streamlined and unencumbered.
  2. Useful – Problems identified are solved comprehensively and inclusively with consideration for the needs of all parties.
  3. Flexible – Tools have robust sets of features, integrate easily with other adopted tools, and are configurable for many use cases now and in the future.
  4. Transparent – Concise communication is pursued to promote broad visibility into the ongoing operations of the continuous integration practice.

Some of today’s most innovative tools are models of Accessible Continuous Integration case studies. They demonstrate some or all of the aforementioned tenets.

  1. Slack – Messaging platform used to reduce barriers between technical and non-technical collaborators with a wide variety of plugins and extendable behaviors for integrating other systems and processes (GitHub, Jenkins, JIRA, etc.)
  2. GitHub – Development tool used to foster technical collaboration between developers with a set of available plugins for integrating continuous integration tools
  3. Pantheon – Cloud-based hosting framework with an on-demand container-based environment that lowers the barrier of entry for non-technical staff to test development work in a fluid manner, in addition to APIs that allow developers to extend hosting operations
  4. Trello – An easy-to-use and customizable backlog management tool that has integrations for Slack, GitHub, and many other systems

This conversation about Accessible Continuous Integration is just getting started. These tools and practices, among others, have the potential for tremendous growth in this space. At CivicActions, we’re committed to exploring Accessible Continuous Integration to its furthest possibilities. It resonates with who we are and our vision of giving back to our open source roots, strengthening our non-technical stakeholders, and delivering quality solutions.

The idea was recently presented at the 2016 Stanford Drupal Camp. The recording can be found here.

We want to jump-start this discussion and drive toward elegant and lasting solutions. Our clients deserve the best we can offer. Accessible Continuous Integration is a significant step toward our vision of digital empowerment for all.

Mar 07 2016

In learning about custom Drupal 8 module development, I found plenty of very simple field module examples, but none that covered how to store more than one value in a field and still have it work properly, so it's time to fix that.

To save you typing or copying and pasting things around, all the code in this post is available on GitHub at https://github.com/ixis/dicefield

Concepts

There are three main elements to define when creating a field type:

  • The field base is the definition of the field itself and contains things like what properties it should have.
  • The field widget defines the form field that is used to put data into your field, what its rules are and how those data are manipulated and stored in the field.
  • The field formatter is how the field will be displayed to the end user and what options are configurable to customise that display.

So far, so familiar if you've ever worked with Drupal 7 fields, and this is like so much of Drupal 8: on the surface, to the end user, it's very similar, but behind the scenes, it's a whole new world.

Use case

To create a (probably quite limited-use, in all honesty) real-world example, I decided to take on the challenge of creating a field to represent dice notation. For example, if you see 1d6 you would grab a single six-sided die and roll it. If you see 3d6-2, you would roll 3 six-sided dice and subtract 2 from the result.

There are three components here:

  • The number of dice
  • The number of sides on each die
  • The modifier: the part that is added or subtracted at the end

Although in practice you could store the whole thing as one big string, and it would be a walk in the park to set up, you would lose some of the more useful functionality, such as search indexing and sorting at a database level. Suppose you wanted to create a view that filtered only field values that involved rolling 5 dice. With a multi-value field such as the one we're creating, it's simple to do. If you store everything as one big string, it involves pattern matching or loading all the results and sifting through them.
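To make that pattern-matching cost concrete, here is a minimal sketch of what parsing the single-string storage would involve. Note that parseDiceNotation() is a hypothetical helper for illustration only, not part of the module:

```php
/**
 * Parses dice notation such as "3d6-2" into its component values.
 *
 * Hypothetical helper, for illustration only: the module avoids ever
 * needing this by storing number, sides and modifier as separate columns.
 */
function parseDiceNotation($notation) {
  if (!preg_match('/^(\d+)d(\d+)([+-]\d+)?$/', $notation, $matches)) {
    return NULL;
  }
  return array(
    'number' => (int) $matches[1],
    'sides' => (int) $matches[2],
    'modifier' => isset($matches[3]) ? (int) $matches[3] : 0,
  );
}
```

Every filter or sort against the stored string would have to run something like this over each row, whereas separate columns let the database do that work directly.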

Info file

Info files are now YAML format, and this is covered in detail elsewhere, but here's what I came up with:

dicefield.info.yml

name: Dice field
type: module
description: A way of specifying a dice value such as 1d6 or 2d8+3.
package: Field types
version: 1.0
core: 8.x
 
dependencies:
  - field

This is nice and straightforward, and, obviously, our module must depend on the core field module, or it cannot work at all.

Note that Drupal 8 no longer requires anything more than an info file to enable a module; previous versions required an empty .module file at least.

Field base

Now things get interesting. We're going to create a new plugin class to define our field type. The system generally works by extending one of the existing types and making the necessary changes to it. This is the biggest piece of advice I can give beyond reading articles like these: do as little work as possible! Copy/paste from existing things in core or contributed modules and change them to suit (although obviously give credit where it's due).

It's worth noting that, unlike in Drupal 7, our dicefield.info.yml file does not contain a list of included files (the files[] entries from Drupal 7 .info files). Classes are loaded automatically by the PSR-4 autoloader, which is both more efficient and more convenient than the previous method. It does mean, however, that you must lay out your folder structure carefully and make sure things are named properly, because these things do matter in Drupal 8.

src/Plugin/Field/FieldType/Dice.php

/**
 * @file
 * Contains \Drupal\dicefield\Plugin\Field\FieldType\Dice.
 */
 
namespace Drupal\dicefield\Plugin\Field\FieldType;
 
use Drupal\Core\Field\FieldItemBase;
use Drupal\Core\Field\FieldStorageDefinitionInterface;
use Drupal\Core\TypedData\DataDefinition;
 
/**
 * Plugin implementation of the 'dice' field type.
 *
 * @FieldType (
 *   id = "dice",
 *   label = @Translation("Dice"),
 *   description = @Translation("Stores a dice roll such as 1d6 or 2d8+3."),
 *   default_widget = "dice",
 *   default_formatter = "dice"
 * )
 */
class Dice extends FieldItemBase {
  /**
   * {@inheritdoc}
   */
  public static function schema(FieldStorageDefinitionInterface $field_definition) {
    return array(
      'columns' => array(
        'number' => array(
          'type' => 'int',
          'unsigned' => TRUE,
          'not null' => FALSE,
        ),
        'sides' => array(
          'type' => 'int',
          'unsigned' => TRUE,
          'not null' => TRUE,
        ),
        'modifier' => array(
          'type' => 'int',
          'not null' => TRUE,
          'default' => 0,
        ),
      ),
    );
  }
 
  /**
   * {@inheritdoc}
  */
  public function isEmpty() {
    $value1 = $this->get('number')->getValue();
    $value2 = $this->get('sides')->getValue();
    $value3 = $this->get('modifier')->getValue();
    return empty($value1) && empty($value2) && empty($value3);
  }
 
  /**
   * {@inheritdoc}
   */
  public static function propertyDefinitions(FieldStorageDefinitionInterface $field_definition) {
    // Add our properties.
    $properties['number'] = DataDefinition::create('integer')
      ->setLabel(t('Number'))
      ->setDescription(t('The number of dice'));
 
    $properties['sides'] = DataDefinition::create('integer')
      ->setLabel(t('Sides'))
      ->setDescription(t('The number of sides on each die'));
 
    $properties['modifier'] = DataDefinition::create('integer')
      ->setLabel(t('Modifier'))
      ->setDescription(t('The modifier to be applied after the roll'));
 
    $properties['average'] = DataDefinition::create('float')
      ->setLabel(t('Average'))
      ->setDescription(t('The average roll produced by this dice setup'))
      ->setComputed(TRUE)
      ->setClass('\Drupal\dicefield\AverageRoll');
 
    return $properties;
  }
}

This looks quite complicated and also quite alien from most things in Drupal 7 unless you're used to working with ctools plugins or the migrate system. Let's break it down a little bit.

The namespace

We have defined our namespace at very nearly the top of the file:

namespace Drupal\dicefield\Plugin\Field\FieldType;

The standard way is to use the Drupal namespace, followed by the name of your module (exactly the same as the name of the folder your module lives in), then the other bits. It's this namespace that will tell the PSR-4 autoloader where to find the classes it needs, so make sure it's correct!
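For example, assuming the module is installed at modules/custom/dicefield (the exact location is up to you), the mapping from namespace to file path looks like this:

```
Namespace: Drupal\dicefield\Plugin\Field\FieldType\Dice
File:      modules/custom/dicefield/src/Plugin/Field/FieldType/Dice.php
```

The Drupal\dicefield\ prefix maps to the module's src/ directory, and the remainder of the namespace mirrors the folder structure exactly, which is why the layout matters.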

Annotation-based plugin definition

There are multiple ways of defining the plugin's core data. The standard Drupal 8 way is to use annotations, which are like code comment blocks, but contain actual code rather than a comment. Other ways include YAML files, for example, but we're going to keep things simple here.

Note that one downside to using annotations to define plugin data is that since they are effectively comments, not all IDEs can interpret them in the same way as code, so you lose the syntax highlighting and code suggestions associated with writing PHP code in a modern IDE (we use PhpStorm internally). While this might look bad, it's actually not a huge deal because:

  • The plugin definition is a tiny part of your overall code base.
  • The code is right there in front of you, instead of in a separate file.
  • There are still other options if you really don't like it.

Here's the code in question:

/**
 * Plugin implementation of the 'dice' field type.
 *
 * @FieldType (
 *   id = "dice",
 *   label = @Translation("Dice"),
 *   description = @Translation("Stores a dice roll such as 1d6 or 2d8+3."),
 *   default_widget = "dice",
 *   default_formatter = "dice"
 * )
 */

Everything starting from the @FieldType is the plugin definition and everything above is just a regular comment, so you can still write a useful description if you like (and in fact, you should).

The @FieldType part tells Drupal 8 that it is a new field type. There are other annotations that can define various things in Drupal, and we'll see a few others later in the article.

There are a number of key/value pairs in the definition, and these work as follows:

  • id is used to give this plugin a machine name. This only needs to be unique for the type of thing being defined here, so you could have a FieldType called "dice" and also a FieldFormatter called "dice" without worrying about the implications of a namespace collision.
  • label uses the @Translation() notation, which is just like using Drupal's t() function, and provides a human-readable name to be used in the admin UI and other places.
  • description also uses @Translation and just lets users know what your field is for.
  • default_widget is the machine name of the widget that will be used, by default, when this field is put in place on an entity. If there are multiple widgets available, users will be able to pick, but this will be the default. Note that this refers to the machine name of the widget, not the class name. Drupal makes this distinction a lot, so you will become used to working with two different types of notation: Drupal internal machine names, and class names. The class name is not needed here. As long as we define a @FieldWidget plugin later, with an id of "dice", we will be good to go.
  • default_formatter works the same way as default_widget, but is used for the formatter (what the user sees on the front end, rather than the way data are put into your field). Note how these both have the same name. Because they're different plugin types (one is a FieldWidget and the other is a FieldFormatter), they can have the same name and Drupal 8 won't get confused.

There are also a number of other keys that you can use here, but these are best detailed by the Drupal documentation on Entity annotation, and we've covered the ones we need.

Extending classes

Nearly every class you write in Drupal 8 will extend another class, or implement an interface, or apply a trait, or perhaps any combination of those. For example:

class Dice extends FieldItemBase {

We are extending from the base field class here and this will give us all of the functionality we need to implement a new field type. All we have to do is override the methods that we want to work in a different way.

The schema

The schema is simply the definition for how the data will be stored (in the database, or whatever storage engine you're using). We need to return an array (apparently we're still stuck in "array inception" mode for some parts of Drupal 8 but thankfully this is now a lot less common) of arrays, that contain arrays that define the columns we want to store. Yeah, that.

/**
 * {@inheritdoc}
 */
public static function schema(FieldStorageDefinitionInterface $field_definition) {
  return array(
    'columns' => array(
      'number' => array(
        'type' => 'int',
        'unsigned' => TRUE,
        'not null' => FALSE,
      ),
      'sides' => array(
        'type' => 'int',
        'unsigned' => TRUE,
        'not null' => TRUE,
      ),
      'modifier' => array(
        'type' => 'int',
        'not null' => TRUE,
        'default' => 0,
      ),
    ),
  );
}

We are basically only defining the columns key in the outer array. In practice it's probably a good idea to define indexes and possibly also foreign keys if your fields will be linked to other data, but let's keep things simple for now.

The definitions in columns work the same way as the Drupal 8 Schema API, which you can use for reference if you need to.

Notice we're defining three integer fields: one for the number of dice, one for the number of sides on each die, and one for the modifier.

The number of dice and the sides are mandatory, so they do not have a default value. However, you can safely assume that unless otherwise stated, the modifier is optional, and should default to zero, which is why this one has a default value.

Note also that the first two are unsigned, because you can't have a die with -6 sides. The modifier is not unsigned, because both +3 and -3 are valid for modifiers.

The final thing worth mentioning here is that we're only defining fields that are actually stored as data. Later on, we'll see how to derive a computed field, but since the field is a calculated value (which is then cached in the render cache, so stop sweating about performance already!) it is not stored in the database and shouldn't be defined here.
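As mentioned above, in practice you would probably also declare indexes so that database-level filtering and sorting stays fast. A sketch of what that could look like (the module as written defines columns only; the 'indexes' key follows the standard Schema API structure):

```php
// Sketch only: a schema() return value with an added 'indexes' key,
// alongside the same columns the module defines. Index names and the
// choice of indexed columns here are illustrative assumptions.
function diceSchemaSketch() {
  return array(
    'columns' => array(
      'number' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => FALSE),
      'sides' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE),
      'modifier' => array('type' => 'int', 'not null' => TRUE, 'default' => 0),
    ),
    'indexes' => array(
      // Index name => columns it covers.
      'number' => array('number'),
      'sides' => array('sides'),
    ),
  );
}
```

With an index on sides, the earlier example of a view filtering on "5 dice" or a given die size becomes a cheap indexed lookup rather than a table scan.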

isEmpty

It's very important to tell Drupal how to know if your field is empty or not. Without this, certain basic field functionality will not work properly. In our case, it's quite straightforward: the field is only really "empty" if none of the three values contain anything.

/**
 * {@inheritdoc}
 */
public function isEmpty() {
  $value1 = $this->get('number')->getValue();
  $value2 = $this->get('sides')->getValue();
  $value3 = $this->get('modifier')->getValue();
  return empty($value1) && empty($value2) && empty($value3);
}

Notice how we're using the internal method $this->get() to grab the value? The properties attached to the field will be called the same thing as those in propertyDefinitions() (see below). It makes sense for them to also match the properties we have defined in schema() above, but this does not necessarily have to be the case. Just have a good reason for doing otherwise!

propertyDefinitions

Next, we define the properties that this field will have. These will be the individual pieces of data we can retrieve from the field, and will affect things like view sorting order and how we will set up our formatter later.

/**
 * {@inheritdoc}
 */
public static function propertyDefinitions(FieldStorageDefinitionInterface $field_definition) {
  // Add our properties.
  $properties['number'] = DataDefinition::create('integer')
    ->setLabel(t('Number'))
    ->setDescription(t('The number of dice'));
 
  $properties['sides'] = DataDefinition::create('integer')
    ->setLabel(t('Sides'))
    ->setDescription(t('The number of sides on each die'));
 
  $properties['modifier'] = DataDefinition::create('integer')
    ->setLabel(t('Modifier'))
    ->setDescription(t('The modifier to be applied after the roll'));
 
  $properties['average'] = DataDefinition::create('float')
    ->setLabel(t('Average'))
    ->setDescription(t('The average roll produced by this dice setup'))
    ->setComputed(TRUE)
    ->setClass('\Drupal\dicefield\AverageRoll');
 
  return $properties;
}

Note that we have the same three properties that we defined as being stored in schema() above, plus a fourth one, called average. This is a computed field, which means that instead of storing the value in the database, we derive it from the values of the other fields. It is more useful to do it this way, because the average value is just the sum of the minimum and maximum possible roll, halved, then added to the modifier. If we were to store this in the database we would be wasting database space. You might think it inefficient to compute this value, but in fact, it's cached by Drupal's render cache system, and invalidated only when the field is updated, so except for the first time it's computed, it's not generally a performance hindrance.

Each of our four properties are basic types as defined by Drupal's typed data API. We have three integers and a float, but we could also use string or other types if we wanted to. We could even come up with our own types, but that's not necessary for this field so I won't cover it here.

We just return an array of properties by using DataDefinition::create() and chaining the methods we want in order to create each property. As a minimum, you should use setLabel() and setDescription(), but there are plenty of others available. The fourth property, average, has two extra methods.

setComputed() is used to indicate that this field is computed rather than stored in the database, so there won't be a matching column in schema().

Given that it's computed, Drupal needs to know what class to use to do this computation, and this is where setClass() comes in. See below for more about computing field values in their own classes.

Widget

Now that we've set up our field base, we need to set up a widget so that people editing a node (or other entity) where this field is used are able to input or edit the data.

src/Plugin/Field/FieldWidget/DiceWidget.php

/**
 * @file
 * Contains \Drupal\dicefield\Plugin\Field\FieldWidget\DiceWidget.
 */
 
namespace Drupal\dicefield\Plugin\Field\FieldWidget;
 
use Drupal\Core\Field\FieldItemListInterface;
use Drupal\Core\Field\WidgetBase;
use Drupal\Core\Form\FormStateInterface;
 
/**
 * Plugin implementation of the 'dice' widget.
 *
 * @FieldWidget (
 *   id = "dice",
 *   label = @Translation("Dice widget"),
 *   field_types = {
 *     "dice"
 *   }
 * )
 */
class DiceWidget extends WidgetBase {
  /**
   * {@inheritdoc}
   */
  public function formElement(
    FieldItemListInterface $items,
    $delta,
    array $element,
    array &$form,
    FormStateInterface $form_state
  ) {
    $element['number'] = array(
      '#type' => 'number',
      '#title' => t('# of dice'),
      '#default_value' => isset($items[$delta]->number) ? $items[$delta]->number : 1,
      '#size' => 3,
    );
    $element['sides'] = array(
      '#type' => 'number',
      '#title' => t('Sides'),
      '#field_prefix' => 'd',
      '#default_value' => isset($items[$delta]->sides) ? $items[$delta]->sides : 6,
      '#size' => 3,
    );
    $element['modifier'] = array(
      '#type' => 'number',
      '#title' => t('Modifier'),
      '#default_value' => isset($items[$delta]->modifier) ? $items[$delta]->modifier : 0,
      '#size' => 3,
    );
 
    // If cardinality is 1, ensure a label is output for the field by wrapping
    // it in a fieldset element.
    if ($this->fieldDefinition->getFieldStorageDefinition()->getCardinality() == 1) {
      $element += array(
        '#type' => 'fieldset',
        '#attributes' => array('class' => array('container-inline')),
      );
    }
 
    return $element;
  }
}

Class

Once again, we are inheriting from the base WidgetBase class because almost all of the work involved in being a widget is done for us; all we need to do is tell Drupal what's different from the base.

class DiceWidget extends WidgetBase {

Annotation

Again, we see that the annotation at the top of the class defines basic data on this widget:

/**
 * Plugin implementation of the 'dice' widget.
 *
 * @FieldWidget (
 *   id = "dice",
 *   label = @Translation("Dice widget"),
 *   field_types = {
 *     "dice"
 *   }
 * )
 */

This time, @FieldWidget tells Drupal it's dealing with a widget, and the id and label properties work the same way as for the base field above.

We have an array this time, in the form of field_types, which tells Drupal which types of field are allowed to use this widget. Note that unlike regular PHP arrays in Drupal, you must not put a comma after the last element in these arrays.

This field_types allows us to create new widgets, even for existing field types, in case we want a better or different way of inputting data. For example, a geo-location field that stores map coordinates might have a text widget for inputting the data manually, and a separate map widget that allows the user to click on a map to choose a point.

formElement

In actual fact, we only need to override one method in this class:

  /**
   * {@inheritdoc}
   */
  public function formElement(
    FieldItemListInterface $items,
    $delta,
    array $element,
    array &$form,
    FormStateInterface $form_state
  ) {
    $element['number'] = array(
      '#type' => 'number',
      '#title' => t('# of dice'),
      '#default_value' => isset($items[$delta]->number) ? $items[$delta]->number : 1,
      '#size' => 3,
    );
    $element['sides'] = array(
      '#type' => 'number',
      '#title' => t('Sides'),
      '#field_prefix' => 'd',
      '#default_value' => isset($items[$delta]->sides) ? $items[$delta]->sides : 6,
      '#size' => 3,
    );
    $element['modifier'] = array(
      '#type' => 'number',
      '#title' => t('Modifier'),
      '#default_value' => isset($items[$delta]->modifier) ? $items[$delta]->modifier : 0,
      '#size' => 3,
    );
 
    // If cardinality is 1, ensure a label is output for the field by wrapping
    // it in a fieldset element.
    if ($this->fieldDefinition->getFieldStorageDefinition()->getCardinality() == 1) {
      $element += array(
        '#type' => 'fieldset',
        '#attributes' => array('class' => array('container-inline')),
      );
    }
 
    return $element;
  }

This method tells Drupal how to render the form for this field. Because we need to know three things (the number of dice, the sides per die, and the modifier), we will provide three fields for this. Note how the form keys for these fields match what we defined in schema() and propertyDefinitions() above.

The #attributes on the fieldset causes the fields to be displayed inline instead of one line after another.

These fields use the number field type, which is basically a text field with little up and down arrows that can be used to increase or decrease the value. It also provides some basic validation, requiring a numerical value rather than an arbitrary string, so there's no need to write that validation ourselves.

The #default_value key shows how to extract the value from the current field. In the case where we're editing a field, we want the existing values to be in the form ready to be changed, and $items[$delta]->PROPERTY_NAME will do that for us.

I have also set up a default value in the case where we're creating a completely new node, as I felt it was nice to be able to show an example of the required input. Also, since the modifier is often zero, it makes sense to set this as a default value. I could have also used #placeholder to put an HTML5 placeholder value in the field instead of real input.

The last part of this method simply adds a fieldset so that if the field cardinality is 1 (only 1 "dice roll" field value can be put in, instead of allowing unlimited, or a higher number of entries), then the label for the field will still show up properly. This can be used as-is for most field widgets.

Formatter

The last required step (and it might not even be required if you can re-purpose a core formatter from Drupal itself) is to set up a new formatter. This will output the information in the field onto the screen so that users can see it.

src/Plugin/Field/FieldFormatter/DiceFormatter.php

/**
 * @file
 * Contains \Drupal\dicefield\Plugin\Field\FieldFormatter\DiceFormatter.
 */
 
namespace Drupal\dicefield\Plugin\Field\FieldFormatter;
 
use Drupal\Core\Field\FieldItemListInterface;
use Drupal\Core\Field\FormatterBase;
 
/**
 * Plugin implementation of the 'dice' formatter.
 *
 * @FieldFormatter (
 *   id = "dice",
 *   label = @Translation("Dice"),
 *   field_types = {
 *     "dice"
 *   }
 * )
 */
class DiceFormatter extends FormatterBase {
  /**
   * {@inheritdoc}
   */
  public function viewElements(FieldItemListInterface $items, $langcode = NULL) {
    $elements = array();
 
    foreach ($items as $delta => $item) {
      if ($item->sides == 1) {
        // A one-sided die always rolls 1, so just write the guaranteed
        // total (e.g. "3" instead of "3d1", which looks silly).
        $markup = $item->number * $item->sides;
      }
      else {
        $markup = $item->number . 'd' . $item->sides;
      }
 
      // Add the modifier if necessary.
      if (!empty($item->modifier)) {
        // Negative values already stringify with a leading "-", so only
        // positive values need an explicit "+" prepended.
        $sign = $item->modifier > 0 ? '+' : '';
        $markup .= $sign . $item->modifier;
      }
 
      $elements[$delta] = array(
        '#type' => 'markup',
        '#markup' => $markup,
      );
    }
 
    return $elements;
  }
}

Annotation

The annotation defines the basic formatter data, and works just like the others above.

  /**
   * Plugin implementation of the 'dice' formatter.
   *
   * @FieldFormatter (
   *   id = "dice",
   *   label = @Translation("Dice"),
   *   field_types = {
   *     "dice"
   *   }
   * )
   */

In fact, this is almost identical to the one for DiceWidget that we defined above, except we're now using the @FieldFormatter type.

Class

As we've seen above, it's easiest to just extend the base class, FormatterBase, since this does all the heavy lifting already and we can pick and choose what to override.

class DiceFormatter extends FormatterBase {

viewElements

We are overriding the method that actually produces the markup that will be displayed on the page:

/**
 * {@inheritdoc}
 */
public function viewElements(FieldItemListInterface $items, $langcode = NULL) {
  $elements = array();
 
  foreach ($items as $delta => $item) {
    if ($item->sides == 1) {
      // A one-sided die always rolls 1, so just write the guaranteed
      // total (e.g. "3" instead of "3d1", which looks silly).
      $markup = $item->number * $item->sides;
    }
    else {
      $markup = $item->number . 'd' . $item->sides;
    }
 
    // Add the modifier if necessary.
    if (!empty($item->modifier)) {
      // Negative values already stringify with a leading "-", so only
      // positive values need an explicit "+" prepended.
      $sign = $item->modifier > 0 ? '+' : '';
      $markup .= $sign . $item->modifier;
    }
 
    $elements[$delta] = array(
      '#type' => 'markup',
      '#markup' => $markup,
    );
  }
 
  return $elements;
}

The most important thing here is that we loop through the $items because each field could have a cardinality of greater than one, meaning multiple dice rolls can be stored in a single field.

There is a fringe case where technically it's possible (if not in the physical world) to have a one-sided die, which will always roll a 1 no matter what, so instead of writing something like 3d1, we tell the formatter to present just the guaranteed total (3) for clarity.

Next we add the modifier, but only if it's non-zero, because 1d6 looks cleaner than 1d6+0. The modifier could be positive or negative, and negative numbers, when converted to strings, already have a negative sign at the front, while positive ones don't, so we prepend the + ourselves.
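Pulled out of the formatter for clarity, the string-building logic amounts to the following (formatDiceNotation() is a hypothetical standalone helper, not part of the module):

```php
// Standalone sketch of the formatter's output logic, including the
// modifier sign handling described above (hypothetical helper).
function formatDiceNotation($number, $sides, $modifier) {
  // A one-sided die always rolls 1, so show the guaranteed total.
  $markup = ($sides == 1) ? (string) ($number * $sides) : $number . 'd' . $sides;
  if (!empty($modifier)) {
    // Negative values already stringify with "-"; positive ones need "+".
    $sign = $modifier > 0 ? '+' : '';
    $markup .= $sign . $modifier;
  }
  return $markup;
}
```

So 2d8 with a modifier of 3 renders as 2d8+3, and 3d6 with a modifier of -2 renders as 3d6-2.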

The last part is quite important, and that is presenting the return value as a series of render arrays, rather than just plain text. Everything should be presented as a render array where possible, because this allows Drupal to delay its rendering until the last possible moment, affording other modules the opportunity to override where necessary.

AverageRoll type

Earlier, we defined a computed field when we set up propertyDefinitions(). This means that we need to tell Drupal how to compute the value of this field, and we will create a separate class for this.

src/AverageRoll.php

/**
 * @file
 * Contains \Drupal\dicefield\AverageRoll.
 */
 
namespace Drupal\dicefield;
 
use Drupal\Core\TypedData\TypedData;
 
/**
 * A computed property for an average dice roll.
 */
class AverageRoll extends TypedData {
 
  /**
   * Cached processed value.
   *
   * @var float|null
   */
  protected $processed = NULL;
 
  /**
   * Implements \Drupal\Core\TypedData\TypedDataInterface::getValue().
   */
  public function getValue($langcode = NULL) {
    if ($this->processed !== NULL) {
      return $this->processed;
    }
 
    $item = $this->getParent();
 
    // The minimum roll is the same as the number of dice, which will occur if
    // all dice come up as a 1. Then apply the modifier.
    $minimum = $item->number + $item->modifier;
 
    // The maximum roll is the number of sides on each die times the number of
    // dice. Then apply the modifier.
    $maximum = ($item->number * $item->sides) + $item->modifier;
 
    // Add together the minimum and maximum and divide by two. This often
    // produces a fraction, which is why the property is a float.
    $this->processed = ($minimum + $maximum) / 2;
    return $this->processed;
  }
 
  /**
   * Implements \Drupal\Core\TypedData\TypedDataInterface::setValue().
   */
  public function setValue($value, $notify = TRUE) {
    $this->processed = $value;
 
    // Notify the parent of any changes.
    if ($notify && isset($this->parent)) {
      $this->parent->onChange($this->name);
    }
  }
}

There is no annotation!

Since we're not defining a plugin here, there's no annotation at the top of this class. There's no need.

Class

We are simply extending an existing type, to do the heavy lifting for us, as with previous classes above.

class AverageRoll extends TypedData {

Caching

I mentioned earlier that Drupal's render cache means that we only need to process the value when it changes, rather than every time we see the field, and we can add a mechanism to achieve this. It's via a protected property:

/**
 * Cached processed value.
 *
 * @var float|null
 */
protected $processed = NULL;

Note how even though this is just a property on a class, it is still fully documented like anything else!

getValue

This method will, as the name suggests, get the value of the computed field when Drupal asks for it. Note that this method doesn't need to be called directly; Drupal takes care of this internally, which means that you can just use $item->average instead of having to write $item->average->getValue() or anything complicated like that.

/**
 * Implements \Drupal\Core\TypedData\TypedDataInterface::getValue().
 */
public function getValue($langcode = NULL) {
  if ($this->processed !== NULL) {
    return $this->processed;
  }
 
  $item = $this->getParent();
 
  // The minimum roll is the same as the number of dice, which will occur if
  // all dice come up as a 1. Then apply the modifier.
  $minimum = $item->number + $item->modifier;
 
  // The maximum roll is the number of sides on each die times the number of
  // dice. Then apply the modifier.
  $maximum = ($item->number * $item->sides) + $item->modifier;
 
  // Add together the minimum and maximum and divide by two. This often
  // produces a fraction, which is why the property is a float.
  $this->processed = ($minimum + $maximum) / 2;
  return $this->processed;
}

At the beginning of this method is the test to see if we have already assigned a value to $this->processed. If we have, we don't need to compute the value. We only do that part if the value is null, to save on processing power.

The internals of this method first work out the minimum and maximum possible rolls, applying the modifier to each, and then divide their total by two, which gives the average. For many dice rolls this will be a fraction, which is why we have defined this field as a float rather than an integer.
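As a worked example, take 3d6-2: the minimum is 3 + (-2) = 1, the maximum is (3 × 6) + (-2) = 16, so the average is (1 + 16) / 2 = 8.5. The same arithmetic as a plain function (averageRoll() is a hypothetical helper outside the typed data system, for illustration only):

```php
// Sketch of the AverageRoll calculation as a standalone function
// (hypothetical helper, not part of the module).
function averageRoll($number, $sides, $modifier) {
  // Minimum: every die comes up 1, then apply the modifier.
  $minimum = $number + $modifier;
  // Maximum: every die comes up on its highest face, then apply the modifier.
  $maximum = ($number * $sides) + $modifier;
  // Midpoint of the range; often a fraction, hence the float property type.
  return ($minimum + $maximum) / 2;
}
```

A plain 1d6 averages 3.5, which shows why an integer property would lose information here.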

setValue

Lastly, to make sure cache invalidation works correctly, we need to define a method to take care of setting our value:

/**
 * Implements \Drupal\Core\TypedData\TypedDataInterface::setValue().
 */
public function setValue($value, $notify = TRUE) {
  $this->processed = $value;
 
  // Notify the parent of any changes.
  if ($notify && isset($this->parent)) {
    $this->parent->onChange($this->name);
  }
}

Here, we set $this->processed to the new value, but we also notify the class's parent via the onChange() method. Remember that the vast majority of implementation is handled by the base class instead of making us do the work here, so we ought to be happy to pass the buck to Drupal itself where we can!

AverageRollFormatter

Finally, it's worth bearing in mind that we have a formatter that can display the dice roll itself, and we also have a computed field that can calculate the average roll, but we have no way of actually showing the average roll to the end user. We will create one more formatter class to take care of this. Because it is a separate formatter, the site administrator can choose, when displaying dice fields (in views, say), whether to show the dice notation, the average value, or even both (by adding the field twice with different formatters).

src/Plugin/Field/FieldFormatter/AverageRollFormatter.php

/**
 * @file
 * Contains \Drupal\dicefield\Plugin\Field\FieldFormatter\AverageRollFormatter.
 */
 
namespace Drupal\dicefield\Plugin\Field\FieldFormatter;
 
use Drupal\Core\Field\FieldItemListInterface;
use Drupal\Core\Field\FormatterBase;
 
/**
 * Plugin implementation of the 'average_roll' formatter.
 *
 * @FieldFormatter (
 *   id = "average_roll",
 *   label = @Translation("Average roll"),
 *   field_types = {
 *     "dice"
 *   }
 * )
 */
class AverageRollFormatter extends FormatterBase {
  /**
   * {@inheritdoc}
   */
  public function viewElements(FieldItemListInterface $items, $langcode = NULL) {
    $elements = array();
 
    foreach ($items as $delta => $item) {
      $elements[$delta] = array(
        '#type' => 'markup',
        '#markup' => $item->average,
      );
    }
 
    return $elements;
  }
}

The details here are much the same as our DiceFormatter from above, but the implementation is even simpler. Since the value of the field is computed for us already, all we need to do is print it out (by putting it into a render array) in viewElements().

One thing to bear in mind here is that we could have done the average calculation directly in the formatter. There's certainly scope for it: we have all the field data and the ability to execute whatever custom code we like. However, this would have been the wrong thing to do, because the average could then only be displayed on the front end, and would not be accessible internally to Drupal. By doing it the way we have done it above, we ensure that, for example, Views is able to sort this field by average value, or another developer can plug into this module and grab the average value for his or her own use.

Wrapping up

Now that you have all the elements in place, you can enable the module and try it out!

Once enabled, you will see the new "dice roll" option when picking a field type to add to an entity. When added, you can try creating a new entity of this type, and see the widget in action. Then, the formatter will take care of showing the end result on your entity's view page. Don't forget: you have a choice of two formatters, so you can either show the dice notation or the average value (or even both if you use a custom template or views).

If you have any comments, corrections or observations about this tutorial, please feel free to leave them in the comments below. Hopefully the information about the principles and design decisions will be useful!

The full codebase is available on GitHub.

Jan 05 2016
Jan 05

We’ve been thinking about code reviews lately here at Advomatic.  Fellow Advo-teammate Oliver’s previous post covered the whys of our code review process, and Sarah covered the hows when she did an overview of some tools we use regularly.  This post focuses on the front-end side, and deals more with the whats of front-end code review… as in:

  • what are our main overall goals of the code we write?
  • what are the common standards we follow?
  • and what quick-and-easy changes can we make to improve our front-end code?

Guiding Front-end Coding Principles

In the world of front-end development, it seems like there is an intimidating, constantly expanding/changing set of tools and techniques to consider.  The good news is that there is a pretty dependable set of widely accepted working guidelines that aren’t tool-specific.  We really try to consider those guidelines first when we are reviewing a peer’s code (and the technical approach they have taken):

  • Write valid, semantic HTML5 markup – and as little markup as needed
  • Enforce separation of content, functionality/behavior, and presentation
  • The site should function without the use of JavaScript, which when added, is only used to progressively enhance the site. Visitors without JavaScript should not be crippled from using the site.

Best Practices

Sometimes there are a million ways to solve a single problem, but I think we do a good job of not pushing a particular tool or way of dealing with any of those problems.  Instead, we’ll see if the code is efficient and if it can stand up to questions about its implementation, as far as best practices go.  For example:

  • Is there logic-related PHP in templates that should be moved to template.php?
  • Can JS be loaded conditionally from a custom module or the theme’s template.php via $render['#attached']['js'], so it doesn’t load on pages that don’t need it?
  • Is the project already using a DRY CSS methodology elsewhere?  BEM or SMACSS maybe?  Should it be used here?
  • Are we generally following coding standards?
  • Are we coding with web accessibility guidelines in mind?
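To make the conditional-loading point above concrete, here is a minimal Drupal 7 sketch (the hook implementation, module name, content type, and file path are all made up for illustration):

```php
<?php
/**
 * Implements hook_node_view().
 *
 * Hypothetical example: attach a carousel script only when viewing a full
 * "gallery" node, instead of loading it globally on every page.
 */
function mymodule_node_view($node, $view_mode, $langcode) {
  if ($node->type === 'gallery' && $view_mode === 'full') {
    // The attached JS only loads on pages where this render array is built.
    $node->content['#attached']['js'][] = drupal_get_path('module', 'mymodule') . '/js/carousel.js';
  }
}
```

The same `#attached` pattern works on any render array, including ones built in a theme's template.php preprocess functions.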

Minimal Effort Refactoring or “Grabbing the Low Hanging Fruit”

Refactoring code as an afterthought (or as a reaction to some problem down the road), is never fun – it pays to be proactive.  Ideally, we should always be retooling and improving our code as we go along as we get more information about the scope of a project and if the project budget and/or timeline allows for it.  When we look at a peer’s code changes, what kinds of things can we suggest as a quick and easy “upgrade” to their code?

  • Markup:
    • Instead of that ugly Drupal-default markup we’re working with, can it be swapped out with an appropriate HTML5 replacement element?
    • Is that inline JS or CSS I see in the markup?  Crikey, what’s that doing here?!?
    • Do we really need all these wrapper divs?
    • Is this the bare minimum markup we need?
  • CSS/Sass:
    • Can we use an already-established Sass mixin or extend instead of duplicating styles?
    • Is this element styling worthy of a new mixin or extend that can be reused on other elements, either now or in the future?
    • Should this mixin actually be an extend (or vice versa)?
    • If the code is deeply nested, can we utilize our CSS methodology to make the selector shorter, more general, or more concise?
    • Is this selector too unique and specific?  Can it be generalized?
  • JS:
    • Does this work without JS?
    • Is there a chance this could adversely affect something unrelated?
    • Does it need to be more specific, or be within a Drupal behavior?
    • In Drupal behaviors, do selected portions of the DOM use the “context” variable when they should? For example: $('#menu', context)
    • Is jQuery Once being used to make sure that code is only run once?
    • Should this code only run when we are within a certain viewport size range (a breakpoint)?
    • Does it need to be killed/destroyed when the breakpoint changes?
    • Would this functionality benefit from being fired via feature detection, especially if our target browsers are old-ish?
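Putting a few of those checklist items together, here is a minimal Drupal 7 behavior sketch (the behavior name, selector, and class name are hypothetical) that scopes its DOM work with context and guards against double-binding with jQuery Once:

```javascript
// Sketch of a Drupal 7 behavior (behavior name and selector are made up).
(function ($) {
  Drupal.behaviors.exampleMenuToggle = {
    attach: function (context, settings) {
      // "context" limits the search to newly added markup (e.g. after an
      // AJAX call), and .once() ensures each link is bound only one time.
      $('#menu a', context).once('example-menu-toggle').click(function () {
        $(this).toggleClass('is-active');
      });
    }
  };
})(jQuery);
```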

Quick Browser QA

I know, I know.  Browser QA is not really “code review”.  However, it does go hand in hand, and makes sense to deal with when you’re reviewing someone’s code.  It is at that point that you, the code reviewer, are most familiar with your peer’s work and specific goals.

While we do a more thorough and complete test and review of work in target browsers and devices at the end of an iteration (generally every two weeks), we also do a shorter burst of quick QA during an individual ticket’s code review.  We’ll quickly check it in the latest version of major target browsers/devices – this helps us find bugs and issues that are big, visible and easy to fix.  It also ensures that they don’t pile up on us to deal with during our final iteration… which can be really demoralizing.

Back to Basics

Code reviews are great for brushing up on the basics when other parts of your work can seem very complicated.  None of this is particularly revolutionary – it helps to revisit the basics of why you do what you do from time to time. Aside from reviewing the technical aspect of the work, you can act as an outsider who sees the big picture of how this work affects the user experience more easily than someone in the trenches.  This is an important and valuable role to play as things come together on the front-end of your project.

Dec 18 2015
Dec 18

Code reviews are a regular part of our project process and give us the opportunity to catch bugs and standardize code before work is tested by our project leads or clients. You can read more about our code review philosophy in our last post.

This post aims to give an overview of some of the code review tools we use for PHP code reviews.

Tool Time

Git history

For smaller reviews, using Git history to look at a code change is all you need. We use a post-commit Git hook that posts commit hashes to their related tickets in our project management software, so when you’re assigned a ticket to review, you can easily see the commit IDs and run “git show [the hash]” to see the change. With some other ticket management tools you may even be able to see the code changes right along with the ticket comments.

Git diff showing a code change.

Looks good, Oliver!


CodeSniffer

The PHP CodeSniffer (PHPCS) utility reviews PHP code for adherence to a given code standard. For Drupal projects, we can check code against Drupal’s standards. There are a few ways to run this, but first, you’ll need to install a few things.

How to install PHP CodeSniffer

  1. Download the PHPCS package using Composer.
    • For Drupal 7 projects:
      composer global require squizlabs/PHP_CodeSniffer:\<2
    • For Drupal 8 projects:
      composer global require squizlabs/PHP_CodeSniffer:\>=2
    • Or, you can install PHPCS with Drush.
  2. Download the Drupal Coder module (7.x-2.x branch – this part is important, don’t choose the 1.x branch). Move this to your central Drush directory ($HOME/.drush) – that allows it to be used on all your Drupal projects.
  3. Configure PHPCS to use Drupal standards:
    phpcs --config-set installed_paths $HOME/.drush/coder/coder_sniffer
    phpcs --config-set default_standard Drupal

Run PHP CodeSniffer in phpStorm IDE

If you use an IDE, there’s probably a plugin for running PHPCS. I set it up in phpStorm like this:

  1. Follow the directions above to install CodeSniffer with the Drupal standards.
  2. Set the path to your CodeSniffer installation in phpStorm (Preferences > Languages & Frameworks > PHP > CodeSniffer). Click the Validate button there to make sure it works.
  3. Enable CodeSniffer (Preferences > Editor > Inspections): Select “PHP CodeSniffer validation”, then select Drupal as the standard to use.
PHPCS settings in phpStorm.

Once that’s hooked up, you’ll start to see inline alerts of your rule breaking. You can also run PHPCS against a whole file, directory or project (Code > Run inspection by name > PHPCS). This will give you a list of all the issues PHPCS finds, with a synopsis of the problem and how to fix it.

PHPCS error in phpStorm.

Oooh, busted!

There are a lot more Drupal-specific features in phpStorm that are worth trying out, especially in Drupal 8 – check out the JetBrains site for more information.

Run CodeSniffer on the command line

If you don’t use an IDE or just prefer a CLI, you can run PHPCS with terminal commands. You can do this with Drush, like this: drush drupalcs path/to/your/file

Or, without Drush, like this: phpcs --standard=Drupal path/to/your/file

PHPCS command line output.

The command will return a list of errors and the line numbers where they occur.

Drupal Coder Review module

If you prefer a UI, you can still make use of the Coder module by way of the accompanying Coder Review module.

The Coder Review module provides a user interface.

  1. Download the Coder module to your site’s module directory and enable coder and coder_review.
  2. Browse to admin/config/development/coder/settings.
  3. Choose which modules or themes to review.
  4. Review your results, and if needed, make the suggested changes to your code.

Further Reading

Best practices for Drupal code are well-documented on Drupal.org:

These are some other blog posts on the topic:

Do you use any other code review tools?
How do you use code review tools in your project process?

Dec 15 2015
Dec 15

Code reviews, or the practice of going over another’s code, have become an integral part of our team’s development workflow. There are several types of code reviews, but this article will focus on a few key scenarios where code reviews have served to significantly enhance our process and the quality of our work.

At Advomatic, we’ve been using code reviews for several years, but we’re always looking to change things up, try out new ideas, and learn from our strategic experimentation. Over the past year, we’ve been thinking hard about the purpose and value of code reviews, and how we can improve our process.  This is what we’ve come up with.

Quality Control

Quality control is the most common way we use code reviews. Every ticket that involves substantial* commits is code reviewed by a different developer on the team. Usually, we divide the reviews between the front-end and back-end teams, so developers are only responsible for their area of expertise. Sometimes, on small teams, we bring in developers from another team, just to provide reviews. These reviews regularly focus on a set of common standards, including:

  • Is this the right approach for the problem?  Maybe you know some trick that the author doesn’t.
  • Coding conventions/standards: Honestly, we don’t put too much weight on this.  Yes, we adopt Drupal coding conventions, but there’s no value to our clients in spending the time to add a missing space in if( or fix a comment line that went longer than 80 characters.
  • Proper documentation: are functions well commented? Is the purpose of the code explained or easily understandable, particularly for complex solutions or unique requirements? Can we understand why a bit of unusual code is doing what it’s doing?
  • Efficiency: is the code as concise as it can be? Is there any unnecessary duplication? Is it sufficiently modular?
  • Clean-up: has all experimental and debug code been removed?
  • Security: Is all output of user-generated-content properly checked/escaped? Are all potential breakage points properly handled?
  • Organization: is the code located in the proper module or directory? Does it leverage existing functionality?
  • Bugs: Are there edge-case bugs that you can see aren’t handled by the code?

Knowledge Sharing

One of the challenges of organizing a team is that our work can become siloed. If each person is working on a separate feature, they alone are familiar with the specifics when it comes time to make a change. By having team members review one another’s code before delivering it to the client, we help ensure that critical details are passed around to everyone.

Often, there is a team member with a particular skill or expertise and it makes sense to have them take on tasks in that area. With code reviews, junior developers can learn from their more experienced teammates and be ready to tackle a particular challenge on a future project.

The Hiring Process

We’ve recently incorporated code reviews into the interview process. We’ve long required that candidates give us a sample of their work and then perform internal reviews to evaluate their potential strengths and weaknesses. However, after flipping that process on its head, we’ve found we learn even more by having the job seeker review a flawed (usually in multiple ways) code sample. Not only does this help to reveal what the candidate knows, but it also illuminates how they communicate and what they might be like as a team member.

Best Practices

As you can imagine, who does the review and how they do it are essential ingredients for a successful outcome. It is crucial that everyone comes into the process with the proper attitude and communicates using an effective tone. For a senior team member, there’s a difficult balance between providing enough information to home in on a problem, while at the same time encouraging self-sufficient problem solving. In these cases, it helps to frame suggestions in terms of the desired outcome, rather than describing a specific solution. For example, instead of saying, “Make the database call happen outside of the foreach loop,” the reviewer might propose, “Could we improve performance by altering where the database call happens?”

On the topic of tone, it’s best to keep things friendly, provide lots of encouragement around the criticism, and focus on critiquing the code – not the coder. It is also important to consider how much to say and when to say it. If a junior developer is struggling with a solution, it can be discouraging to hear multiple comments about where the comment lines are wrapping and missing punctuation (not that those comments don’t sometimes have their place).

When the process is handled with careful intent, the code review process can be an effective tool for team building and knowledge sharing. Our tickets are regularly filled with responses like, “Great catch! I can’t believe I missed that,” and “Good suggestion, I’m on it!”

One thing to remember is that while this arrangement is working well for us, your team might need something different.  A two-person team building a website for a small non-profit has different needs from an 80-person team working on an online banking website. Find the balance that works for you.

What are the key ways you use code reviews in your process? Are there any important principles or best practices you’d add to the ones listed here?

Thanks to Dave Hansen-Lange for reviewing (and improving) this post.

* That’s right, we don’t code review everything. If there’s no value (particularly to the client) in having someone review the code (maybe only a few lines were changed, maybe the new code is self-evident), then we skip it.

Nov 19 2015
Nov 19

Need an array of user roles in your Drupal 8 site? Say goodbye to user_roles(), say hello to user_role_names().

This function will return an associative array with the role id as the key and the role name as the value.

/**
 * Retrieves the names of roles matching specified conditions.
 *
 * @link https://api.drupal.org/api/drupal/core!modules!user!user.module/function/user_role_names/8
 */
function user_role_names($membersonly = FALSE, $permission = NULL) {
  return array_map(function($item) {
    return $item->label();
  }, user_roles($membersonly, $permission));
}

For more information, see https://api.drupal.org/api/drupal/core!modules!user!user.module/function/user_role_names/8.
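As a quick illustration (a hypothetical form snippet – the form array and field name are made up), the returned array drops straight into a select element's options:

```php
<?php
// Hypothetical usage: build a role select list from user_role_names().
// Pass TRUE to exclude the anonymous role; pass a permission string as the
// second argument to limit the list to roles holding that permission.
$form['role'] = array(
  '#type' => 'select',
  '#title' => t('Role'),
  '#options' => user_role_names(TRUE),
);
```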


Author: Ben Marshall

Red Bull Addict, Self-Proclaimed Grill Master, Entrepreneur, Workaholic, Front End Engineer, SEO/SM Strategist, Web Developer, Blogger

Nov 18 2015
Nov 18

Looking for how to create new module permissions in Drupal 8? Look no further, $module.permissions.yml to the rescue!

In Drupal 8, permissions are defined in a $module.permissions.yml file instead of using hook_permission().

Drupal 8 Static Permissions

Create a new file in the root of your module folder and name it my_module.permissions.yml.

# In my_module.permissions.yml file.
access my_module settings:
  title: 'My module settings'
  description: 'A custom permission for your module settings page.'
  restrict access: TRUE
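Once defined, a permission like this is typically referenced from a route. Here is a hypothetical my_module.routing.yml sketch (the route name, path, controller, and permission string are all made up):

```yaml
# In my_module.routing.yml (hypothetical): gate a settings page by permission.
my_module.settings:
  path: '/admin/config/system/my-module'
  defaults:
    _controller: '\Drupal\my_module\Controller\SettingsController::build'
    _title: 'My module settings'
  requirements:
    _permission: 'access my_module settings'
```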

Drupal 8 Dynamic Permissions

In Drupal 8, you can support dynamic permissions by referencing a function that will dynamically define those permissions. This callback defines the permissions for core’s filter module.

# In filter.permissions.yml
permission_callbacks:
  - Drupal\filter\FilterPermissions::permissions
<?php
// in FilterPermissions.php

class FilterPermissions {
  public function permissions() {
    $permissions = [];
    // Generate permissions for each text format. Warn the administrator that
    // any of them are potentially unsafe.
    /** @var \Drupal\filter\FilterFormatInterface[] $formats */
    $formats = $this->entityManager->getStorage('filter_format')->loadByProperties(['status' => TRUE]);
    uasort($formats, 'Drupal\Core\Config\Entity\ConfigEntityBase::sort');
    foreach ($formats as $format) {
      if ($permission = $format->getPermissionName()) {
        $permissions[$permission] = [
          'title' => $this->t('Use the <a href="@url">@label</a> text format', ['@url' => $format->url(), '@label' => $format->label()]),
          'description' => String::placeholder($this->t('Warning: This permission may have security implications depending on how the text format is configured.')),
        ];
      }
    }
    return $permissions;
  }
}

For more information, see https://www.drupal.org/node/2311427.


Nov 17 2015
Nov 17

We just launched our first Drupal 8 website for the Northwest Atlantic Marine Alliance (NAMA). During our project retrospective, a few of us brought up how nice it was that so many contrib modules that were part of the D6 site weren’t necessary in Drupal 8 – this was a major factor in making it possible to launch this project before the actual D8 release.

The graphic below compares the contrib modules that were running on NAMA’s Drupal 6 site with the modules needed to achieve the same functionality in Drupal 8.

Having so many of the modules that we always install built into core gave us more time to focus on ways to optimize the site for NAMA’s specific needs, and it should also save us time down the road when it’s time to handle maintenance tasks (like security updates) or add on new features.

What are you most excited about having in core?

Which modules are you waiting to see in D8 before upgrading?

Nov 17 2015
Nov 17

It’s been a while since Drupal 8 was first ready to try out. Ever since then, I was checking out news, reading updated docs, working on my Drupal 7 projects… still waiting for something real. And it finally happened – during the keynote at DrupalCon Barcelona, Dries announced the first D8 release candidate and basically confirmed that it’s now ready for production sites!


And there my journey started: after all those weeks, I finally had a great opportunity to try out the real D8.

Getting started

You can download D8 here; its configuration is not difficult – everything is well described in the official docs, and I didn’t find any issues there. Just a small reminder: after applying configs in settings.php/services.yml (such as enabling Twig debug), don’t forget to run the new drush command for cache clearing: drush cache-rebuild

Speaking of drush: for Drupal 8 you will need to update Drush to version 8. If you want to keep both versions – for purposes like executing drush commands on a remote server that runs an older version – just put drush8 in another folder and create an alias (more info). That is an easy and rather straightforward way, but there is also an automated way to do it, which could be useful in some cases.

Folder structure

Looking into the folder structure – it is a bit different to D7, but pretty logical. The main differences: Drupal’s heart has been moved into the ‘core’ folder, and we no longer need to put themes and modules into the sites folder! These folders now live in the root, and from now on we should add custom/contrib subfolders there.

Opening my site – creating base content types and blocks. Blocks are entities now! I like it – thanks and goodbye, Bean module: we’re finally able to create different block types and add fields to them. Breadcrumbs, site name and slogan are blocks now (+1 to karma from non-developers, I think, as they no longer need to research what lives in code and how to move this info somewhere else). And we finally don’t need the title = ‘’ trick for hiding the title – a simple checkbox was added for this purpose!

Content

Content looks great (live editing on pages – this feature was even backported to D7 Quick Edit module, I guess it’s perfect for content editors), so I was pretty excited.

Deployment

Deployment should be as easy and comfortable as possible for whoever deploys a project – even if it’s not the person who worked on that particular site in the past. For this purpose we have Configuration Management – an interface for importing and exporting site configuration. And since almost everything (besides the content) is configuration, this tool means easy deployment and reusability (I think this is the fullest information base on CM). The Features module that we used in D7 is pretty heavy and was often avoided in small projects (e.g. those built by a single developer), sacrificing deployment convenience. But the new, better deployment workflow in D8 – in core, out of the box – sounds great, and means easy deployment for a site of any size!

Theming

Check out this introduction for a deeper look (I like this intro even a bit more than the official one).

I decided to try theming a few pages and checked which variables we have in preprocess_html. I looked for Devel – and it turned out there was no stable release, which didn’t sound too good. So while researching the Devel issue queue, I decided to configure PhpStorm with Xdebug for Drupal 8 (great instruction, thanks!), so even without Devel I was ready to work. It’s worth mentioning, though, that Devel’s beta worked fine in general. In D7 I usually used kpr($variables) from the Krumo submodule, and I found its D8 counterpart, krumo($variables), similar. Devel 8 was extended with different debugging tools; there is also one more submodule – Kint – with kint($variables), which looks really good and should take over the role of the now-unsupported Krumo.

No one has ever had any doubts that the Views module would be present in the next Drupal core, and of course it is in D8.

In general the core team does listen to devs’/users’ needs. A huge amount of great, often-used and, I would say, expected functionality was added in Drupal 8: the CKEditor module for rich text editing; Module Filter; Date, Link, Email, Phone and Reference field types; the Entity Cache module; custom view modes that can be created for any entity (bye, DS and custom code); admin pages that are mostly views now; and much more.

I was able to create all the necessary functionality for a site without any problems or a single line of custom module code (I wasn’t able to enjoy OOP in Drupal fully – but I definitely will), and I don’t find the barrier to entry for Drupal 8 much higher than D7’s – so newbies, go ahead: you have a lot of capability in core, and everything is pretty well documented and follows modern standards.

Theming took a bit more time than usual, as there is a new theme layer here: HTML5 markup, naming conventions, CSS architecture, and the template engine in general. Twig is great – a lot of people are already used to it, but beyond the general theme layer update in Drupal 8, it’s just a great pleasure to see improved and much cleaner template code, a reduced amount of duplication, and the ability for developers to extend base templates with necessary additions rather than completely replacing them as before. Add to that the easy syntax and the security: autoescaping by default, and DB calls and PHP scripts can’t run in a template.

Small tip: config your IDE for syntax highlighting and autocomplete of .twig templates, PhpStorm has native support.

Also the idea to have 0 theme functions in core by the time Drupal 8 is released sounds interesting.

The future is here


All these initiatives are great and finally Drupal is not so outdated and is using modern tools and best practices (we don’t need to excuse it because of its age anymore :-) ).

So, work on Drupal is in progress and it is well supported. D8 is really less “Drupal-specific” and closer to modern standards, frameworks (hey, Symfony is here) and OOP in general; I bet all of that will grow Drupal’s community with high-quality developers and, as a result, bring improvements. The core team and the community are strongly connected, devs and users are researching best practices and use cases, and Drupal 8 finally looks ready for production sites.

Tip: Don’t miss core changes – @drupal8changes

X-Team is hiring Drupal developers!
Click here to learn how to join the league of the extraordinary.

Oct 13 2015
Oct 13

Cowritten by Jack Haas and Amanda Luker

Celebrating the first release candidate for Drupal 8, the Advomatic team has been testing things out, diving into not-so-well documented (yet!) waters. Here’s a little tutorial for doing something that had us scratching our heads for a bit: adding responsive images in Drupal 8.

Here’s what you will need:

  • A Drupal 8 install (Breakpoint and Responsive Image modules are already in core)
  • Make sure Breakpoint module is enabled (already enabled by default in core)
  • Turn on the Responsive Image module

Setting up your breakpoints

Since Breakpoint module was added to D8 core, you can now define your theme’s breakpoints in code (and categorize them into breakpoint groups – for example, one group for image-sizing breakpoints and one for layout-related breakpoints). There is no UI for doing this, but Breakpoint module gives us an API that allows modules and themes to define breakpoints and breakpoint groups, as well as resolution modifiers that come in handy when targeting devices with HD/Retina displays.

To do this from your theme, simply create a yourthemename.breakpoints.yml file in the root of your theme directory. To keep this tutorial simple, we will create two device-agnostic breakpoints: “small” and “large.” Under normal circumstances we would use more.

yourthemename.small:
  label: small
  mediaQuery: '(min-width: 0px)'
  weight: 0
  multipliers:
    - 1x
    - 2x
yourthemename.large:
  label: large
  mediaQuery: 'all and (min-width: 960px)'
  weight: 1
  multipliers:
    - 1x
    - 2x

The weight of these is crucial so that the proper image styles get swapped in depending on viewport size and screen resolution. Here, breakpoints should be listed from smallest min-width to largest min-width. Modules, however, can reverse that order if needed, as the Responsive Image module does.

Also, note the “multipliers” section. Breakpoint allows for different pixel density multipliers for displaying crisper images on HD/Retina displays: 1x, 1.5x and 2x. More on that below.

Determine what image styles you will be using for each breakpoint

Depending on how many breakpoints you have, you will need to plot out what your image sizes will be for each breakpoint range. In our case, we’ll do a simplified version with just two image styles, one for each of our breakpoints.

As an aside, here’s an interesting presentation on doing the math for your responsive images before you even get to Drupal. The presenter, Marc Drummond from Lullabot, uses a rule of 25% to choose his image styles. For a hero image, for example, he starts at 320px (generally, your starting maximum width). Then he picks his next breakpoint at 320 × 1.25 = 400px, then 400 × 1.25 = 500px… on up through 1600px (tidying up the numbers a bit so they make more sense later).

Add your image styles at /admin/config/media/image-styles

Below is what I’ve chosen for my image styles, basing each on the largest width and height it will need to be before the next breakpoint snaps into place. I’m using a 16:9 aspect ratio across the board here to make crops, but yours may vary, based on the layout changes among breakpoints. These all use the “Scale and Crop” effect.

  • feature-small (0-959px): 738px x 415px
  • feature-large (960px & up): 938px x 528px

HD (and Retina) screens

At this point, you should seriously consider making a twice-as-large version of images for your high definition display users. For this step, you’ll need to make sure you have your breakpoints set up to accept 2x options (via the “multipliers” you set up in yourthemename.breakpoints.yml), and then add doubled-up image styles:

  • feature-small-2x (0-959px): 1476px x 830px
  • feature-large-2x (960px & up): 1876px x 1056px

Now I’ve got a slew of image styles all ready to go!


Intro to the Picture HTML element

Drupal 8 makes the big leap and uses HTML5’s picture element to display all our different image styles and pick the right one. It’s essentially a container that allows for multiple sources for a single image tag. Let’s see how it works.

Here’s a basic example of the picture element markup:

<picture>
  <source
    media="(min-width: 960px)"
    srcset="images/kitten-big.png,
      images/kitten-big@2x.png 2x">
  <source
    media="(min-width: 768px)"
    srcset="images/kitten-med.png,
      images/kitten-med@2x.png 2x">
  <img
    src="images/kitten-sm.png"
    srcset="images/kitten-sm@2x.png 2x"
    alt="a cute kitten">
</picture>

The source tag allows you to state your breakpoints and the image source to use in that case (with a regular and a 2x version). The img tag provides your fallback image — it will show when the others don’t apply. So the work we’ve done so far is basically providing all the parameters for the picture element that we’re building. Note that the source rules are listed from largest to smallest.

Next, create your responsive image style using those image styles at /admin/config/media/responsive-image-style

Now let’s put it all together in Drupal. Add a new responsive style, selecting the breakpoint group you are going to use here — in my case, I’m using my theme’s breakpoint group.


For each breakpoint, I’m choosing a single image style — but you can also choose to ignore that breakpoint, or even load multiple images styles based on different size criteria you specify.

Apply your responsive image style

In my case, I added a field to the Basic Page content type for the Feature Image. Set the Format Settings to “Responsive image” and select your new Responsive image style.


Now check your work. One thing that makes it a little more difficult is that the web inspector won’t tell you which version of the image is loading just by looking at the DOM; the picture element markup looks the same no matter what breakpoint you are in. Instead, you will need to peek at the browser’s web dev tools (such as the Network tab in Chrome) to see which version of the image has downloaded.

Here’s the wide version on a non-HD screen:

[Screenshot: Test Responsive Image Styles | Drupal 8 Test]

And the small version on an HD screen:

[Screenshot: Test Responsive Image Styles | Drupal 8 Test]

As you can see, this method can be as simple or as complex as you need it to be. But I think Drupal 8's new system provides the flexibility needed for the range of devices and breakpoints we have now.

Oct 09 2015

When migrating a site from Drupal 6 to Drupal 8, we had to write some very basic plugins. Since plugins and some of their related pieces are new to Drupal 8, here is a walk-through of how we put one together:

Use case

In Drupal 6, the contrib Date module provided a date field that had both a start and end date. So, the beginning of Crazy Dan’s Hot Air Balloon Weekend Extravaganza might be July 12, with an end date of July 14. However, the datetime module in Drupal 8 core does not allow for end dates. So, we had to use two distinct date fields on the new site: one for the start date, and one for the end date.

Fields in D6:
1.  field_date: has start and end date

Fields in D8:
1.  field_date_start: holds the start date
2.  field_date_end: holds the end date

Migration overview

A little background information before we move along: migrations use a series of sequential plugins to move your data: builder, source, process, and finally, destination.

Since we are moving data from one field into two, we had to write a custom migration process plugin. Process plugins are where you can manipulate the data that is being migrated.

Writing the process plugin (general structure)

The file system in Drupal 8 is organized very differently than in Drupal 7. Within your custom module, plugins will always go in [yourmoduledir]/src/Plugin. In this case, our migrate process plugin goes in [yourmodulename]/src/Plugin/migrate/process/CustomDate.php.

Here is the entire file, which we’ll break down below.

<?php
/**
 * @file
 * Contains \Drupal\custom_migrate\Plugin\migrate\process\CustomDate.
 */

Standard code comments.

namespace Drupal\custom_migrate\Plugin\migrate\process;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\Row;

Instead of pulling in PHP files with functions like include() or include_once(), we now reference classes by their namespaces, and Drupal's autoloader knows which files to load based on those namespaces. This way, if a class is ever moved to a new directory, we won't have to change code elsewhere, as long as the namespace stays the same. By declaring a namespace of our own, we allow our code to be used the same way.

/**
* This plugin converts Drupal 6 Date fields to Drupal 8.
*
* @MigrateProcessPlugin(
*   id = "custom_date"
* )
*/

This class comment includes an annotation. When the Migrate module is looking for all available migration plugins, it scans the file system, looking for annotations like this. By including it, you let the migration module discover your migrate process plugin with the unique id ‘custom_date’.

Our new class will inherit from the ProcessPluginBase class, which is provided by the Migrate module in core. Let's step back and look at that class. This is its definition:

abstract class ProcessPluginBase extends PluginBase implements MigrateProcessInterface { ... }

Since this is an abstract class, it can never be instantiated by itself, so you never call new ProcessPluginBase(). Instead, we create our own class that inherits from it, using the keyword extends:

class CustomDate extends ProcessPluginBase { ... }

The ProcessPluginBase class has two public methods, which will be available in child classes, unless the child class overrides the methods. In our case, we will override transform(), but leave multiple() alone, inheriting the default implementation of that one.

(A note about abstract classes: If there were any methods defined as abstract, our child class would be required to implement them. But we don’t have to worry about that in this case!) To override transform() and create our own logic, we just copy the method signature from the parent class:

public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property)

Writing the process plugin (our specific data manipulation)

Before writing the custom logic, let's review our date use case again. This is our mapping:

OLD D6 field_date (from component) -> NEW D8 field_date_from
OLD D6 field_date (to component) -> NEW D8 field_date_to

We will migrate field_date twice per node. The first time, we will pull the from date. The second time, we will pull the to date. Since our process plugin needs to be aware of which piece we’re looking for in that particular run, we will allow the process plugin to have additional configuration. In our case, we will call this configuration date_part, which can be either from or to, and defaults to from:

$date_part = isset($this->configuration['date_part']) ? $this->configuration['date_part'] : 'from';

Depending on which date part we’re looking for, we’ll grab the appropriate date from the D6 source field, which is stored in the array $value.

$value = ($date_part == 'from') ? $value['value'] : $value['value2'];

And we’ll return the string, which will populate the new field:

return $value;

That’s it for writing our process plugin! Now we just have to use it.
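Putting the fragments above back together, the complete CustomDate.php looks like this:

```php
<?php
/**
 * @file
 * Contains \Drupal\custom_migrate\Plugin\migrate\process\CustomDate.
 */

namespace Drupal\custom_migrate\Plugin\migrate\process;

use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\Row;

/**
 * This plugin converts Drupal 6 Date fields to Drupal 8.
 *
 * @MigrateProcessPlugin(
 *   id = "custom_date"
 * )
 */
class CustomDate extends ProcessPluginBase {

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    // Which part of the D6 date are we migrating? Defaults to 'from'.
    $date_part = isset($this->configuration['date_part']) ? $this->configuration['date_part'] : 'from';
    // The D6 Date field stores the start date in 'value' and the
    // end date in 'value2'.
    $value = ($date_part == 'from') ? $value['value'] : $value['value2'];
    return $value;
  }

}
```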

Using the process plugin

In our migration definition, we need to call this plugin and feed it the correct information. So back in [yourmodulename]/config/install/migrate.migration.d6_node.yml, we map the new and old fields:

field_date_start:
  plugin: custom_date
  source: field_date
  date_part: from
field_date_end:
  plugin: custom_date
  source: field_date
  date_part: to

Which reads like this: For field_date_start on the new site, pass the old field_date to the custom_date process plugin, with the configuration date_part = ‘from’. Do this all again for field_date_end, but with the configuration date_part = ‘to’. Both of our D8 fields get filled out, each getting its data from a single field on the old D6 site.


Time to fly! (Image courtesy of Wikimedia)


Feedback

Hopefully this helps. If you have any corrections, improvements, questions, or links to how you use plugins, leave them in the comments!

Oct 07 2015

This morning, during my usual scan of Feedly/Twitter/Reddit I read Secure the data of visitors on your Drupal website better. The post shows you how to use the Field encryption module.

The Field encryption module ensures that the values stored in the Drupal database are encrypted. When the database ends up in the wrong hands, then nobody can read the data since this module has encrypted it. This way, you are prepared for a worse case scenario.

It all seems straightforward enough, but I suspected it wouldn't be so simple; in fact, it doesn't look as secure as purported. My main concern was with the three options presented for storing the private key used to encrypt data. So I asked on Twitter:

How secure is this if Drupal needs to read the key? http://t.co/q8wzlLy2dM /cc @enygma @Awnage @ircmaxell

— Oscar Merida (@omerida) October 7, 2015

Now, this post isn’t meant to denigrate the work of that project or to completely discount the advice on openlucius.com. Let’s see why some of the ways to store the key are problematic.

@enygma @omerida @Awnage the key is stored in the db? o_O

— Anthony Ferrara (@ircmaxell) October 7, 2015

The first option is to store the key in the database, and at least the article recommends against selecting that. If your key is in the database, and a malicious attacker manages to steal your database or find some way to read its contents via SQL injection, they will have your key. With the key, nothing will stop them from decrypting your data.

The second option is to specify it as a variable in settings.php. The key is a little harder to get but is only a variable_get('encrypt_drupal_variable_key') call away. Since settings.php is in the public web root, misconfiguring PHP or having PHP files show the source code will leak your key too. Finally, if you’re committing your settings files to git or SVN (hint: you shouldn’t be), anyone with access to your repository will also have your key.
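To make the second option concrete, here is a Drupal 7 sketch (the variable name is taken from the variable_get() call quoted above; the value is illustrative):

```php
<?php
// In Drupal 7's settings.php, a $conf override like this defines the key:
$conf['encrypt_drupal_variable_key'] = 'not-so-secret-key';

// ...and anywhere in Drupal, one call retrieves it:
$key = variable_get('encrypt_drupal_variable_key');
```

Any code running inside the site, including a compromised module, can make that same call.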

@omerida imho they should only offer the last option. @ircmaxell @Awnage

— Chris Cornutt (@enygma) October 7, 2015

The final option, using a file, should be the recommended way to specify the key. Ideally, the file lives somewhere outside of your web root and, again, not in your code repository.

Only in option 1 is your key vulnerable to an SQL injection attack. For the other two options, an attacker would have to gain access to your code to get your key. Given how Drupal 7 stores routes in the database, all it takes is an SQLi vulnerability in another module or in core itself and someone could install a back door or shell on your site.

@omerida @enygma @Awnage depends on a lot of factors. If done correctly (haven't looked yet), could make SQLi virtually useless by itself.

— Anthony Ferrara (@ircmaxell) October 7, 2015

No matter how you store it, if you have the PHP module enabled, anyone who can build a view or execute PHP code from the Drupal UI can retrieve your key. There’s also a temptation to share the key across development, testing, and production environments so that your database snapshots are portable between them all.

Others brought up issues as well. The original module author added a comment highlighting that the Field Encryption module is still marked as beta; its most recent release was in 2013. There are also better key management solutions for Drupal.

@enygma @ircmaxell @omerida @Awnage the default options are bad, bad, and bad. Modules like townsec_key & key attempt to provide real KMS

— Cash Williams (@cashwilliams) October 7, 2015

There are also better encryption algorithms than the ones the module uses.

@ircmaxell @enygma @omerida @Awnage It's worse than that, the suggested plugin Encrypt, uses ECB & mcrypt with no authentication. *shrug*

— Ashley Pinner (@NeoThermic) October 7, 2015

Security is a Continual Process

This illustrates that security is a continual process, with a lot of considerations to take into account. It's not as easy as installing a single module or ticking a box on a checklist. If you're storing really sensitive user data, ask yourself if you really need it. If that data is credit card information, get to know what it takes to be PCI compliant, then ask if you aren't better off using a payment processor instead. But please, don't be lulled into a false sense of security after adding a single component.
