Oct 21 2019

The 2019 Drupal South sprint is shaping up to be the biggest contribution event in the Australia-Pacific region since Drupalcon Sydney 2013.

This year, core contributors with over 3,000 commit credits between them will be in attendance, including 3 members of the core committer team, 3 members of the Drupal security team, and 7 core module/subsystem maintainers, as well as maintainers of major contrib modules and command-line tools.

With Drupal 9 just around the corner, this will be a great chance to help the community get popular modules ready for Drupal 9, meet some great people and help shape the future of Drupal.

The PreviousNext team are sponsoring and helping to run the sprint day on Wednesday, November 27th 2019, and there are a few things you can do now to hit the ground running on the day.

What's a Sprint Day about anyway?

Contribution Sprints are a great opportunity to get involved in contributing to Drupal. Contributions don't have to be just about code. Issue triage, documentation, and manual testing are examples of non-code contributions.

If you are new to contributing, you can take a look at the New Contributor tasks on the Drupal.org Contributor Tasks page.

While there will be experienced contributors there on the day to help, keep in mind, this is not a training session. :-)

Set Up a Development Environment

There is more than one way to shear a sheep, and there is also more than one way to set up a local development environment for working on Drupal.

If you don't already have a local development environment set up, we recommend using Docker Compose for local development - follow the instructions for installing Docker Compose on OSX, Windows and Linux.

Once you've set up Docker Compose, you need to set up a folder containing your docker-compose.yml and a clone of Drupal core. The instructions for that vary depending on your operating system; we have instructions below for OSX, Windows and Linux, although please note the Windows version is untested.

Mac OSX

mkdir -p ~/dev/drupal
cd ~/dev/drupal
wget https://gist.githubusercontent.com/larowlan/9ba2c569fd52e8ac12aee962cc9319c9/raw/ef35764c2bf60b07996fdc57c747c3c99a855b80/docker-compose.yml
git clone --branch 8.9.x https://git.drupalcode.org/project/drupal.git app
docker-compose up -d
docker-compose run -w /data/app app composer install

Windows

git clone --branch 8.9.x https://git.drupalcode.org/project/drupal.git app
docker-compose up -d
docker-compose run -w /data/app app composer install

Linux

mkdir -p ~/dev/drupal # or wherever you want to put the folder
cd ~/dev/drupal
wget https://gist.githubusercontent.com/larowlan/63a0f6efacee71b483af3a2184178dd0/raw/248dff13557efa533c0ca297d39c87cd3eb348fe/docker-compose.yml
git clone --branch 8.9.x https://git.drupalcode.org/project/drupal.git app
docker-compose up -d
docker-compose exec app /bin/bash -c "cd /data/app && composer install"

If you have any issues, join us on Drupal Slack in the #australia-nz channel beforehand and we'll be happy to answer any questions you might have.

Find Issues to Work On

If you want to see what might be an interesting issue to work on, head over to the Drupal.org Issue Queue and look for issues tagged with 'DrupalSouth 2019'. These are issues that others have tagged.

You can also tag an issue yourself to be added to the list.

Being face-to-face with fellow contributors is a great opportunity to have discussions and put forward ideas. Don't feel like you need to come away from the day having completed lines and lines of code.

We look forward to seeing you all there!


Posted by Lee Rowlands
Senior Drupal Developer

Dated 21 October 2019


Oct 08 2019

Skpr - pronounced Skipper - is a cloud hosting platform specifically designed to maximise the productivity of development teams by giving them full control right from the command line.

During our consulting engagements with large organisations, we recognised a clear trend: they were moving away from narrow, single-site hosting services and building bespoke platforms on top of Kubernetes to support their multi-site, multi-technology initiatives.

Back in 2016 we had this exact need for hosting our entire portfolio of sites. Throughout this journey we found that providing developers with a simple Command Line Interface (CLI) has led to huge improvements in our team's efficiency and the overall quality of our products.

So, today we're announcing the public launch of our hosting platform, Skpr: the platform for teams who want a simple command line tool, backed by a range of industry-leading services and supported by our own team of experts.

Why Skpr is different

Many hosting platforms provide a web interface where deployments can be dragged-and-dropped between environments.

While these solutions are more effective for non-developers, they fall short in integration and extendability within the workflow of the developers actually doing the job. Having a Command Line Interface (CLI) means that not only do we provide the same level of control, we also provide the flexibility to extend those workflows.

  • Scripts - Having a CLI means that Skpr can integrate into existing automation, along with CI tools such as CircleCI.
  • Documentation - Complex tasks carried out via a GUI are very difficult to document. CLIs mean you spend less time describing a user interface and more time documenting the actual process.

Control on Command

With a few commands, developers have the control to package, deploy, configure and monitor their services right from the command line.

And while we want to provide a platform that's powerful, reliable and secure, we're passionate about making it easy-to-use as well.

To find out more, visit skpr.io.


Sep 24 2019

Drupal 8.8.0 will be released in December 2019 and the upcoming changes in JSON:API module codebase introduce huge performance benefits.

Here are three things that demonstrate this:

1. Recent patches committed to JSON:API in Drupal 8.8

https://www.drupal.org/project/drupal/issues/3039730 is a simple issue that statically caches the resource type information for a relationship when related entities are requested, so that when multiple entities of the same entity type and bundle are requested, the resource type information doesn't have to be collected for the related entities over and over again.
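
To illustrate the general idea, here is a simplified sketch of that memoization pattern. It is not the actual JSON:API code; the function name and the expensive lookup it calls are hypothetical.

<?php

/**
 * Illustrative only: a static (per-request) cache of resource type info.
 */
function example_get_resource_type($entity_type_id, $bundle) {
  // Results are keyed by entity type and bundle, so repeated lookups for the
  // same combination within a request are served from memory.
  static $cache = [];
  $key = $entity_type_id . ':' . $bundle;
  if (!isset($cache[$key])) {
    // The expensive lookup (hypothetical helper) now runs only once per
    // entity type and bundle per request.
    $cache[$key] = example_expensive_resource_type_lookup($entity_type_id, $bundle);
  }
  return $cache[$key];
}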

https://www.drupal.org/project/drupal/issues/2819335 adds a cache layer to store normalized entities, so that if we need the normalized version of an entity we can just get it from the cache instead of normalizing the whole entity again, which can be a very expensive process.
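
The pattern here is essentially a look-aside cache around the expensive normalization step. Below is a rough sketch of that pattern using Drupal's cache API; the cache bin, cache ID scheme and serializer call are illustrative, not the actual JSON:API implementation.

<?php

use Drupal\Core\Cache\Cache;
use Drupal\Core\Entity\EntityInterface;

/**
 * Illustrative only: fetch a normalization from cache, or build and store it.
 */
function example_get_normalization(EntityInterface $entity) {
  $cid = 'example_normalization:' . $entity->getEntityTypeId() . ':' . $entity->id();
  $cache = \Drupal::cache('default');

  if ($cached = $cache->get($cid)) {
    return $cached->data;
  }

  // Normalizing is expensive, so do it once and store the result, tagged so
  // it is invalidated whenever the entity changes.
  $normalization = \Drupal::service('serializer')->normalize($entity, 'api_json');
  $cache->set($cid, $normalization, Cache::PERMANENT, $entity->getCacheTags());
  return $normalization;
}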

https://www.drupal.org/project/drupal/issues/3018287 introduces a new cache backend to store JSON:API resource type information, which was previously stored only in the static cache. This means that instead of creating JSON:API resource types on every request, we are just creating them once after a cache clear.

2. Profiling using blackfire.io

I was able to do some profiling to compare the JSON:API core module in Drupal 8.7 versus 8.8. Here are the initial conditions:

  • PHP 7.3
  • JSON:API version 8.7
  • No JSON:API Extras
  • Page Cache module disabled.
  • Dynamic Page Cache module is set to cache.backend.null, which forces a 100% cache miss rate.
  • Cleared all caches.
  • Visit user login page to rebuild the container and essential services.

Case I

Visit the first JSON:API endpoint which loads 50 nodes with 8 fields, 2 computed fields, 2 filters, and sorted by title.

JSON:API 8.7 - URL 1

Case II

Visit the second JSON:API endpoint which loads 2 nodes with 45 paragraph fields, each paragraph field has 6 fields and 2 computed fields, 1 filter.

JSON:API 8.7 - URL 2

Then update JSON:API to 8.8; all other initial conditions were the same as before.

Case I

Visit the first JSON:API endpoint which loads 50 nodes with 8 fields, 2 computed fields, 2 filters, and sorted by title.

JSON:API 8.8 - URL 1

Case II

Visit the second JSON:API endpoint which loads 2 nodes with 45 paragraph fields, each paragraph field with 6 fields and 2 computed fields, 1 filter.

JSON:API 8.8 - URL 2

Comparison:

Case I

The comparison shows 79% improvement in response time.

URL1 comparison from JSON:API 8.7 to JSON:API 8.8

There are 39 more SQL queries on JSON:API in Drupal 8.8.

A detailed look at those shows additional calls to the new cache bin added by JSON:API, but most importantly there are 50 fewer queries to the url_alias table.

URL1 query comparison from JSON:API 8.7 to JSON:API 8.8

The function call counts also show fewer calls to the Entity API and normalizers.

URL1 function comparison from JSON:API 8.7 to JSON:API 8.8

Case II

The comparison shows 66% improvement in response time.

URL2 comparison from JSON:API 8.7 to JSON:API 8.8

There are 35 more SQL queries on JSON:API in Drupal 8.8.

These are the same additional calls to the new cache bin.

URL2 query comparison from JSON:API 8.7 to JSON:API 8.8

The function call counts also show fewer calls to the Entity API and normalizers, same as before.

URL2 function comparison from JSON:API 8.7 to JSON:API 8.8

I ran the same scenarios with Redis cache backends instead of the default database backends. The results show the same kind of improvements.

3. Raw response comparison:

What matters is how this all plays out on the website.

JSON:API 8.7 first page load on cold cache

JSON:API 8.8 first page load on cold cache

          Before     After      Improvement
URL1      2.6 sec    1.3 sec    2x faster
URL2      4.5 sec    1.8 sec    2.7x faster
URL3      7.7 sec    2.5 sec    3.1x faster
URL4      7.5 sec    2.4 sec    3.1x faster
URL5      7.2 sec    2.5 sec    2.9x faster
Overall   10.3 sec   3.8 sec    2.7x faster

Conclusion:

In short, JSON:API in Drupal 8.8 is going to be significantly faster than its predecessor!

Improving performance like this takes enormous effort, and it was a community accomplishment. Special thanks to @ndobromirov, @kristiaanvandeneynde, @itsekhmistro, and last but not least the hardworking maintainers of the JSON:API module @e0ipso, @gabesullice, and @Wim Leers; without their work, support and guidance this would not have been possible. Please give them a shoutout on Twitter or come say ‘hi’ in the Drupal Slack #contenta channel. If you are interested in JSON:API and its performance, please feel free to help out at https://www.drupal.org/project/issues/search?status%5B%5D=Open&issue_tags_op=all+of&issue_tags=Performance%2C+API-First+Initiative.

Thanks to @Wim Leers for feedback on this post!


Posted by Jibran Ijaz
Senior Drupal Developer

Dated 24 September 2019


Sep 23 2019

It's a Monday morning and you push your first bit of code for the week. Suddenly all your Javascript tests start failing with crazy errors you've never seen before! And it's not just happening on one project! This post will hopefully help you track down the fix to the Bad Message 400 errors plaguing WebDriver.

Here at PreviousNext, we have automated processes to ensure our PHP images are updated on a weekly basis. On September 22nd 2019, that update included a version bump of the curl library from 7.65.1 to 7.66.0. This had a cascading effect which resulted in builds across all of our projects failing JavaScript tests run against selenium/standalone-chrome containers.

The errors looked something like this:

WebDriver\Exception\CurlExec: Webdriver http error: 400, payload :<h1>Bad Message 400</h1><pre>reason: Bad Content-Length</pre>

We were able to compare an old version of the PHP image (from a week ago) and track down that version change in CURL. But why was that failing? We didn't want to just go about pinning curl back to the old version and dusting our hands off.

Let's dive into the void (stacktrace)

WebDriver\Exception\CurlExec: Webdriver http error: 400, payload :<h1>Bad Message 400</h1><pre>reason: Bad Content-Length</pre>

/data/vendor/instaclick/php-webdriver/lib/WebDriver/Exception.php:155
/data/vendor/instaclick/php-webdriver/lib/WebDriver/AbstractWebDriver.php:132
/data/vendor/instaclick/php-webdriver/lib/WebDriver/AbstractWebDriver.php:218
/data/vendor/instaclick/php-webdriver/lib/WebDriver/Container.php:224
/data/vendor/behat/mink-selenium2-driver/src/Selenium2Driver.php:781
/data/vendor/behat/mink-selenium2-driver/src/Selenium2Driver.php:769
/data/vendor/behat/mink/src/Element/NodeElement.php:153

When inspecting the code through the above trace, we found that the instaclick/php-webdriver library was responsible for issuing the actual cURL command and was throwing the exception.

Op to the Rescue

When looking through the recent commits of the library, Nick Schuch noticed a suspicious commit that sounded a bit fishy. Sure enough, manually applying those changes got all the tests green again!

But how do we fix it for good? That's where it gets a bit tricky due to some composer constraints (as per usual).

Unfortunately the instaclick/php-webdriver library's HEAD is quite far aHEAD of the latest stable release (1.4.5), and we aren't able to simply bump to dev in our composer file due to behat/mink-selenium2-driver (a Drupal core dev requirement) constraining us to 1.x.

The Fix

The easiest approach for now is to commit a patch file to your repository and apply it locally until the maintainer tags a new stable release.

First download a custom patch file I've prepared against 1.4.5:

wget https://gist.githubusercontent.com/acbramley/c2809699c4dbf1774a14d89722743395/raw/e2c1479a73e7e9faff200802671bf982b6f3ac56/gistfile1.txt -O instaclick-curl-fix.patch

Then patch the library (using cweagans/composer-patches) with the new patch file by adding the following to the patches key in your composer.json:

"instaclick/php-webdriver": { "fix cURL POST": "instaclick-curl-fix.patch" }

Then simply run composer update instaclick/php-webdriver.


Posted by Adam Bramley
Senior Drupal Developer

Dated 23 September 2019


Sep 13 2019

One of the increasingly popular architectural paradigms that Drupal has been seen as a leader in is the concept of a single Drupal software package that can be spun up to power networks of websites with varying degrees of commonality. This is usually driven by the ambitious goal of being able to code and configure Drupal once, and then leverage that effort as either an entire platform or a foundation for many "networked" sites.

Beginning down the path of starting a project like this is complex, and unfortunately isn't helped by some of Drupal's (confusingly named) features which describe aspects of reusability but aren't in themselves a fully fledged approach to architecting such a network. In addition to that, there are many misconceptions about Drupal's capabilities and affordances when it comes to building such networks.

In order to try and expose some of the discovery and decision-making process behind starting an ambitious Drupal network project, the following is a non-exhaustive list of popular architectural paradigms, evaluated on the following axes:

  • Up-front investment: the up-front cost of starting a network of sites.
  • Per-unit investment: the cost of introducing a new site to the network
  • Flexibility: the ability to customise and create bespoke experiences within each network site
  • Platform maintainability: the ability to iterate and evolve the network as a whole

As with all complex projects, there are a large number of requirements and constraints which factor into technical decision making, so these approaches are a broad view of the landscape of Drupal's capabilities.

Models of networked sites

Starter-kits

A starter-kit consists of creating a Drupal distribution or install profile, catering to as much common functionality as possible across the network in an initial development phase and then allowing each new website to make any additional required customisations as needed. These customisations may consist of writing additional code, enabling new dependencies and modifying the configuration shipped with the initial distribution.

For each individual site, this model affords the most flexibility. By allowing each site to evolve independently any new requirements or features perceived as bespoke can be implemented and deployed without making consideration to the starter-kit itself or other websites within the network.

The major drawback of this approach is maintaining and evolving the network of sites as a whole. Each new site in the network creates a new deployment with its own configuration, dependencies and code, meaning new features and bug fixes can't be deployed across the whole network without specific individual effort and conflict resolution for each site. In practice, once a site is live under this model, it can effectively be considered a siloed project without a significant relationship to other sites in the network.

How feature-rich an initial starter kit is largely depends on the project. For example, early versions of aGov 8, the starter-kit distribution PreviousNext built and maintained for Australian government organisations, were intentionally fairly rudimentary in the number of content types they shipped with. The goal was a foundation to launch you into best practices, without being overly prescriptive. When future iterations of aGov were released that baked in Drupal 8's new media capabilities, it was not possible to deploy them to all existing installations.

In a similar vein, I would classify govCMS8, the Australian government's own Drupal distribution, under this same model. By default, the distribution ships with a lot more features and a lot more ready-to-go configuration than most starter-kits; however, both SaaS and PaaS deployments of govCMS allow a wide scope of deep structural configuration changes to Drupal, which essentially sets up each project to function as a standalone unit after the initial build.

Products

Another less widespread approach is the product model. Under this model, a Drupal distribution is leveraged as the full feature set for all sites in the network and all sites make use of the initial and ongoing development roadmap of the product. This approach is arguably less flexible, since each individual site doesn't have unfettered access to extend and manipulate the platform.

The major advantage of this approach is that a single team can scale their ongoing enhancement and maintenance efforts to the entire network of sites, regardless of the number of sites in the network.

Under the product model, since the feature set running on each site is a known and strictly defined set of configuration and dependencies, a team could feasibly migrate hundreds of sites to using new features of Drupal by working on a single software product. All sites would be the target of all new evolutions of the platform and benefit from its ongoing maintenance. Evolutions of the platform would not strictly be limited to features, but would also include updating dependencies or moving to new major versions of Drupal.

An example of a project PreviousNext delivered under this model was a video content platform for a group of media organisations. Each organisation could leverage the product to spin up a website to share their videos and engage with their audience. New features were added regularly and, at its height, 26 different websites serving vastly different audiences of users would evolve together. One of the challenges of maintaining a product distribution is the governance around flexibility and change. While it is tempting to allow each site to develop its own product roadmap, when a site required its own bespoke features, the following process would be followed:

  • The site owner raises a request for a change to their website: "please replace the hero image on the homepage with a slideshow of images".
  • The team evaluates the request and places it on the product roadmap.
  • Instead of replacing all hero images with slideshows, the feature is developed as an optional choice for content editors: you may either upload slides or a hero image.
  • The feature would be built, tested and deployed to all sites in the network.
  • The site owner is then able to upload slides and all other site owners in the network have the same capability.

This approach certainly takes a level of control and focused organisational effort to accomplish; however, leveraged properly, it can have significant payoffs for the maintainability of a network of sites as a whole. Examples of product-based distributions in the open source Drupal ecosystem are Open Social and Drupal Commons.

Federated back-ends

Another approach to building out a network of sites is the notion of a federated back-end. This dovetails with terms like "service oriented architecture", "multi-tenancy" or "content hub", where a single deployment and instance of Drupal is leveraged as a large repository of content for multiple web front-ends.

Under this model, instead of the boundary between users and content being defined by different databases and deployments of Drupal, it must instead be implemented in the application layer. That is, Drupal itself is customised and tailored to meet the access needs of the organisations sharing the same Drupal site. While this is certainly additional work and complexity, if a single group of content authors is responsible for content across the whole network, it can be advantageous to lower these barriers. Maintenance for the content hub is also fairly light touch under this model, since only one installation needs to be updated and maintained.
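
As a toy illustration of enforcing such a boundary in the application layer, the sketch below forbids users from updating content that belongs to a different organisation. The field names and logic are entirely hypothetical; they simply show where this kind of rule would live.

<?php

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Session\AccountInterface;
use Drupal\node\NodeInterface;
use Drupal\user\Entity\User;

/**
 * Implements hook_node_access().
 */
function example_hub_node_access(NodeInterface $node, $op, AccountInterface $account) {
  // Only enforce the boundary for updates on content tagged with an
  // organisation (hypothetical field_organisation reference field).
  if ($op !== 'update' || !$node->hasField('field_organisation')) {
    return AccessResult::neutral();
  }
  $user = User::load($account->id());
  $same_org = $user->hasField('field_organisation')
    && $user->field_organisation->target_id === $node->field_organisation->target_id;
  return AccessResult::forbiddenIf(!$same_org)
    ->addCacheableDependency($node)
    ->cachePerUser();
}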

This pattern also intersects with the "product" style of application. Since all web properties are powered by the same underlying application, they closely share a feature set and product roadmap. While this model is also often deployed in conjunction with a decoupled front-end, Drupal sites are capable of delivering integrated front-ends to network sites from a single federated back-end. In some cases, the federated model has an elevated risk of being a single point of failure, given a single deployment and instance is responsible for sites in the network.

An example of the federated back-end model can be illustrated in the "Tide" component of vic.gov.au's "Single Digital Presence" project. Drupal 8 is deployed as a single instance serving multiple decoupled front-ends. The features of the single content repository are documented and available for evaluation by prospective users. 

Independent sites

One option that often isn't considered by organisations when evaluating reusability of features and effort across a network of Drupal sites is simply building multiple completely unrelated sites and using smaller units of functionality as the mechanism for reuse. Drupal has a mature concept for sharing functionality across Drupal sites: the humble module.

While this approach doesn't strictly fit the theme of this blog post, in some cases writing and publishing a general-purpose module, one which doesn't boil the ocean by dictating the tools, approach and features used to build a new Drupal site, is the best solution.

These kinds of projects would be driven by websites that are mostly unique, with various related and unrelated feature sets. This is also an approach consistent with day-to-day Drupal development outside the scope of network projects. With Drupal's open ecosystem, opportunities for collaboration and reuse often drive building functionality in reusable modules and publishing those modules on drupal.org.

Common misconceptions

Install profiles

Drupal "profiles" or "distributions" are not a silver bullet for developing and organising an architecture for networks of Drupal sites. They are incredibly flexible, so how they are deployed and leveraged still contain the same governance and architectural decisions discussed.

Configuration management

Configuration management is not a silver bullet. While configuration management has simplified a range of complex deployment problems that were present in Drupal 7 sites, it doesn't drive a particular architecture for networks of sites. The tools in Drupal core are continually getting sharper and while innovations in contrib have experimented with new mechanisms for deploying and updating configuration, it's not an end-to-end architectural solution for network sites.

Multisite

The multisite concept in Drupal is frequently misunderstood as an approach for building a network of sites. In reality, multisites are a tool for deploying any configuration or setup of Drupal sites to a shared document root. It doesn't produce any tangible outcome as far as project architecture is concerned beyond forcing multiple sites to be hosted on the same server.

Headless Drupal & JavaScript frameworks

While some of these approaches, like the "federated back-end" are significantly benefited by a decoupled front-end, headless Drupal is compatible with all models of network sites. You could build a product or starter-kit that was either fully integrated, progressively decoupled or completely decoupled and the same back-end architectural decisions would apply.

Drupal has strong API-based functionality, which can be enabled and configured as required. Evaluating and selecting frameworks or front-ends to consume Drupal comes with its own set of considerations that need to be carefully weighed for fitness in any given project.

Styling and appearance

Styleguide-driven development has largely matured to solve issues with reusability, inheritance and extensibility of visual components. This approach has been the foundation for all new sites built by PreviousNext for the last few years; see our blog posts. By building components and style guides, duplication of effort when building front-ends has been minimised across both traditional and network-based projects. For that reason, the visual style and consistency of sites within a network is not necessarily a factor when considering Drupal architectural paradigms.

Summing up

Given the size of Drupal's ecosystem and the increasingly rapid pace of evolution, describing all of the factors and challenges that play into a large network site project is difficult. As always, rigorous discovery and a deep understanding of a project's goals and requirements should be the first step in any technical project.


Posted by Sam Becker
Senior Developer

Dated 13 September 2019

Comments

It should be mentioned that if Aegir (https://www.aegirproject.org/) is used for the hosting, management, provisioning, etc. of sites in any of the above scenarios, it'll save a lot of trouble. For example, all sites running on the same platform (Aegir-speak for a Drupal codebase) can be upgraded with a single button click, with rollback on any failures.



Aug 09 2019

Scheduled Transitions is a module allowing you to schedule a specific previously saved revision to move from one state to another. This post provides an introduction to Scheduled Transitions for Drupal 8.

For example, an editor may edit a piece of content, keeping it in a draft state throughout the drafting process. When ready, the editor may select the ready revision to be moved from draft to published.

A more complex use case involves the following workflow: Draft -> Needs Review -> Approved -> Published -> Archived. A Content Editor could edit a piece of content until it is in Needs Review status, then a Content Moderator approves the content by setting the state to Approved. The Content Moderator would then set up a scheduled transition for when the content should move from Approved to Published at some point in the future. If the content is time sensitive, another future scheduled transition could be created to automatically change it from Published to Archived.

Scheduled Transitions integrates tightly with Content Moderation and Workflows, inheriting transitions, states, and associated permissions automatically.

This post and accompanying video cover configuration and general usage.

Video

[embedded content]

Another shorter version of the video is available without site building aspects, ready to be shared with an editorial team.

Dependencies

Requirements and dependencies are fairly bleeding edge, but will change in the future. As of posting they are:

Installation

Download and install the module using your favourite method:

composer require drupal/scheduled_transitions
drush pm:enable scheduled_transitions # or
drupal module:install scheduled_transitions

Configuration

Configure Workflows

If you have not already created a workflow, navigate to Configuration -> Workflows and click the Add workflow button.

Create a label, select Content moderation from the Workflow type dropdown.

Set up states and the transitions between them in any way you desire, and set which entity type bundles the workflow should apply to.

Configure Scheduled Transitions

Navigate to Configuration » Scheduled Transitions

Under the Enabled types heading, select the entity type bundles to enable Scheduled transitions on. Save the form.

Scheduled Transitions: Settings

User permissions

Navigate to People » Permissions.

Under the Content Moderation heading, enable all workflow transition permissions that apply. Under the Scheduled Transitions heading, enable the Add scheduled transitions and View scheduled transitions permissions that apply. These permissions apply to individual entities; in addition to these permissions, users must also have access to edit the individual entities. Make sure you grant any permissions needed for users to edit the entities, for example nodes require the Edit any content or Edit own content permissions.

General Usage

Moving on to day-to-day functionality of Scheduled Transitions.

Navigate to a pre-existing entity. Though nodes are shown in the examples below, Scheduled Transitions works with any revisionable entity type, such as block content, terms, or custom entity types.

You'll find the Scheduled Transitions tab, with a counter in the tab indicating how many transitions are scheduled for the entity and translation being viewed.

Scheduled Transitions: Tab

Clicking the tab will send you to a listing of all scheduled transitions for an entity.

If the user has permission, an Add Scheduled transitions button will be visible.

Scheduled Transitions: List

Clicking the button presents a modal form. The form displays a list of all revisions for the entity or translation.

Scheduled Transitions: Modal

Click the radio next to the revision you wish to schedule for state change.

After the radio is selected, the form will reload, showing valid workflow transitions from the selected revision's state.

The user selects which transition is to be executed, along with the date and time the transition should be executed.

Scheduled Transitions: Revision Selected

Depending on the state of the selected source revision, an additional checkbox may be displayed, prompting you to recreate pending revisions. This feature is useful if users have created more unpublished revisions after the scheduled revision. It prevents loss of any intermediate unpublished work. A diagram is provided below:

Scheduled Transitions: Recreate Pending Revisions

Click the schedule button. The modal closes and the scheduled transitions list reloads.

Scheduled Transitions: Post creation

When the time is right, the scheduled transition is executed. You can force scheduled transitions to execute by running cron manually. Cron should be set up to run automatically and regularly, preferably every five minutes or so.

The job executes the transitions and deletes itself, removing itself from the transition list. As a result of executing the transition, you'll notice, when navigating to the core revisions list for the entity, that a new revision has been created with a log message outlining the state change.

Scheduled Transitions: Revisions

Multilingual

When dealing with entities with multiple translations, transitions apply to the translation in context and are separate from other translations. For example, the English and German revisions of an entity are scheduled independently.

Global List

Scheduled Transitions comes with Views integration; a view is pre-installed when the module is installed. You can find the view by navigating to Content » Scheduled Transitions. The view shows all pending scheduled transitions on the site.

Scheduled Transitions: Global List

For more information, check out the Scheduled Transitions project page or Scheduled Transitions project documentation.


Posted by Daniel Phin
Drupal Developer

Dated 9 August 2019

Comments

I wish every Drupal contrib module had an announcing blog post and accompanying video. Especially if it's this well executed. Thank you for this very high quality contribution!

ditto re: blog & vid

Looks technically solid. But I think the UI is going to be a bit much for the majority of content editors. 99% of the time the only revision that matters is the highest vid. Perhaps something on the entity edit form next to the submit buttons would be more usable.



Jul 29 2019

Page objects are a pattern that can be used to write clearer and more resilient test suites. This blog post will explore implementing page objects in PHP with the Mink library.

There are various PHP libraries for creating and maintaining page objects. To suit the current state of PHP functional testing in Drupal, I created a library with the following design goals:

  • Working seamlessly with Drupal core test classes, traits and weitzman/drupal-test-traits.
  • Working with all of Drupal's dev dependency version constraints and not introducing additional dependencies.
  • Exclusively utilising the Mink API, to provide a fast on-ramp for moving existing tests to page objects and for developers to write new page objects using their existing knowledge of Mink.
  • Drawing inspiration from nightwatch.js to provide transferability between PHP and JS functional tests.

Taken from the project page, by implementing page objects:

  • You create tests that are easier to read and maintain.
  • You reduce coupling between test cases and markup.
  • You encourage thorough testing by making the whole process easier.

While these examples use sam152/mink-page-objects, the principles apply to any library, or indeed plain old objects. First I'll examine a real project test case using Mink directly, written to test a search feature on a Drupal site:

/**
 * Test how search results appear on the site.
 */
public function testSearchItemDisplay() {
  $sample_result = $this->randomMachineName(32);
  $this->createNode([
    'title' => $sample_result,
    'type' => 'news_item',
    'body' => ['value' => 'Test news item body'],
    'moderation_state' => 'published',
  ]);
  $this->searchApiIndexItems();

  $this->drupalGet('<front>');
  $this->submitForm([
    'query' => $sample_result,
  ], 'Search');

  $this->assertSession()->pageTextContains('1 results for');
  $this->assertSession()->elementContains('css', 'h1', $sample_result);
  $this->assertSession()->elementContains('css', '.sidebar-menu__item--active', 'Show all');
  $this->assertSession()->elementContains('css', '.listing', $sample_result);

  // A news item should not appear when filtering by basic pages.
  $this->clickLink('Basic page');
  $this->assertSession()->pageTextContains('0 results for');
  $this->assertSession()->elementContains('css', '.sidebar-menu__item--active', 'Basic page');

  $this->clickLink('News item');
  $this->assertSession()->elementContains('css', '.sidebar-menu__item--active', 'News item');
  $this->assertSession()->elementContains('css', '.listing', $sample_result);
}

And now the equivalent test refactored to use a page object:

/**
 * Test how search results appear on the site.
 */
public function testSearchItemDisplayPageObject() {
  $sample_result = $this->randomMachineName(32);
  $this->createNode([
    'title' => $sample_result,
    'type' => 'news_item',
    'body' => ['value' => 'Test news item body'],
    'moderation_state' => 'published',
  ]);
  $this->searchApiIndexItems();

  $search_page = SearchPage::create($this);

  $search_page->executeSearch($sample_result)
    ->elementContains('@title', $sample_result)
    ->assertResultCount(1)
    ->assertResultsContain($sample_result)
    ->assertActiveFilter('Show all');

  $this->clickLink('Basic page');
  $search_page->assertActiveFilter('Basic page')
    ->assertResultCount(0);

  $this->clickLink('News item');
  $search_page->assertActiveFilter('News item')
    ->assertResultCount(1)
    ->assertResultsContain($sample_result);
}

In the second test, there are a few advantages:

  • The code is more DRY, since selectors on the page aren't repeated. In fact, if the page object was used for all future search tests, they'd never be repeated in a test again!
  • The test uses a more natural language that is easier to parse by readers of the code and communicates the intentions of the author in a clearer fashion.
  • The search page object is type-hinted, making writing new tests fast and reducing the amount of page related knowledge developers must collect and remember.

The cost paid for these benefits is an additional layer of indirection between your test case and the test browser, so to realise the full benefit of such an approach, I'd expect a page object to be written to service at least two different test cases; however, I haven't experimented with implementing this pattern across a large-scale test suite.

An annotated version of the page object (for the purposes of demonstration) looks like:

/**
 * A page object for the search page.
 */
class SearchPage extends DrupalPageObjectBase {

  /**
   * {@inheritdoc}
   */
  protected function getElements() {
    // Selectors found on the page, these can be referenced from any of the Mink
    // API calls within this page object.
    return [
      'title' => 'h1',
      'results' => '.listing',
      'activeFilter' => '.sidebar-menu__item--active',
    ];
  }

  /**
   * Assert the number of results on the search page.
   *
   * @param int $count
   *   The number of items.
   *
   * @return $this
   */
  public function assertResultCount($count) {
    $this->assertSession()->pageTextContains("$count results for");
    return $this;
  }

  /**
   * Assert a string appears on the page.
   *
   * @param string $string
   *   The string that should appear on the page.
   *
   * @return $this
   */
  public function assertResultsContain($string) {
    $this->elementContains('@results', $string);
    return $this;
  }

  /**
   * Assert a string does not appear on the page.
   *
   * @param string $string
   *   The string that should not appear on the page.
   *
   * @return $this
   */
  public function assertResultsNotContain($string) {
    $this->elementNotContains('@results', $string);
    return $this;
  }

  /**
   * Assert the active filter.
   *
   * @param string $filter
   *   The active filter.
   *
   * @return $this
   */
  public function assertActiveFilter($filter) {
    $this->elementContains('@activeFilter', $filter);
    return $this;
  }

  /**
   * Execute a search query.
   *
   * @param string $query
   *   A search query.
   *
   * @return $this
   */
  public function executeSearch($query) {
    $this->drupalGet('<front>');
    $this->submitForm([
      'query' => $query,
    ], 'Search');
    return $this;
  }

}

While the library itself is decoupled from Drupal, the DrupalPageObjectBase base class integrates a few additional Drupal features such as UiHelperTrait for methods like ::drupalGet and ::submitForm as well as creating a ::create factory to automatically wire dependencies from Drupal tests into the page object itself.

I would be interested in hearing thoughts on whether introducing page objects might benefit Drupal core's own functional test suite, and details on how that might be accomplished given the tools available.


Posted by Sam Becker
Senior Developer

Dated 29 July 2019


May 15 2019

Display Suite is a handy module we've used for a long time. However for new projects utilising Layout Builder we've found we don't need it. Swap out Display Suite for Drupal 8 core blocks with contexts!

Positioning fields

The main use case for Display Suite (DS) is to position fields into layouts. However, Layout Builder now offers a Drupal 8 core alternative to building layouts.

As DS utilises core's Layout Discovery module, switching these layouts over to Layout Builder should be fairly straightforward. Having said that, so far we've only implemented this on new greenfield sites starting from scratch with Layout Builder.

Custom fields

One of DS's most useful features is defining custom fields as @DsField plugins.

Say we have a custom Event entity which needs custom output to format a map of that event.

DsField version

<?php

namespace Drupal\my_event\Plugin\DsField;

use Drupal\ds\Plugin\DsField\DsFieldBase;

/**
 * Plugin to render a map of an event.
 *
 * @DsField(
 *   id = "my_event_map",
 *   ...
 *   entity_type = "my_event"
 * )
 */
class EventMap extends DsFieldBase {

  /**
   * {@inheritdoc}
   */
  public function build() {
    /** @var \Drupal\my_event\Entity\Event $event */
    $event = $this->entity();
    
    // Logic here to build and format your map utilising $event.
  }

}

Block equivalent

This DsField converts directly to a Block plugin utilising context to get the entity.

<?php

namespace Drupal\my_event\Plugin\Block;

use Drupal\Core\Block\BlockBase;

/**
 * Block implementation to render a map of an event.
 *
 * @Block(
 *   id = "my_event_map",
 *   ...
 *   context = {
 *     "my_event" = @ContextDefinition("entity:my_event", required = TRUE),
 *   }
 * )
 */
class EventMap extends BlockBase {

  /**
   * {@inheritdoc}
   */
  public function build() {
    /** @var \Drupal\my_event\Entity\Event $event */
    $event = $this->getContextValue('my_event');
    
    // Logic here to build and format your map utilising $event.
  }

}

This block is then available for placement as per the usual Layout Builder techniques.

Controlling field markup

Another use for DS is to control the markup of and around fields.

As an alternative to DS we often use Element Class Formatter module to easily inject custom classes into fields. In combination with Twig templates utilising embeds and includes this should mostly do away with the need for DS.

Summing up

DS is a great module; full kudos to swentel, aspilicious and everyone else who's worked to make DS such a powerful UI-based tool. However, we don't really see a place for it looking ahead to a world powered by Layout Builder.

Here's looking forward to a Drupal where all layout happens via Layout Builder!

Comments

It may be that I'm still not much more than a rookie, and all of my sites are pretty small, but I have the same problem with layout builder, paragraphs, and Gutenberg. If I build a page with one of these, I can't use the individual elements in a View (slide show, gallery, accordion, etc.).

Please tell me that I have missed something fundamental and that I am basically wrong.

Good question. Layout Builder is using core blocks, so you should be able to extract them with views.  However there's certainly still a very important use case for structured data. E.g. for an entity type with a custom layout built with Layout Builder, also having separate teaser image and text fields attached so these can be extracted into a view is perfectly valid. It's certainly not a case of just chuck everything into Layout Builder.



Mar 13 2019

Views is a powerful and highly popular feature in most Drupal sites. For today’s blog post, we're going to look at how to create a Views Area plugin that will display sort links at the top of the page.

So what is an Area plugin?

As per the documentation on Drupal.org, they are plugins governing areas of views, such as header, footer, and empty text.

Task at hand

For our example, we will be using a search view. Let's say that this view is created to accept full text search of a content type, sort by relevance by default, and also allow sorting by created date in descending order.

If we stick to the default Drupal behaviour and use its exposed forms, we will get a form with buttons as shown below.

However, for this task we want to alter the default behaviour and make the form show links instead. The end result of the sort links should look like image below:

In order to achieve this we build this form using an Area plugin.

Let us call this view search, and add these settings via UI:

  • Add a field.
  • Configure filter criterion: Search: Full text search.
  • Give a path to the view page.

Now to add the sort criteria. For this instance we:

  • set relevance to be the default sort, and created date to be the optional sort.

And now for the fun part!

The header section is where we need to add the area plugin. It will add the logic to render the two sort options and determine how each link behaves when a user clicks it.

To be able to add the plugin to the header as shown below, let's jump into the code.

The Code

Following the rules of Drupal annotation-based plugins, the plugin should reside under the right directory and namespace for an Area plugin. E.g. app/modules/custom/my_search/src/Plugin/views/area

There we can create a PHP class which we will name SortBy.php.

As with any plugin, an area plugin needs three main ingredients:

  • The namespace needs to follow PSR-4 standards and reside in the Drupal\my_search\Plugin\views\area namespace

  • It must use the @ViewsArea annotation

  • It must implement a particular interface or extend a base class; in this instance the base class is AreaPluginBase

Namespace needs to follow PSR-4 standards

namespace Drupal\my_search\Plugin\views\area;

use Drupal\Core\Url;
use Drupal\views\Plugin\views\area\AreaPluginBase;

Annotation

/**
 * Defines an area plugin to display a header sort by option.
 *
 * @ingroup views_area_handlers
 *
 * @ViewsArea("my_search_sort_by")
 */

Extend the class from AreaPluginBase

class SortBy extends AreaPluginBase {}

To make things a bit easier I will bring a few constants into play.

// Query parameter of search form.
const KEYWORD_PARAM_NAME = 'search';

// Query parameter created by view for created date field.
const CREATED_DATE_PARAM = 'created_date';

// Query parameter created by view for relevance field.
const RELEVANCE_PARAM = 'search_api_relevance';

// Search view's route name.
const SEARCH_PAGE_ROUTE = 'view.search.page_1';

For this particular example we only need to override the render() method.

/**
 * Render the area.
 *
 * @param bool $empty
 *   (optional) Indicator if view result is empty or not. Defaults to FALSE.
 *
 * @return array
 *   In any case we need a valid Drupal render array to return.
 */
public function render($empty = FALSE) {
  // Sort criteria array.
  // This will be our render array that will be used to generate the
  // desired html.
  $sort_links = [];

  // Drupal request query.
  $request_query = \Drupal::request()->query;

  // Default query options for date sort criteria.
  $date_options = [
    'query' => [
      'sort_by' => self::CREATED_DATE_PARAM,
      'sort_order' => 'DESC',
      'search' => $request_query->get(self::KEYWORD_PARAM_NAME),
    ],
  ];

So, now that we already know the route name of the search view, view.search.page_1, all we need to do is pass the $route_parameters and $options values into Url::fromRoute().

  // Default query options for relevance sort criteria.
  $relevance_options = [
    'query' => [
      'sort_by' => self::RELEVANCE_PARAM,
      'sort_order' => 'DESC',
      'search' => $request_query->get(self::KEYWORD_PARAM_NAME),
    ],
  ];

These query parameters will be later displayed as below when a search is made from the form and sorted by created date.

http://mysite/search?sort_by=created_date&sort_order=DESC&search=Thomas

  // Determine which criteria is currently active.
  // Default is set to relevance.
  $active_link = self::RELEVANCE_PARAM;

  // On search page load we need to check if a GET query with the key
  // sort_by was passed in.
  if ($request_query->has('sort_by') && $request_query->get('sort_by') === self::CREATED_DATE_PARAM) {
    $active_link = self::CREATED_DATE_PARAM;
  }

  $sort_links = [
    [
      'title' => 'Relevance',
      'link' => Url::fromRoute(self::SEARCH_PAGE_ROUTE, [], $relevance_options),
      'active' => $active_link === self::RELEVANCE_PARAM,
    ],
    [
      'title' => 'Date',
      'link' => Url::fromRoute(self::SEARCH_PAGE_ROUTE, [], $date_options),
      'active' => $active_link === self::CREATED_DATE_PARAM,
    ],
  ];

  // Finally we return our render array.
  return [
    '#theme' => 'my_search_sort_by_links',
    '#sort_links' => $sort_links,

    // Tell Drupal this varies by url.
    '#cache' => [
      'contexts' => ['url'],
    ],
  ];

  // End of render function.
}

An important thing to note is the use of Url::fromRoute() to generate the link, rather than a hardcoded /search?some_stuff URL.

This is for one good reason: if someone goes into the view and decides to change the page URL from /search to /content/search, the Url::fromRoute() code will keep working but a hard-coded href="/search..." will not.

At the moment we're hardcoding the view/route name because this is a one-off plugin. If you were building an area plugin that worked for all views, you could use $this->displayHandler and $this->view to dynamically derive the route name and support any view.
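
The '#theme' key in the render array above also needs a matching theme hook. Below is a minimal sketch of what that hook_theme() implementation could look like in my_search.module; the hook name, variable and template are assumptions chosen to match the render array, not code from the original plugin.

<?php

/**
 * Implements hook_theme().
 */
function my_search_theme($existing, $type, $theme, $path) {
  return [
    // Rendered by templates/my-search-sort-by-links.html.twig, which would
    // loop over the sort_links variable and print each link.
    'my_search_sort_by_links' => [
      'variables' => [
        'sort_links' => [],
      ],
    ],
  ];
}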

Now our plugin is ready, but it is not yet discoverable by Drupal. So we need to create an app/modules/custom/my_search/my_search.views.inc file and add the code below, implementing hook_views_data_alter():

function my_search_views_data_alter(array &$data) {
  $data['views']['my_search'] = [
    'title' => t('Sort by'),
    'help' => t('Provides sort by option.'),
    'area' => [
      'id' => 'my_search_sort_by',
    ],
  ];
}

We need to add this because views area plugins are a bit odd. There is a core issue to try and make them behave like normal plugins.

After adding this, we are all set.

Now we can add Sort by plugin under view header.

Final outcome

On the search page you should see the sort links rendered according to the applied theme.

When you click on Relevance, the URL should change to:

http://mysite/search?sort_by=relevance&sort_order=DESC

And on clicking the Date sort:

http://mysite/search?sort_by=created_date&sort_order=DESC



Feb 11 2019

The time has come to close the book on our government Drupal distribution, aGov. We are no longer actively developing sites using aGov, and instead are focussing our efforts on the new GovCMS Drupal 8 distribution. Here’s a short history of aGov and how we got to this point.

Why we developed aGov

PreviousNext has a long history of developing sites for government at all levels. Back in 2012 there was a big shift in government moving towards accessibility, security and mobile support. There were many government sites built using legacy software that were too expensive and complex to update to conform to these requirements. At the same time government was embracing open source as a legitimate replacement.

We were seeing an increasing number of government agencies coming to us to help them meet these requirements, and we were implementing the same changes on each and every project.

aGov was developed as a response to this. The intention was to create a serious alternative to legacy applications, that could be spun up with minimal effort.

Drupal 7 was not very accessible out of the box, and aGov combined numerous contrib modules and a base theme that helped it meet WCAG 2 AA compliance.

aGov takes off

Thanks in part to the aGov distribution, we saw a huge adoption of Drupal in government. Agencies were able to create sites relatively easily, and host wherever they chose, be that in the cloud or on-premise. At its peak, aGov was in use on 600 sites and has been downloaded almost 200,000 times, consistently staying in the Top 20 Drupal Distributions on Drupal.org.

Distributions versus Starter Kits

Back in 2012, Drupal distributions were seen as the answer to the code reuse problem. In Drupal 7, configuration is all stored in the database, making it difficult to manage. The solution was the Features module, which had to accommodate myriad different configuration formats and try to wrangle them all into code that could be deployed. As aGov was typically modified and extended in 99% of cases, this configuration was difficult to maintain and supporting upgrade paths for each and every site was near impossible.

We improved the situation with a 7.x-2.x release and later with a 7.x-3.x release, however the inherent limitations of Drupal 7 were still there.

GovCMS

Around the 2nd half of 2014 we worked with the Australian Department of Finance to fork aGov to the GovCMS distribution. GovCMS is a hosting, procurement process and Drupal distribution to allow government agencies to build and deploy Drupal websites with minimum friction. The potential of Drupal and aGov was now being realised in a comprehensive Drupal SaaS platform.

The Department of Finance took ownership of the GovCMS codebase, however we still saw a need for more complex sites that would benefit from aGov, so decided to keep it alive. In aGov 7.x-3.x we incorporated UI Kit, the government design system, and in 2015 started active development on a Drupal 8 version.

Drupal 8

PreviousNext invested large amounts of time into the Drupal 8 release, which greatly increased our overall expertise and confidence in using it.

Drupal 8 solved a lot of the problems aGov was initially created for. First class accessibility and mobile support, as well as a comprehensive configuration management system that made moving configuration changes around a breeze.

As such, the Drupal 8 version of aGov was much smaller and simpler than its Drupal 7 equivalent. This also meant we would often use vanilla Drupal 8 instead of aGov when working on government sites, as there was less need for an 'opinionated' (read: inflexible) solution for different sites.

Maintenance Fixes Only

For the last two years, we have been supporting aGov, by updating core and contributed modules. However, as we no longer use it ourselves, there has been little impetus to extend or enhance the distribution. It became apparent that we could no longer continue to support it alone.

Six years after its original creation, we will be marking the project's Maintenance Status as Seeking New Maintainer and its Development Status as No further development. We will no longer be active in the issue queue or creating new releases.

If you are a user of aGov, you will still be able to upgrade Drupal core and contributed modules on a site-by-site basis as before. If you encounter upgrade issues, you should create tickets in the project where the issue occurs.

If anyone is interested in taking over maintainership, please contact us through the project’s issue queue.

Where to from here?

Late last year the GovCMS team successfully launched its Kubernetes-based hosting platform and improved developer workflows. With 247 live websites and 48 sites currently in development, the Drupal-based platform is stronger than ever, and continues to grow.

PreviousNext has been developing GovCMS sites since day one, and we’re continuing to grow this part of our business. Dropping support for aGov is going to free up some of our time to put to better use on the future of government sites in Australia.


Posted by Kim Pepper
Technical Director

Dated 11 February 2019

Comments

A pat on the back for the pioneering work you guys did and for pushing the project forward throughout the years. The legacy will live on.

Huge thanks to you Kim, and the whole PreviousNext team for creating and maintaining aGov. It was a "critical success factor" for the adoption of Drupal in Government around Australia, and led to increased maturity in the community.

But thanks also for this very gracious "sunset" post. Yes, it's clearly time to let this one go.

Kudos.



Nov 22 2018
Nov 22

Update: Re-published for DrupalSouth 2018 edition

The PreviousNext team are sponsoring and helping to run the sprint day on Wednesday, December 5th 2018, and there are a few things you can do now to hit the ground running on the day.

What's a Sprint Day about anyway?

Contribution Sprints are a great opportunity to get involved in contributing to Drupal. Contributions don't have to be just about code. Issue triage, documentation, and manual testing are examples of non-code contributions.

If you are new to contributing, you can take a look at the New Contributor tasks on the Drupal.org Contributor Tasks page.

While there will be experienced contributors there on the day to help, keep in mind, this is not a training session. :-)

Set Up a Development Environment

There is more than one way to shear a sheep, and there is also more than one way to set up a local development environment for working on Drupal.

We've created a Drupal project starter kit for sprint attendees, which should speed up this process. Head over to https://github.com/previousnext/drupal-project and follow the README.

If you have any issues, feel free to post them in the Github issue queue https://github.com/previousnext/drupal-project/issues and we'll try and resolve them before the day.

Find Issues to Work On

If you want to see what might be an interesting issue to work on, head over to the Drupal.org Issue Queue and look for issues tagged with 'DrupalSouth 2018'. These are issues that others have tagged.

You can also tag an issue yourself to be added to the list.

Being face-to-face with fellow contributors is a great opportunity to have discussions and put forward ideas. Don't feel like you need to come away from the day having completed lines and lines of code.

We look forward to seeing you all there!

Photo of Kim Pepper

Posted by Kim Pepper
Technical Director

Dated 22 November 2018


Nov 02 2018
Nov 02

As part of my session at Drupal Europe, REST Ready: Auditing established Drupal 8 websites for use as a content hub, I presented a module called “Entity Access Audit”.

This has proved to be a useful tool for auditing our projects for unusual access scenarios as part of our standard go-live security checks or when opening sites up to additional mechanisms of content delivery, such as REST endpoints. Today this code has been released on Drupal.org: Entity Access Audit.

There are two primary interfaces for viewing access results, the overview screen and a detailed overview for each entity type. Here is a limited example of the whole-site overview showing a few of the entity types you might find in core or custom modules:

Entity access audit

Here is a more detailed report for a single entity type:

Entity access audit

The driving motivation behind these interfaces was being able to visually scan entity types and ensure that the access results align with our expectations. This has so far helped identify various bugs in custom and contributed code.

In order to conduct a thorough access test, the module uses a predefined set of dimensions and then takes the Cartesian product of these dimensions to test every combination. The dimensions tested out of the box, where applicable to the given entity type, are:

  • All bundles of an entity type.
  • If the current user is the entity owner or not.
  • The access operation: create, view, update, delete.
  • All the available roles.

It’s worth noting that these are only common factors used to determine access results; they are not comprehensive. If access is determined by other factors, there will be no visibility of this in the generated reports.
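To illustrate the idea, here is a minimal sketch (not the module's actual implementation) of looping over the Cartesian product of those dimensions and recording an access result for each combination. The owner dimension is omitted for brevity:

<?php

use Drupal\user\Entity\Role;
use Drupal\user\Entity\User;

// Simplified sketch only: combine the bundle, role and operation dimensions
// and record the access result for each combination.
$entity_type_id = 'node';
$entity_type_manager = \Drupal::entityTypeManager();
$access_handler = $entity_type_manager->getAccessControlHandler($entity_type_id);
$bundles = array_keys(\Drupal::service('entity_type.bundle.info')->getBundleInfo($entity_type_id));
$operations = ['create', 'view', 'update', 'delete'];

$results = [];
foreach ($bundles as $bundle) {
  // Locked roles (anonymous/authenticated) would need special handling here.
  foreach (array_keys(Role::loadMultiple()) as $role_id) {
    // A throwaway, unsaved account that only has this role.
    $account = User::create(['name' => 'audit_' . $role_id, 'roles' => [$role_id]]);
    foreach ($operations as $operation) {
      if ($operation === 'create') {
        $results[$bundle][$role_id][$operation] = $access_handler->createAccess($bundle, $account);
      }
      else {
        // A representative (unsaved) entity of this bundle.
        $entity = $entity_type_manager->getStorage($entity_type_id)->create(['type' => $bundle]);
        $results[$bundle][$role_id][$operation] = $access_handler->access($entity, $operation, $account);
      }
    }
  }
}

The real module also covers the owner dimension, and feeds results like these into the report screens shown above.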

The module is certainly not a silver bullet for validating the security of Drupal 8 websites, but has proved to be a useful additional tool when conducting audits.

Photo of Sam Becker

Posted by Sam Becker
Senior Developer

Dated 2 November 2018


Oct 08 2018
Oct 08

In this blog post, we'll have a look at how contributed Drupal modules can remove the core deprecation warnings and be compatible with both Drupal 8 and Drupal 9.

Ever since Drupal Europe, we have known that Drupal 9 will be released in 2020. As per @catch’s comment in 2608496-54:

We already have the continuous upgrade path policy which should mean that any up-to-date Drupal 8 module should work with Drupal 9.0.0, either with zero or minimal changes.

Drupal core has a proper deprecation process so it can be continuously improved. Drupal core also has a continuous process of removing deprecated code usages: core should not trigger deprecated code except in tests and during updates, because of proper deprecation testing.

The big problem for contributed modules (aka contrib) is the removal of deprecated code usage. To allow contrib to keep up with core's removal of deprecated code, contrib needs proper deprecation testing, which is being discussed in support deprecation testing for contributed modules on Drupal.org.

However, the DrupalCI build process can be controlled by a drupalci.yml file found in the project. The documentation about it can be found at customizing DrupalCI Testing for Projects.

It is very easy for contributed modules to surface their usages of deprecated code: all you need to do is add the following drupalci.yml file to your contributed module and fix the failures.

# This is the DrupalCI testbot build file for Dynamic Entity Reference.
# Learn to make one for your own drupal.org project:
# https://www.drupal.org/drupalorg/docs/drupal-ci/customizing-drupalci-testing
build:
  assessment:
    validate_codebase:
      phplint:
      phpcs:
        # phpcs will use core's specified version of Coder.
        sniff-all-files: true
        halt-on-fail: true
    testing:
      # run_tests task is executed several times in order of performance speeds.
      # halt-on-fail can be set on the run_tests tasks in order to fail fast.
      # suppress-deprecations is false in order to be alerted to usages of
      # deprecated code.
      run_tests.phpunit:
        types: 'PHPUnit-Unit'
        testgroups: '--all'
        suppress-deprecations: false
        halt-on-fail: false
      run_tests.kernel:
        types: 'PHPUnit-Kernel'
        testgroups: '--all'
        suppress-deprecations: false
        halt-on-fail: false
      run_tests.functional:
        types: 'PHPUnit-Functional'
        testgroups: '--all'
        suppress-deprecations: false
        halt-on-fail: false
      run_tests.javascript:
        concurrency: 15
        types: 'PHPUnit-FunctionalJavascript'
        testgroups: '--all'
        suppress-deprecations: false
        halt-on-fail: false

This drupalci.yml will check all the Drupal core coding standards. This can be disabled by the following change:

      phpcs:
        # phpcs will use core's specified version of Coder.
        sniff-all-files: false
        halt-on-fail: false

This file also only runs PHPUnit tests; to run legacy Simpletest tests you have to add the following block:

      run_tests.simpletest:
         types: 'Simpletest'
         testgroups: '--all'
         suppress-deprecations: false
         halt-on-fail: false

But if you still have those, you probably want to start there, because they won't be supported in Drupal 9.

Last but not least, if you think the module is not yet ready to fix all the deprecation warnings, you can set suppress-deprecations: true.
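Once the failures show up, most deprecation fixes are mechanical. As a generic illustration (not specific to any particular module), replacing drupal_set_message(), deprecated since Drupal 8.5.0, with the messenger service looks like this:

<?php

// Before: triggers a deprecation warning and is removed in Drupal 9.
drupal_set_message(t('Settings saved.'));

// After: works in Drupal 8.5+ and Drupal 9.
\Drupal::messenger()->addStatus(t('Settings saved.'));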

As a contrib module maintainer or a contrib module consumer, I encourage you to add this file to all the contrib modules you maintain or use, or at least create an issue in the module's issue queue, so that by the time of the Drupal 9 release all of your favourite modules will be ready. The JSON API module added this file in https://www.drupal.org/node/2982964, which inspired me to add this to DER in https://www.drupal.org/node/3001640.

Photo of Jibran Ijaz

Posted by Jibran Ijaz
Senior Drupal Developer

Dated 8 October 2018

Comments

What Alex said. I did this for the JSON API module because we want it to land in Drupal core. But we are running into the problems explained in the issue linked by Alex.

Despite Drupal 9 being announced, the Drupal Continuous Integration system is not yet ready for modules trying to keep current with all deprecations for Drupal 9 while remaining compatible with both minors that have security team coverage (current + previous, ATM 8.6 + 8.5) and the next minor (ATM 8.7). Hopefully we soon will be :)

Thanks for writing about this though, I do think it's important that more module maintainers get in this mindset!

DrupalCI always runs contrib tests against the latest core branch. As a contrib module maintainer, if I have to make a compatibility change for a core minor version then I create a new release and note that in the release notes after the stable release of that core minor version, e.g. 8.x-2.0-alpha8. I have never had to create a new release for a core patch release, at least until now, and I don't know how I would name the new release if I ever have to, but then again that's a contrib semver issue.

DrupalCI runs against whichever core branch the maintainer has configured it to run against.

If a contributed module wants to remove usages of deprecations, it should probably never do that against the "Development" branch, as there isn't a way for a contrib module to both remove those deprecations *and* still be compatible with supported or security branches. The earliest that a contrib module should try to remove new deprecations is at the pre-release phase, as at that point we're unlikely to introduce new deprecations.


Sep 25 2018
Sep 25

Drupal 8.6 has shipped with the Media Library! It’s just one part of the latest round of improvements from the Media Initiative, but what a great improvement! Being brand new it’s still in the “experimental” module state but we’ve set it up on this website to test it out and are feeling pretty comfortable with its stability.

That said, I highly encourage you to test it thoroughly on your own site before enabling any experimental module on a production site. Don’t just take my word for it :)

What it adds

The Media Library has two main parts to it...

Grid Listing

There’s the Grid Listing at /admin/content/media, which takes precedence over the usual table of media items (which is still available under the “Table” tab). The grid renders a new Media Library view mode showing the thumbnail and compact title, as well as the bulk edit checkbox.

The new media library grid listing page

Field Widget

Then there’s the field widget! The field widget can be set on the “Manage Form Display” page of any entity with a Media Reference Field. Once enabled, an editor can either browse existing media (by accessing the Grid Listing in a modal) or create a new media item (utilising the new Media Library form mode - which is easy to customise).

Media reference field with the new Media Library form widget

Media Library widget once media has been added, which shows a thumbnail of the media

The widget is very similar to what the ‘Inline Entity Form’ module gave you, especially when paired with Entity Browser's IEF submodule. But the final result is a much nicer display and in general feels like a nicer UX. Plus it’s in core, so you don’t need to add extra modules!

The widget also supports bulk upload, which is fantastic. It respects the Media Reference Field's cardinality: limit it to one and only one file can be uploaded or selected from the browser; allow more than one and you can upload or select up to that exact number. The field even tells you how many you can add and how many you have left. And yes, the field supports drag and drop :)

What it doesn’t add

WYSIWYG embedding

WYSIWYG embed support is now being worked on for a future release of Drupal 8 core; you can follow this meta issue to keep track of the progress. It sounds like some version of Entity Embed (possibly limited to Media) will make its way in, and some form of CKEditor plugin or button will be available to achieve something similar to what the Media Entity Browser, Entity Browser, Entity Embed and Embed module set provides currently.

Until then though, we’ve been working on integrating the Media Library's Grid Listing into a submodule of Media Entity Browser, to provide editors with the UX improvements that came with the Media Library while keeping the same WYSIWYG embed process (and the contrib modules behind it) they’re currently used to (assuming they’re already using Media Entity Browser, of course). More on this submodule below.

This is essentially a temporary solution until the Media Initiative team and those who help out on their issue queue (all the way from UX through to dev) have the time and mental space to get it into core. It should hopefully have all the same bulk upload features the field widget has; it might even be able to support bulk embedding too!

View mode or image style selectors for editors

Site builders can set the view mode of the rendered media entity from the manage display page, which in turn allows you to set an image style for that view mode, but editors can’t change this per image (without needing multiple different Media reference fields).

There is work on supporting this idea for images uploaded via CKEditor directly, which has nothing to do with Media, but I think it would be a nice feature for Media embedding via WYSIWYG as well. Potentially also for Media Reference Fields. But by no means a deal breaker.

Advanced cropping

From what I can gather there are no plans to add any more advanced cropping capabilities into core. This is probably a good thing, since cropping requirements can differ greatly and we don’t want core to get too big. So contrib will still be your go-to for this. Image Widget Crop is my favourite for this, but there’s also the simpler Focal Point.

You can test out the submodule from the patch on this issue and let us know what you think! Once the patch is added, enable the submodule then edit your existing Entity Browsers and swap the View widget over to the “Media Entity Browser (Media Library)” view.

Form for changing the Entity Browser view widget

It shouldn’t matter if you’ve customised your entity browser. If you’ve added something like Dropzone for drag-and-drop support it *should* still work (if not, check the Dropzone or Entity Browser issue queues). If you’ve customised the view it uses however, you might need to redo those customisations on the new view.

I also like updating the Form Mode of the Entity Browser IEF widget to use the new Media Library form display, which I always pare back to just the essential fields (who really needs to manually set the author and created time of uploaded media?).

You still can’t embed more than one media item at a time. But at least now you also can’t select more than one item when browsing so that’s definitely an improvement.

Modal of the Media Entity Browser showing the same Grid listing

Plus editors will experience a fairly consistent UX between browsing and uploading media on fields as they do via the WYSIWYG.

Once set up and tested (ensuring you’ve updated any Media Reference Fields to use the new Media Library widget too), you can safely disable the base Media Entity Browser module and delete any unused configuration - it should just be the old “Media Entity Browser” view.

Please post any feedback on the issue itself so we can make sure it’s at its best before rolling another release of the module.

Happy days!

I hope you have as much fun setting up the Media Library as I did. If you want to contribute to the Media Initiative I’m sure they’ll be more than happy for the help! They’ve done a fantastic job so far but there’s still plenty left to do.

Photo of Rikki Bochow

Posted by Rikki Bochow
Front end Developer

Dated 25 September 2018

Comments

Nice and useful article on using the core Media Library in Drupal 8 projects.
Thank you!


Aug 17 2018
Aug 17

Allow site builders to easily add classes onto field elements with the new element_class_formatter module.

Adding classes onto a field element (for example a link or image tag, as opposed to the wrapper div) isn't always the easiest thing to do in Drupal. It requires preprocessing the element's render array, using special Url::setOptions() calls, or drilling down through a combination of objects and arrays in your Twig template.
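For example, the preprocess approach for a link field might look something like this (the theme and field names here are hypothetical):

<?php

/**
 * Implements hook_preprocess_field().
 */
function mytheme_preprocess_field(&$variables) {
  // Add a class to the <a> element of every item in a particular link field.
  if ($variables['field_name'] === 'field_related_link') {
    foreach ($variables['items'] as &$item) {
      $item['content']['#options']['attributes']['class'][] = 'my-custom-class';
    }
  }
}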

The element_class_formatter module aims to make that process easier. At PreviousNext we love field formatters! We write custom ones where needed, and have been re-using a few generic ones for quite a while now. This module extends our generic ones into a complete set, to allow for full flexibility, sitebuilding efficiency and re-usability of code. 

To use this module, add and enable it just like any other, then visit one of your Manage Display screens. The most widely available formatter is the Wrapper (with class) one, but the others follow a similar naming convention; "Formatter name (with class)". The majority of these formatters extend a core formatter, so all the normal formatter options should still be available.

The manage display page with the formatter selected for three different field types

The manage display page with new (with class) field formatters selected

Setting classes on the configuration pane of a link field

The field formatter settings, with all the default options

Use this module alongside Reusable style guide components with Twig embed, Display Suite with Layouts and some Bare templates to get optimum Drupal markup. Or just use it to learn how to write your own custom field formatters!

For feature requests or issues please see the module's issue queue on Drupal.org.

A quick "class formatter" module comparison

There are a couple of "class formatter" modules around so let's do a quick comparison;

Let's say we have a node content type which has a link field, and we're looking at the teaser view mode. The standard markup might look something like this;

<div class="node node--teaser"> 

  <div class="field"> 
    <div class="field-item">

      <a href="http://example.com">Example link</a>

    </div>   
  </div>
 
</div>

Then we'll use our field formatters to add my-custom-class into the markup, and see where it ends up.

Element Class Formatter

As mentioned above, this module adds a class to a field's element. So the actual link of a link field, for example. The field template markup is untouched.

The class is set at the view mode (configuration) level, so content editors don't get to choose what class goes on the link. So for our node teaser with link field example, all nodes get the same class every time the teaser is displayed. The new markup would be:

<div class="node node--teaser">

  <div class="field">
    <div class="field-item">

     <a href="http://example.com" class="my-custom-class">Example link</a>
  
    </div>
  </div>

</div>

Field Formatter Class

The Field Formatter Class module is similar to the Element Class Formatter module except that it adds the class to the field template markup, not the link. Otherwise it works the same way, on a view mode level.

<div class="node node--teaser">

  <div class="field my-custom-class">
    <div class="field-item">

     <a href="http://example.com">Example link</a>
  
    </div>
  </div>

</div>

I haven't actually used this module before, as it's more natural for me to work in Twig templates, utilising embeds and includes. But if you like doing things via the UI then check it out.

Entity Class Formatter

Entity Class Formatter is a very nice module which lets you (site builder or content editor) add a class to the field's parent entity. It further differs from the above modules in that it's a combination of configuration and content. You (the site builder) define a set of classes that the content editor can choose from. Each node can have a different class from the pre-defined list.

So say there are two node teasers in our markup;

<div class="node node--teaser my-custom-class">

  <div class="field">
    <div class="field-item">

     <a href="http://example.com">Example link</a>
  
    </div>
  </div>

</div>
<div class="node node--teaser my-other-custom-class"> 

  <div class="field"> 
    <div class="field-item"> 

      <a href="http://example.com">Example link</a>   

    </div>   
  </div> 
</div>

This is really handy for adding things like grid or variant classes. As you can see, it does nothing to the link field's markup; it's actually its own separate field, and just utilises a field formatter so you can define how the field should be used.

Summary

You can actually use all these modules together, as they target different parts of the markup. They're complementary, not competitors. We definitely use the entity_class_formatter together with element_class_formatter and let Twig handle the middle part.

Photo of Rikki Bochow

Posted by Rikki Bochow
Front end Developer

Dated 17 August 2018


Aug 14 2018
Aug 14

There is not a lot of documentation available about the difference between running a browser in WebDriver mode vs headless mode, so I did some digging...

Apparently, there are two ways to run Chrome for testing:

  • As WebDriver
  • As Headless

WebDriver:

There are two ways to run Chrome as WebDriver:

Using Selenium:

Run Selenium standalone server in WebDriver mode and pass the path of ChromeDriver bin along with the config e.g. Selenium Dockerfile

This works fine with the standard Nightwatch setup, \Drupal\FunctionalJavascriptTests\JavascriptTestBase (JTB), and also with Drupal core's new \Drupal\FunctionalJavascriptTests\WebDriverTestBase (WTB).

Using ChromeDriver:

Run ChromeDriver in WebDriver mode e.g. chromedriver Dockerfile

This works fine with Nightwatch, JTB, and WTB.

Headless:

Using Chrome

Run Chrome browser binary in headless mode. e.g. Chrome headless Dockerfile

Nightwatch does not work with this setup, or at least I was unable to configure it. See https://github.com/nightwatchjs/nightwatch/issues/1390 and https://github.com/nightwatchjs/nightwatch/issues/1439 for more info. \DMore\ChromeDriver can be used to run the javascript tests.

Using ChromeDriver

Using the selenium-webdriver package, ChromeDriver can be run in headless mode with something like this:

const fs = require('fs');
const webdriver = require('selenium-webdriver');
const chromedriver = require('chromedriver');

const chromeCapabilities = webdriver.Capabilities.chrome();
chromeCapabilities.set('chromeOptions', {args: ['--headless']});

const driver = new webdriver.Builder()
  .forBrowser('chrome')
  .withCapabilities(chromeCapabilities)
  .build();

DrupalCI is running ChromeDriver without Selenium and testing Nightwatch and WTB on it.

Conclusion

The question is: which is the best solution for running Nightwatch and JTB/WTB tests using the same setup?

  • We had seen some memory issues with Selenium containers in the past, but we haven't run into any issues recently, so I prefer this option; you can also swap the Selenium container to use different browsers for testing.
  • We have also seen some issues while running ChromeDriver in WebDriver mode: it just stops working mid-test run.
  • I was unable to get headless Chrome working with Nightwatch, but it needs more investigation.
  • The headless ChromeDriver setup on DrupalCI is quite stable. For JTB this would mean we could use either \Drupal\FunctionalJavascriptTests\DrupalSelenium2Driver or \DMore\ChromeDriver.

Please share your ideas and thoughts, thanks!

For more info:

Photo of Jibran Ijaz

Posted by Jibran Ijaz
Senior Drupal Developer

Dated 14 August 2018

Comments

We are also having a discussion about this in 'Drupal testing trait' merge request, see merge_requests/37.

Might be worth adding to this list that there's an alternative setup I successfully tested in April using the browserless/chrome image with Cucumber and Puppeteer for behavioral tests.

YMMV, but to give a rough idea, here's the relevant docker-compose.yml extract:

b-tester:
  image: node:9-alpine
  command: 'sh -c "npm i"'
  volumes:
    - ./tests:/tests:cached
  working_dir: /tests
  network_mode: "host"

# See https://github.com/GoogleChrome/puppeteer/issues/1345#issuecomment-3451…
chrome:
  image: browserless/chrome
  shm_size: 1gb
  network_mode: "host"
  ports:
    - "3000:3000"

... and in tests/package.json :

"devDependencies": {
"chai": "^4.1.2",
"cucumber": "^4.0.0",
"puppeteer": "^1.1.0"
},
"scripts": {
"test": "cucumber-js"
}

... and to connect, open a page and take screenshots, e.g. in tests/features/support/world.js :

const { setWorldConstructor } = require("cucumber");
const { expect } = require("chai");
const puppeteer = require("puppeteer");
const PAGE = "http://todomvc.com/examples/react/#/";

class TodoWorld {

  ... (snip)

  async openTodoPage() {
    // See https://github.com/joelgriffith/browserless
    // (via https://github.com/GoogleChrome/puppeteer/issues/1345#issuecomment-3451…)
    this.browser = await puppeteer.connect({ browserWSEndpoint: 'ws://localhost:3000' });
    this.page = await this.browser.newPage();
    await this.page.goto(PAGE);
  }

  ... (snip)

  async takeScreenshot() {
    await this.page.screenshot({ path: 'screenshot.png' });
  }

  ... (snip)
}

setWorldConstructor(TodoWorld);

→ To run tests, e.g. :
$ docker-compose run b-tester sh -c "npm test"

(result in tests/screenshot.png)

I ran out of time to make a prototype repo, but my plan was to integrate https://github.com/garris/BackstopJS


Aug 08 2018
Aug 08

Malicious users can intercept or monitor plaintext data transmitting across unencrypted networks, jeopardising the confidentiality of sensitive data in Drupal applications. This tutorial will show you how to mitigate this type of attack by encrypting your database queries in transit.

With attackers and data breaches becoming more sophisticated every day, it is imperative that we take as many steps as practical to protect sensitive data in our Drupal apps. PreviousNext use Amazon RDS for our MariaDB and MySQL database instances. RDS supports SSL encryption for data in transit, and it is extremely simple to configure your Drupal app to connect in this manner.

1. RDS PEM Bundle

The first step is ensuring your Drupal application has access to the RDS public certificate chain to initiate the handshake. How you achieve this will depend on your particular deployment methodology - we have opted to bake these certificates into our standard container images. Below are the lines we've added to our PHP Dockerfile.

# Add Amazon RDS TLS public certificate.
ADD https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem  /etc/ssl/certs/rds-combined-ca-bundle.pem
RUN chmod 755 /etc/ssl/certs/rds-combined-ca-bundle.pem

If you use a configuration management tool like Ansible or Puppet, the same principle applies - download that .pem file to a known location on the app server.

If you have limited control of your hosting environment, you can also commit this file to your codebase and have it deployed alongside your application.

2. Drupal Database Configuration

Next you need to configure Drupal to use this certificate chain if it is available. The PDO extension makes light work of this. This snippet is compatible with Drupal 7 and 8.

$rds_cert_path = "/etc/ssl/certs/rds-combined-ca-bundle.pem";
if (is_readable($rds_cert_path)) {
  $databases['default']['default']['pdo'][PDO::MYSQL_ATTR_SSL_CA] = $rds_cert_path;
}

3. Confirmation

The hard work is done, you'll now want to confirm that the connections are actually encrypted.

Use drush to smoke check the PDO options are being picked up correctly. Running drush sql-connect should give you a new flag: --ssl-ca.

$ drush sql-connect

mysql ... --ssl-ca=/etc/ssl/certs/rds-combined-ca-bundle.pem

If that looks OK, you can take it a step further and sniff the TCP connection between Drupal and the RDS server.

This requires root access to your server, and the tcpflow package installed - this tool will stream the data being transmitted over port 3306. You want to see illegible garbled data - definitely not content that looks like SQL queries or responses!

Run this command, and click around your site while logged in (to ensure minimal cache hits).

$ tcpflow -i any -C -g port 3306

This is the type of output which indicates the connection is encrypted.

tcpflow: listening on any

x1c
"|{mOXU{7-rd 0E
W$Q{C3uQ1g3&#a]9o1K*z:yPTqxqSvcCH#Zq2Hf8Fy>5iWlyz$A>jtfV9pdazdP7
tpQ=
i\R[dRa+Rk4)P5mR_h9S;lO&/=lnC<U)U87'^[email protected]{4d'Qj2{10
YKIXMyb#i(',,j4-\1I%%N>F4P&!Y5_*f^1bvy)Nmga4jQ3"W0[I=[3=3\NLB0|8TGo0>I%^Q^~jL
L*HhsM5%7dXh6w`;B;;|kHTt[_'CDm:PJbs$`/fTv'M .p2<KTE
lt3'[*z]n6)O*Eiq9w20Rq453*mm=<gwJ_'tn]#p`SQ]5hGDLnn?YQ
DDujr!e7D#d^[email protected]+v3Hy(('7O\2.6{0+
V{+m'[cq|6t!Zhv,_/:EJbBF9D8Qz+2t=E(6}jR}qDezq'~)ikO$Y:F:G,UjC[{qF;/srT?7mm=#DDUNa"%|"[email protected]<szV*B^g/Ij;-f~r~X~t-]}Yvr9zpO0Yf2mOoZ-{muU1w6R.'u=zCfT,S|Cp4.<vRN_gqc[vER?NLN_XGgve-O}3.q'b*][email protected](|Sm15c&=k6Ty$Ak_ZaA.`vE=]V($Bm;_Z)sp..~&!9}uH+K>JP' Ok&erw
W")wLLi1%l5#lDV85nj>R~7Nj%*\I!zFt?w$u >;5~#)/tJbzwS~3$0u'/hK /99.X?F{2DNrpdHw{Yf!fLv
`
[email protected]?AsmczC2*`-/R rA-0(}DXDKC9KVnRro}m#IP*2]ftyPU3A#.?~+MDE}|l~uPi5E&hzfgp02!lXnPJLfMyFOIrcq36s90Nz3RX~n?'}ZX
'Kl[k<TK 
xqj^'Wobyg.oz#kh35'@NlJ{r'qlQ;YE>{#fBa4B\D-H`;c/~O,{DWrltYDbu
cB&H\hVaZIDYTP|JpTw0 |(ElJo{[email protected]#5#[email protected]#{f)ux(EES'Ur]N!P[cp`8+Z-$vh%Hnk=K^%-[KQF'2NzTfjSgxG'/p HYMxgfOGx1"'SEQ1yY&)DC*|z{')=u`TS0u0{xp-(zi6zp3uZ'~E*ncrGPD,oW\m`2^ Hn0`h{G=zohi6H[d>^BJ~ W"c+JxhIu
[{d&s*LFh/?&r8>$x{CG4(72pwr*MRVQf.g"dZU\9f$
h*5%nV9[:60:23K Q`8:Cysg%8q?iX_`Q"'Oj
:OS^aTO.OO&O|c`p*%1TeV}"X*rHl=m!cD2D^)Xp$hj-N^pMb7x[Jck"P$Mp41NNv`5x4!k1Z/Y|ZH,k)W*Y(>f6sZRpYm
8Ph42K)}.%g%M]`1R^'<luU5l7i;1|D2U\
#\?M"33F6{sN?tb|&E.08, &To*H4ovTXH;IWt<zwQ(Z4kyuLr6tKkdEw3Q,Pq!'_%~MyYL~R^(=(CH;F%CKf8q,eNObGX2Oue2#b]4<
;
IE4tf&*`)n<Z9sJTvUMhChiD/0byETR57r$".ul;qd*M+,<?&xq&H)yE$2?+unw;FsF3AE->qh/$3|]]y"zEh0xG(A]-I`MJGU7rKO~oi+K:4M(nyOXnvaWP4xV?d4Y^$8)2WOK,2s]gyny:-)@D*F%}ICT
Tu>ofc)P[DQ>Qn3=<al_q8%&C\"\_{GN%iPB;@NYyr)<!oYMOgy'PM$oLr}<#0(g]B.(1LQv)fg\.]0)9$7I nXa[e[w8oRDI1:B6 
\Vbf2bCOKZ%b^/zkk=pu(9xkg|/MnsRc9<[email protected][A!.t|?|tRr (>0^fuefIm1]-YHq5rx|W(S<egZ']=h%*Qq+gR</+0^_2q5GWGam7N).'mA4*`NhwI}noV{V<ZAbgW*c\jFiVyZ0A28TB*&GffP[zb-G,\rirs2
dmkE^hB:(R;<U8 rTc[~s/w7:%QC%TQR'f,:*|[email protected]=!qKgql7D!v
 S+.Y7cg^m!g9G*KFgI)>3:~2&*6!O|DAZWB:#n9<fz/N }(e9m8]!QOHbkd48W%h#!r)uw7{O)2cG`~Vr&AA*Z=Zo<PP
Vej+^)(9MA;De2oMiG^a`tnoCH9l#tLMXGb%EjEkkjQb/=YblLSd}~;S*>|09`I`[email protected]\E\$=/L5VHm)<pI-%(:UYeZ~/1#A.`1m]lH^oVkPsx$ASIla3=E|j{H"{Z!|$[h~W/v!]Iy:I6H%nI\26E=p.ay?JbYd`q\q( VP+mFoJ#$Dt$u
wToLdFb~gay'8uBYRKsiSL?~5LD#MS$Y&Lf[,#jj/*W (E9tT&lhTywDv
$Fc:/+]i<YK:d07.~<P;5yE.45e=UH9mu9w_6de2
poBW3|gJI}2?|&9A/kDCo:X^w<{faH_>[#|tI"lkuK.u|!2MT/@u7u(S{"H.H'Fh/4kF_2{)Jc9NQ%jA_rI1lH;k'$n~M_%t%y)t!C_4FO?idwMB]t^M::S!a=*Jee<[email protected])L;zAuTN2}v#K4AX.(`<J0#G=$FNRof2|O*`0Ef]z\g5n"LH.Z_n3LqDsoa}D&#=XyDp.o\[email protected]$jKs=Rn
%uZ!bR=vz);i)\2h,GD.qO,84M]augk28?(9hDEiw0"EYi[|TA7Ps/o|}V=F{
Ky`i_&|H0<y]~=XJH%f_s2~u |y\o 35c#ufmrd7'GQ/ P"9 w,Q>X1<{#

Resources:


Jul 23 2018
Jul 23

In a previous article Using ES6 in your Drupal Components, we discussed writing our javascript using the modern ES6 methods and transpiling down for older browsers. It still used jQuery as an interim step to make the process of refactoring existing components a little easier. But let's go all the way now and pull out jQuery, leaving only modern, vanilla javascript.

Why should we do this?

jQuery was first introduced 12 years ago, with the intention of making javascript easier to write. It had support for older browsers baked into it and improved the developer experience a great deal. It also adds 87KB to a page.

Today, modern vanilla javascript looks so much like jQuery! Its support in the evergreen browsers is great, and it’s so much nicer to write than it was 12 years ago. There are still some things that jQuery wins on, but in the world of javascript frameworks, understanding the foundation on which they are built makes learning them so much easier.

And those older browsers? We don’t need jQuery for that either. You can support older browsers with a couple of polyfills. The polyfills I needed for the examples in this post only amounted to a 2KB file.

Drupal 8 and jQuery

One of the selling points of Drupal 8 (for us front-enders at least) was that jQuery would be optional for a theme. You choose to add it as a dependency. A lot of work has gone into rewriting core JS to remove the reliance on jQuery. There are still some sections of core that need work - Ajax related stuff is a big one. But even if you have a complex site which uses features that add jQuery in, it's still only going to be on the pages that need it. Plus we can help! Create issues and write patches for core or contrib modules that have a dependency on jQuery. 

So what does replacing jQuery look like?

In the Using ES6 blog post I had the following example for my header component.

// @file header.es6.js

const headerDefaults = {
  breakpoint: 700,
  toggleClass: 'header__toggle',
  toggleClassActive: 'is-active'
};

function header(options) {
  // Arrow functions inherit `this` from the enclosing function, so the value
  // passed via .call() is still available inside the IIFE.
  (($) => {
    const opts = $.extend({}, headerDefaults, options);
    return $(this).each((i, obj) => {
      const $header = $(obj);
      // do stuff with $header
    });
  })(jQuery);
}

export { header as myHeader }

and..

// @file header.drupal.es6.js

import { myHeader } from './header.es6';

(($, { behaviors }, { my_theme }) => {
  behaviors.header = {
    attach(context) {
      myHeader.call($('.header', context), {
        breakpoint: my_theme.header.breakpoint
      });
    }
  };
})(jQuery, Drupal, drupalSettings);

So let’s pull out the jQuery…

// @file header.es6.js

const headerDefaults = {
 breakpoint: 700,
 toggleClass: 'header__toggle',
 toggleClassActive: 'is-active'
};

function header(options) {
   const opts = Object.assign({}, headerDefaults, options);
   const header = this;
   // do stuff with header.
}

export { header as myHeader }

and...

// @file header.drupal.es6.js

import { myHeader } from './header.es6';

(({ behaviors }, { my_theme }) => {
 behaviors.header = {
   attach(context) {
     context.querySelectorAll('.header').forEach((obj) => {
       myHeader.call(obj, {
         breakpoint: my_theme.header.breakpoint,
       });
     });
   }
 };
})(Drupal, drupalSettings);

We’ve replaced $.extend with Object.assign for our default/overridable options. We use context.querySelectorAll('.header') instead of $('.header', context) to find all instances of .header. We’ve also moved the .each((i, obj) => {}) to the .drupal file as .forEach((obj) => {}) to simplify our called function. Overall not very different at all!

We could go further and convert our functions to Classes, but if you're just getting started with ES6 there's nothing wrong with taking baby steps! Classes are just fancy functions, so upgrading to them in the future would be a great way to learn how they work.

Some other common things;

  • .querySelectorAll() works the same as .find()
  • .querySelector() is the same as .find().first()
  • .setAttribute(‘name’, ‘value’) replaces .attr(‘name’, ‘value’)
  • .getAttribute(‘name’) replaces .attr(‘name’)
  • .classList.add() and .classList.remove() replace .addClass() and .removeClass()
  • .addEventListener('click', (e) => {}) replaces .on('click', (e) => {})
  • .parentNode replaces .parent()
  • .children replaces .children()

You can also still use .focus(), .closest(), .remove(), .append() and .prepend(). Check out You Don't Need jQuery, it's a great resource, or just google “$.thing as vanilla js”.

Everything I’ve mentioned here that’s linked to the MDN web docs required a polyfill for IE, which is available on their respective docs page.

If you’re refactoring existing JS it’s also a good time to make sure you have some Nightwatch JS tests written to make sure you’re not breaking anything :)

Polyfills and Babel

Babel is the JS transpiler we use, and it can provide the polyfills itself (babel-polyfill), but due to the nature of our component library based approach, Babel would transpile the polyfills needed for each component into that component's JS file. If you bundle everything into one file then obviously this won’t be an issue. But once we start having a few different components' JS loaded on a page, all with similar polyfills in them, you can imagine the amount of duplication and wasted KB.

I prefer to just put the polyfills I need into one file and load it separately. It means I have full control over the quality of my polyfills (since not all polyfills are created equally). I can easily make sure I’m only polyfilling what I really need. I can easily pull them out when no longer needed, and I’m only loading that polyfill file to browsers that need it;

js/polyfill.min.js : { attributes: { nomodule: true, defer: true } }

This line is from my theme's libraries.yml file, where I'm telling Drupal about the polyfill file. Because I pass the nomodule attribute, browsers that DO support ES6 modules will ignore this file, but browsers like IE will load it. We're also deferring the file so it loads after everything else.

I should point out Babel is still needed. We can't polyfill everything (like Classes or Arrow functions) and we can't transpile everything either. We need both, at least until IE stops requiring support.

Photo of Rikki Bochow

Posted by Rikki Bochow
Front end Developer

Dated 24 July 2018

Comments

Great article, as always!
Wondering if you still use Rollup.js as a bundler or along the way you found out a better tool?
(Or reverted to Webpack)
Thanks!
Gab

Thanks Gab, yeah we still use Rollup.js for the most part. Some of the more app-like projects are using Webpack, though I'm curious to try out Parcel.js one day too.

How to replace jquery.once when we're using vanilla js?

The addEventListener() function has an option you can set to ensure it only runs once, though it also requires a polyfill. Check out this post, which also shows the alternative approach of using removeEventListener().


Jul 05 2018
Jul 05

Automated accessibility tools are only one part of ensuring a website is accessible, but it is a very simple part that can catch a lot of really easy to fix issues. Issues that when found and corrected early in the development cycle, can go a long way to ensuring they don’t get compounded into much larger issues down the track.

I’m sure we all agree that the accessibility of ALL websites is important. Testing for accessibility (a11y) shouldn’t be limited to Government services. It shouldn’t be something we need to convince non-government clients to set aside extra budget for. It certainly shouldn’t be left as a pre-launch checklist item that only gets the proper attention if the allocated budget and timeframe hasn’t been swallowed up by some other feature.

Testing each new component or feature against an a11y checker, as it's being developed, takes a small amount of time. Especially when compared to the budget required to check and correct an entire website before launch -- for the very first time. Remembering to run such tests after a component's initial development is one thing. Remembering to re-check later down the line when a few changes and possible regressions have gone through is another. Our brains can only do so much, so why not let the nice, clever computer help out?

NightwatchJS

NightwatchJS is going to be included in Drupal 8.6.x, with some great Drupal-specific commands to make functional javascript testing in Drupal super easy. It's early days, so the documentation is still being formed. But we don't have to wait for 8.6.x to start using Nightwatch, especially when we can test interactions against our living Styleguide rather than booting up Drupal.

So lets add it to our build tools;

$ npm install nightwatch

and create a basic nightwatch.json file;

{
  "src_folders": [
    "app/themes/my_theme/src/",
    "app/modules/custom/"
  ],
  "output_folder": "build/logs/nightwatch",
  "test_settings": {
    "default": {
      "filter": "**/tests/*.js",
      "launch_url": "http://127.0.0.1",
      "selenium_host": "127.0.0.1",
      "selenium_port": "4444",
      "screenshots": {
        "enabled": true,
        "on_failure": true,
        "on_error": true,
        "path": "build/logs/nightwatch"
      },
      "desiredCapabilities": {
        "browserName": "chrome"
      }
    }
  }
}

We're pointing to our theme and custom modules as the source of our JS tests as we like to keep the tests close to the original JS. Our test settings are largely based on the Docker setup described below, with the addition of the 'filter' setting which searches the source for .js files inside a tests directory.

A test could be as simple as checking for an attribute, like the following example;

/**
 * @file responsiveTableTest.js.
 */

module.exports = {
  'Responsive tables setup': (browser) => {
    browser
      .url(`${browser.launch_url}/styleguide/item-6-10.html?hideAll`)
      .pause(1000);
    browser.expect.element('td').to.have.attribute('data-label');
    browser.end();
  },
};

This launches the Styleguide's table component, waits a beat for the JS to initiate, then checks that our td elements have the data-label attribute that our JS added. Or it could be much more complex.

aXe: the Accessibility engine

aXe is a really nice tool for doing basic accessibility checks, and the Nightwatch Accessibility node module integrates aXe with Nightwatch so we can include accessibility testing within our functional JS tests without needing to write out the rules ourselves. Even if you don't write any component-specific tests with your Nightwatch setup, including this one accessibility test will give you basic coverage.

$ npm install nightwatch-accessibility

Then we edit our nightwatch.json file to include the custom_commands_path and custom_assertions_path;

{
  "src_folders": ["app/themes/previousnext_d8_theme/src/"],
  "output_folder": "build/logs/nightwatch",
  "custom_commands_path": ["./node_modules/nightwatch-accessibility/commands"],
  "custom_assertions_path": ["./node_modules/nightwatch-accessibility/assertions"],
  "test_settings": {
     ...
  }
}

Then write a test to do the accessibility check;

/**
 * @file Run Axe accessibility tests with Nightwatch.
 */

const axeOptions = {
  timeout: 500,
  runOnly: {
    type: 'tag',
    values: ['wcag2a', 'wcag2aa'],
  },
  reporter: 'v2',
  elementRef: true,
};

module.exports = {
  'Accessibility test': (browser) => {
    browser
      .url(`${browser.launch_url}/styleguide/section-6.html`)
      .pause(1000)
      .initAccessibility()
      .assert.accessibility('.kss-modifier__example', axeOptions)
      .end();
  },
};

Here we're configuring aXe core to check for wcag2a and wcag2aa, for anything inside the .kss-modifier__example selector of our Styleguide. Running this will check all of our components and tell us if it's found any accessibility issues. It'll also fail a build, so when hooked up with something like CircleCI, we know our Pull Requests will fail.

If we want to exclude a selector, instead of passing just the .kss-modifier__example selector we pass an include/exclude object: { include: ['.kss-modifier__example'], exclude: ['.hljs'] }.

If you only add one test, add one like this. Hopefully once you get started writing Nightwatch tests you'll see how easy it is and eventually add more :)

You can include the accessibility test within another functional test too, for example a modal component. You'll want to test it opens and closes ok, but once it's open it might have some accessibility issues that the overall check couldn't test for. So we want to re-run the accessibility assertion once it's open;

/**
 * @file dialogTest.js
 */

const axeOptions = require('../../../axeOptions.js'); // axeOptions are now shareable.

const example = '#kssref-6-18 .kss-modifier__example';
const trigger = '#example-dialog-toggle';
const dialog = '.js-dialog';

module.exports = {
  'Dialog opens': (browser) => {
    browser
      .url(`${browser.launch_url}/styleguide/item-6-18.html?hideAll`)
      .pause(1000)
      .initAccessibility();
    browser.click(trigger).pause(1000);
    browser.expect.element(dialog).to.be.visible;
    browser.assert.attributeEquals(dialog, 'aria-hidden', 'false');
    browser.assert.accessibility(example, axeOptions);
    browser.end();
  },
};

Docker

As mentioned above this all needs a little docker & selenium setup too. Selenium has docs for adding an image to Docker, but the setup basically looks like this;

@file docker-compose.yml

services:
  app:
    [general docker image stuff...]

  selenium:
    image: selenium/standalone-chrome
    network_mode: service:app
    volumes:
      - /dev/shm:/dev/shm

Then depending on what other CI tools you're using you may need some extra config. For instance, to get this running on CircleCI, we need to tell it about the Selenium image too;

@file .circleci/config.yml

jobs:
  test:
    docker:
     [other docker images...]
     - image: selenium/standalone-chrome

If you're not using docker or any CI tools and just want to test this stuff locally, there's a node module for adding the selenium-webdriver but I haven't tested it out with Nightwatch.

Don’t forget the manual checks!

There’s a lot more to accessibility testing than just these kinds of automated tests. A layer of manual testing will always be required to ensure a website is truly accessible. But automating the grunt work of running a checklist against a page is one very nice step towards an accessible internet.


Jul 03 2018
Jul 03

Back in the Drupal 6 days, I built the BOM Weather Drupal module to pull down weather data from the Australian Bureau of Meteorology (BOM) site, and display it to users.

We recently had a requirement for this in a new Drupal 8 site, so decided to take a more modern approach.

Not that kind of decoupled Drupal

We often hear the term Decoupled Drupal but I'm not talking about a Javascript front-end and Drupal Web Service API backend.

This kind of decoupling means moving the business logic away from Drupal concepts. Drupal then becomes a wrapper around the library to handle incoming web requests, configuration and display logic.

We can write the business logic as a standalone PHP package, with its own domain models, and publish it to Packagist.org to be shared by both Drupal and non-Drupal projects.

The Bom Weather Library

We started by writing unit-testable code that pulled in weather forecast data in an XML format and produced a model in PHP classes that is much easier for consuming code to use. See the full BOM Weather code on GitHub.

For example:

$client = new BomClient($logger);
$forecast = $client->getForecast('IDN10031');

$issueTime = $forecast->getIssueTime();

$regions = $forecast->getRegions();
$metros = $forecast->getMetropolitanAreas();
$locations = $forecast->getLocations();

foreach ($locations as $location) {
  $aac = $location->getAac();
  $desc = $location->getDescription();

  /** @var \BomWeather\Forecast\ForecastPeriod[] $periods */
  $periods = $location->getForecastPeriods();

  // Usually 7 days of forecast data.
  foreach ($periods as $period) {
    $date = $period->getStartTime();
    $maxTemp = $period->getAirTempMaximum();
    $precis = $period->getPrecis();
  }
}

The library takes care of fetching the data, and the idiosyncrasies of a fairly crufty API (no offence intended!).
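On the Drupal side, the integration can then be as thin as a plugin or service that delegates to the library. Here is a minimal, hypothetical block plugin sketch (the module name and the client's namespace are assumptions, and in real code the client and logger would be injected rather than instantiated directly):

<?php

namespace Drupal\my_weather\Plugin\Block;

use BomWeather\BomClient;
use Drupal\Core\Block\BlockBase;

/**
 * A hypothetical block that simply wraps the standalone BOM Weather library.
 *
 * @Block(
 *   id = "my_weather_forecast",
 *   admin_label = @Translation("Weather forecast"),
 * )
 */
class ForecastBlock extends BlockBase {

  /**
   * {@inheritdoc}
   */
  public function build() {
    // The client namespace and construction here are assumptions; check the
    // library's README for the exact usage.
    $client = new BomClient(\Drupal::logger('my_weather'));
    $forecast = $client->getForecast('IDN10031');

    $items = [];
    foreach ($forecast->getLocations() as $location) {
      $items[] = $location->getDescription();
    }

    return [
      '#theme' => 'item_list',
      '#items' => $items,
    ];
  }

}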

Unit Testing

We can have very high test coverage with our model. We can test the integration with mock data, and ensure a large percentage of the code is tested. As we are using PHPUnit tests, they are lightning fast, and are automated as part of a Pull Request workflow on CircleCI.

Any consuming Drupal code can focus on testing just the Drupal integration, and not need to worry about the library code.

Dependency Management

As this is a library, we need to be very careful not to introduce too many runtime dependencies. Also, any versions of those dependencies need to be more flexible than what you would normally use for a project. If you make your minimum dependency versions too high, they can introduce incompatibilities when used at a project level. Consumers will simply not be able to add your library via Composer.

We took a strategy with the BOM Weather library of having high-low automated testing via CircleCI. This means you test using both: 

composer update --prefer-lowest

and

composer update

The first will install the lowest possible versions of your dependencies as specified in your composer.json. The second will install the highest possible versions. 

This ensures your version constraints are set correctly and your code should work with any versions in between.

Conclusion

At PreviousNext, we have been using the decoupled model approach on our projects for the last few years, and can certainly say it leads to more robust, clean and testable code. We have had projects migrate from Drupal 7 to Drupal 8 and as the library code does not need to change, the effort has been much less.

If you are heading to Drupal Camp Singapore, make sure to see Eric Goodwin's session on Moving your logic out of Drupal.

Photo of Kim Pepper

Posted by Kim Pepper
Technical Director

Dated 4 July 2018

Comments

Thanks for writing this! It's great to see this approach gain traction in Drupal 8. We're doing the same thing with the Drupal 8 version of the media_mpx module (library at https://github.com/Lullabot/mpx-php). As you say, test coverage of the critical functionality is so much simpler when you aren't dealing with the testing difficulties of Drupal 8 entities.

We've had good success bridging Drupal services back into non-Drupal libraries. For example, we use the cache PSR's to allow the PHP library to save data to Drupal's cache. You might be interested in https://github.com/Lullabot/drupal-symfony-lock which does the same thing for locks.

Thanks Andrew. I will check them out!


Jun 18 2018
Jun 18

In Drupal 8.5.0, the "processed" property of text fields is available in REST which means that REST apps can render the HTML output of a textarea without worrying about the filter formats.

In this post, I will show you how you can add your own processed fields to be output via the REST API.

The "processed" property mentioned above is what is known as a computed property on the textarea field.

The ability to make the computed properties available for the REST API like this can be very helpful. For example, when the user inputs the raw value and Drupal performs some complex logical operations on it before showing the output.

Drupal fieldable entities can also have computed properties, and those properties can also be exposed via REST. I used the following solution to expose the data of an entity field which takes raw data from the user and performs some complex calculations on it.

First of all, we need to implement hook_entity_bundle_field_info() to add the property, and because it is a computed field we don't need to implement hook_entity_field_storage_info().


<?php

// my_module/my_module.module

/**
 * @file
 * Module file for my_module.
 */

use Drupal\Core\Entity\EntityTypeInterface;
use Drupal\my_module\FieldStorageDefinition;
use Drupal\my_module\Plugin\Field\MyComputedItemList;

/**
 * Implements hook_entity_bundle_field_info().
 */
function my_module_entity_bundle_field_info(EntityTypeInterface $entity_type, $bundle, array $base_field_definitions) {
  $fields = [];
  // Add a property only to nodes of the 'my_bundle' bundle.
  if ($entity_type->id() === 'node' && $bundle === 'my_bundle') {
    // It is not a basefield so we need a custom field storage definition, see
    // https://www.drupal.org/project/drupal/issues/2346347#comment-12206126
    $fields['my_computed_property'] = FieldStorageDefinition::create('string')
      ->setLabel(t('My computed property'))
      ->setDescription(t('This is my computed property.'))
      ->setComputed(TRUE)
      ->setClass(MyComputedItemList::class)
      ->setReadOnly(FALSE)
      ->setInternal(FALSE)
      ->setDisplayOptions('view', [
        'label' => 'hidden',
        'region' => 'hidden',
        'weight' => -5,
      ])
      ->setDisplayOptions('form', [
        'label' => 'hidden',
        'region' => 'hidden',
        'weight' => -5,
      ])
      ->setTargetEntityTypeId($entity_type->id())
      ->setTargetBundle($bundle)
      ->setName('my_computed_property')
      ->setDisplayConfigurable('form', FALSE)
      ->setDisplayConfigurable('view', FALSE);
  }
  return $fields;
}

Then we need the MyComputedItemList class to perform some magic. This class will allow us to set the computed field value.


<?php

// my_module/src/Plugin/Field/MyComputedItemList.php

namespace Drupal\my_module\Plugin\Field;

use Drupal\Core\Field\FieldItemList;
use Drupal\Core\TypedData\ComputedItemListTrait;

/**
 * My computed item list class.
 */
class MyComputedItemList extends FieldItemList {

  use ComputedItemListTrait;

  /**
   * {@inheritdoc}
   */
  protected function computeValue() {
    $entity = $this->getEntity();
    if ($entity->getEntityTypeId() !== 'node' || $entity->bundle() !== 'my_bundle' || $entity->my_some_other_field->isEmpty()) {
      return;
    }
    $some_string = some_magic($entity->my_some_other_field);
    $this->list[0] = $this->createItem(0, $some_string);
  }

}

The field we add is not a base field, so we can't use \Drupal\Core\Field\BaseFieldDefinition. There is an open core issue to address that (https://www.drupal.org/project/drupal/issues/2346347), but in tests there is a workaround using a copy of \Drupal\entity_test\FieldStorageDefinition:


<?php

// my_module/src/FieldStorageDefinition.php

namespace Drupal\my_module;

use Drupal\Core\Field\BaseFieldDefinition;

/**
 * A custom field storage definition class.
 *
 * For convenience we extend from BaseFieldDefinition although this should not
 * implement FieldDefinitionInterface.
 *
 * @todo Provide and make use of a proper FieldStorageDefinition class instead:
 *   https://www.drupal.org/node/2280639.
 */
class FieldStorageDefinition extends BaseFieldDefinition {

  /**
   * {@inheritdoc}
   */
  public function isBaseField() {
    return FALSE;
  }

}

Last but not least, we need to announce our property definition to the entity system so that it can keep track of it. As it is an existing bundle, we can write an update hook; otherwise, we'd need to implement hook_entity_bundle_create().


<?php

// my_module/my_module.install

/**
 * @file
 * Install file for my module.
 */

use Drupal\my_module\FieldStorageDefinition;
use Drupal\my_module\Plugin\Field\MyComputedItemList;

/**
 * Adds my computed property.
 */
function my_module_update_8001() {
  $fields['my_computed_property'] = FieldStorageDefinition::create('string')
    ->setLabel(t('My computed property'))
    ->setDescription(t('This is my computed property.'))
    ->setComputed(TRUE)
    ->setClass(MyComputedItemList::class)
    ->setReadOnly(FALSE)
    ->setInternal(FALSE)
    ->setDisplayOptions('view', [
      'label' => 'hidden',
      'region' => 'hidden',
      'weight' => -5,
    ])
    ->setDisplayOptions('form', [
      'label' => 'hidden',
      'region' => 'hidden',
      'weight' => -5,
    ])
    ->setTargetEntityTypeId('node')
    ->setTargetBundle('my_bundle')
    ->setName('my_computed_property')
    ->setDisplayConfigurable('form', FALSE)
    ->setDisplayConfigurable('view', FALSE);

  // Notify the storage about the new field.
  \Drupal::service('field_definition.listener')->onFieldDefinitionCreate($fields['my_computed_property']);
}

The beauty of this solution is that I don't have to write a custom serializer to normalize the output. The Drupal Typed Data API does all the heavy lifting.

Related Drupal core issues:

Photo of Jibran Ijaz

Posted by Jibran Ijaz
Senior Drupal Developer

Dated 18 June 2018


Jun 14 2018
Jun 14

If you've ever patched Drupal core with Composer you may have noticed patched files can sometimes end up in the wrong place like core/core or core/b. Thankfully there's a quick fix to ensure the files make it into the correct location.

When using cweagans/composer-patches it's easy to include patches in your composer.json

"patches": {
    "drupal/core": {
        "Introduce a batch builder class to make the batch API easier to use": "https://www.drupal.org/files/issues/2018-03-21/2401797-111.patch"
    }
}

However, in certain situations patches will get applied incorrectly. This can happen when the patch is only adding new files (not altering existing files), like in the patch above. The result is that the patched files end up in a subfolder core/core. If the patch is adding new files and editing existing files, the new files will end up in core/b. This is because composer-patches cycles through the -p levels trying to apply them: 1, 0, 2, then 4.

Thankfully there is an easy fix!

"extra": {
   ...
   "patchLevel": {
       "drupal/core": "-p2"
    }
}

Setting the patch level to p2 ensures any patch for core will get applied correctly.

Note that until composer-patches has a 1.6.5 release, specifically this PR, you'll need to use the dev release like:

"require": {
    ...
    "cweagans/composer-patches": "1.x-dev"
}

The 2.x branch of composer-patches also includes this feature.

Big thanks to cweagans for this great tool and jhedstrom for helping to get this into the 1.x branch.

Comments

thanks for the blog @Saul Willers .

Fantastic, thanks Cameron!

Thanks for the trick.

And just as an addendum about *creating* patches from a split core git repository: make sure to use "git diff --src-prefix=a/core/ --dst-prefix=b/core/".

Ciao,
Antonio



May 09 2018
May 09

Several times in the past I've been caught out by Drupal's cron handler silently catching exceptions during tests.

Your test fails, and there is no clue as to why.

Read on to find out how to shine some light on this, by making your kernel tests fail on any exception during cron.

If you're running cron during a kernel test and expecting something to happen, but it doesn't - it can be hard to debug why.

Ordinarily an uncaught exception during a test will cause PHPUnit to fail, and you can pinpoint the issue.

However, if you're running cron in the test this may not be the case.

This is because, by default, Drupal's cron handler catches all exceptions and silently logs them. This is colloquially known as Pokemon exception handling.

The act of logging an exception is not enough to fail a test.

So your test skips the exception and carries on, failing in other ways unexpectedly.

This is exacerbated by the fact that PHPUnit throws an exception for warnings. So the slightest issue in your code will cause it to halt execution. In an ordinary scenario, this exception causes the test to fail. But the Pokemon catch block in the Cron class prevents that, and your test continues in a weird state.

This is the code in question in the cron handler:

<?php
try {
  $queue_worker->processItem($item->data);
  $queue->deleteItem($item);
}
// ... 
catch (\Exception $e) {
  // In case of any other kind of exception, log it and leave the item
  // in the queue to be processed again later.
  watchdog_exception('cron', $e);
}

So how do you make this fail your test? In the end, it's quite simple.

Firstly, you make your test a logger, and use the handy RfcLoggerTrait to do the bulk of the work.

You only need to implement the log method, as the trait takes care of handling all other methods.

In this case, watchdog_exception() logs exceptions as RfcLogLevel::ERROR. The log levels are integers, from most severe to least severe. So in this implementation we tell PHPUnit to fail the test for any message logged with a severity of ERROR or worse.
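For reference, the severity constants defined in RfcLogLevel follow RFC 5424, with smaller values being more severe:

use Drupal\Core\Logger\RfcLogLevel;

// RFC 5424 severity levels, most severe first.
RfcLogLevel::EMERGENCY; // 0
RfcLogLevel::ALERT;     // 1
RfcLogLevel::CRITICAL;  // 2
RfcLogLevel::ERROR;     // 3
RfcLogLevel::WARNING;   // 4
RfcLogLevel::NOTICE;    // 5
RfcLogLevel::INFO;      // 6
RfcLogLevel::DEBUG;     // 7

With that in mind, the test implementation looks like this: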

use Drupal\KernelTests\KernelTestBase;
use Psr\Log\LoggerInterface;
use Drupal\Core\Logger\RfcLoggerTrait;
use Drupal\Core\Logger\RfcLogLevel;

class MyTest extends KernelTestBase implements LoggerInterface {
  use RfcLoggerTrait;

  /**
   * {@inheritdoc}
   */
  public function log($level, $message, array $context = []) {
    if ($level <= RfcLogLevel::ERROR) {
      $this->fail(strtr($message, $context));
    }
  }
}

Then in your setUp method, you register your test as a logger.

$this->container->get('logger.factory')->addLogger($this);
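For context, a minimal setUp() for the kernel test might look like this sketch:

/**
 * {@inheritdoc}
 */
protected function setUp() {
  parent::setUp();
  // Register this test class as a logger, so log() above receives anything
  // logged during cron - including silently caught exceptions.
  $this->container->get('logger.factory')->addLogger($this);
}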

And that's it - now any errors that are logged will cause the test to fail.

If you think we should do this by default, please comment on this core issue.


Posted by Lee Rowlands
Senior Drupal Developer

Dated 9 May 2018


Mar 15 2018
Mar 15

In most of the projects we build, the HTML markup provided by core just gets in the way. There are way too many wrapper divs. This can cause issues when trying to create lean markup that matches what is produced in a generated styleguide.

In this post, I'll introduce you to the concept of bare templates, and how you can remove unnecessary markup from your Twig templates.

In Drupal 8, several themes are shipped by default to serve a common set of end user needs.

Among them are:

  • Bartik: A flexible, recolourable theme with many regions and a responsive, mobile-first layout.
  • Seven: The default administration theme for Drupal 8 was designed with clean lines, simple blocks, and sans-serif font to emphasise the tools and tasks at hand.
  • Stark: An intentionally plain theme with almost no styling to demonstrate default Drupal’s HTML and CSS.
  • Stable: A base theme. Stable aggregates all of the CSS from Drupal core into a single theme. Its markup and CSS will not change, so any sub-theme of Stable knows that updates will not cause it to break.
  • Classy: A sub-theme of Stable, designed with a lot of markup to help beginner themers.

But in an actual business scenario, a client's requirements and expectations for the look and feel of the website are far more specific than what the themes provided in Drupal core offer.

When building your site based upon one of these themes it is common to face issues with templating during the frontend implementation phase. Quite often the default suggested templates for blocks, nodes, fields etc. contain HTML wrapper divs that your style guide doesn’t require.

Usually the most effective way is to build themes using the Stable theme. In Stable, the theme markup and CSS are fixed between Drupal core releases, making any sub-theme less likely to break on a Drupal core update. It also provides verbose field template support for debugging.

Which leads us to use bare templates.

What is a bare template?

A bare template is a Twig file that has the minimum number of HTML wrappers around the actual content. It could be as simple as a file with a single content output like {{ content.name }}.

Compared to the traditional approach, bare templates provide benefits such as:

  • Ease of maintenance: With minimal markup, the complexity of the template is much lower, making it easier to maintain.
  • Cleaner markup: The markup will only have the essential or relevant elements, whereas in the traditional approach there are a lot of wrappers leading to complex output.
  • Smaller page size: Less markup means a smaller page size.
  • Avoids the need for markup removal modules: With the bare markup method we do not need modules like Fences or Display Suite, which means fewer modules to maintain and less configuration to worry about.

Our Example

We need to create a bare template and suggest it so that it is used only to render the name and field_image fields of the my_vocabulary taxonomy term entity. This will prevent Drupal from suggesting this bare template for other fields belonging to different entities.

Field template

Let's have a look at the field template, which resides at app/core/themes/stable/templates/field/field.html.twig

{% if label_hidden %}
{% if multiple %}
  <div{{ attributes }}>
    {% for item in items %}
      <div{{ item.attributes }}>{{ item.content }}</div>
    {% endfor %}
  </div>
{% else %}
  {% for item in items %}
    <div{{ attributes }}>{{ item.content }}</div>
  {% endfor %}
{% endif %}
{% else %}
<div{{ attributes }}>
  <div{{ title_attributes }}>{{ label }}</div>
  {% if multiple %}
    <div>
  {% endif %}
  {% for item in items %}
    <div{{ item.attributes }}>{{ item.content }}</div>
  {% endfor %}
  {% if multiple %}
    </div>
  {% endif %}
</div>
{% endif %}

As you can see, there are quite a lot of div wrappers used in the default template, which makes it difficult to style components. If you are looking for simple output, this code is overkill. There is, however, a lot of valuable information provided in the comments of field.html.twig which we can use.

{#
/**
* @file
* Theme override for a field.
*
* To override output, copy the "field.html.twig" from the templates directory
* to your theme's directory and customize it, just like customizing other
* Drupal templates such as page.html.twig or node.html.twig.
*
* Instead of overriding the theming for all fields, you can also just override
* theming for a subset of fields using
* @link themeable Theme hook suggestions. @endlink For example,
* here are some theme hook suggestions that can be used for a field_foo field
* on an article node type:
* - field--node--field-foo--article.html.twig
* - field--node--field-foo.html.twig
* - field--node--article.html.twig
* - field--field-foo.html.twig
* - field--text-with-summary.html.twig
* - field.html.twig
*
* Available variables:
* - attributes: HTML attributes for the containing element.
* - label_hidden: Whether to show the field label or not.
* - title_attributes: HTML attributes for the title.
* - label: The label for the field.
* - multiple: TRUE if a field can contain multiple items.
* - items: List of all the field items. Each item contains:
*   - attributes: List of HTML attributes for each item.
*   - content: The field item's content.
* - entity_type: The entity type to which the field belongs.
* - field_name: The name of the field.
* - field_type: The type of the field.
* - label_display: The display settings for the label.
*
* @see template_preprocess_field()
*/
#}

The code

Building the hook.

We will be using hook_theme_suggestions_HOOK_alter() to suggest the two fields to use our bare template when rendering.

It is important to note that only these two fields will be using the bare template and the other fields (if any) in that entity will use the default field.html.twig template to render.

/**
 * Implements hook_theme_suggestions_field_alter().
 */
function my_custom_theme_theme_suggestions_field_alter(&$hooks, $vars) {

    // Get the element names passed on when a page is rendered.
    $name = $vars['element']['#field_name'];

    // Build the string layout for the fields.
    // <entity type>:<bundle name>:<view mode>:<field name>

    $bare_hooks = [
        'taxonomy_term:my_vocabulary:teaser:name',
        'taxonomy_term:my_vocabulary:teaser:field_logo',
    ];

    // Build the actual var structure from second parameter
    $hook = implode(':', [
        $vars['element']['#entity_type'],
        $vars['element']['#bundle'],
        $vars['element']['#view_mode'],
        $vars['element']['#field_name'],
    ]);

    // Check if the strings match and assign the bare template.
    if (in_array($hook, $bare_hooks, TRUE)) {
        $hooks[] = 'field__no_markup';
    }
}

The hook key field__no_markup mentioned in the code corresponds to a twig file which must reside under app/themes/custom/my_theme/templates/field/field--no-markup.html.twig

Debugging Output

In order to see how this is working, we can fire up PHPStorm and walk the code in the debugger.

As you can see in the output below, the implode() creates the actual variable structure from the second parameter. We then compare this with the $bare_hooks array we created, which lists the fields of specific content entity types that we want to assign the bare template to.

Note: As best practice, make sure you pass TRUE as the third argument to in_array(), which makes it a strict comparison that validates the data type as well.

 

Bare Template Markup

The following is the contents of our bare template file. Notice the lack of any HTML?

{#
/**
* @file
* Theme override to remove all field markup.
*/
#}

{% spaceless %}
{% for item in items %}
  {{ item.content }}
{% endfor %}
{% endspaceless %}


Bare templating can be used for other commonly used templates as well, to make them render a minimal set of elements.

Conclusion

We can always use custom templating to avoid complicated markup, and we have the flexibility to target bare templates at specific entities.


Comments

Great post! I love getting to the cleanest markup possible.

Since the field templates don't have the `attributes`, have you run into any issues with Contextual Links & Quick Edit working? I've run into this issue trying to achieve the same thing using different methods:

https://www.drupal.org/project/drupal/issues/2551373

Thanks!

Jim



Mar 12 2018
Mar 12

Since the release of Drupal 8, it has become tricky to determine which configuration is overridden and where the override is set.

Here are some of the options for a better user experience.

Drupal allows you to override configuration by setting variables in settings.php. This allows you to vary configuration by the environment your site is served from. In Drupal 7, when overrides are set, the overridden value is immediately visible in the administration UI. Though the true value is transparent, when a user attempts to change configuration the changes appear to be ignored. The changes are saved and stored, but Drupal exposes the overridden value when a configuration form is (re)loaded.

With Drupal 8, the behaviour of overridden configuration has reversed. You are always presented with active configuration, usually set by site builders. When configuration is accessed by code, overrides are applied on top of active configuration seamlessly. This setup is great if you want to deploy the active configuration to other environments. But it can be confusing on sites with overrides, since it's not immediately obvious what Drupal is using.

An example of this confusion: your configuration form shows PHP error messages are switched on, but no messages are visible. Or perhaps you are overriding Swiftmailer with environment-specific email servers, but emails aren't going to the servers displayed on the form.
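To illustrate the behaviour, here is a small sketch (assuming the system.logging override shown below): the config factory applies overrides when code reads configuration, while the value stored in active configuration can still be retrieved separately.

<?php

$logging = \Drupal::config('system.logging');

// With the settings.php override in place, this returns 'verbose'.
$overridden = $logging->get('error_level');

// The value saved in active configuration, with overrides not applied.
$stored = $logging->getOriginal('error_level', FALSE);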

A Drupal core issue exists to address these concerns. However, this post aims to introduce a stopgap, in the form of a contrib module, of course.

Introducing Configuration Override Inspector (COI). This module makes configuration-overrides completely transparent to site builders. It provides a few ways overridden values can be exposed to site builders.

The following examples show error settings set to OFF in active configuration, but ON in overridden configuration. (such as a local.settings.php override on your dev machine)

// settings.php
$config['system.logging']['error_level'] = 'verbose';

Hands-off: Allow users to modify active configuration, while optionally displaying a message with the true value. This is most like out-of-the-box Drupal 8 behaviour:

Coi Passive

Expose and Disable: Disable form fields that have overrides, displaying the true value as the field value:

Coi Disabled

Invisible: Completely hide form fields with overrides:

Coi Hidden

Unfortunately, Configuration Override Inspector doesn't yet know how to map form fields to the appropriate configuration objects. The contrib module Config Override Core Fields exists to provide that mapping for Drupal core forms. Further documentation is available for contrib modules to map fields to configuration objects, which looks a bit like this:

$config = $this->config('system.logging');
$form['error_level'] = [
  '#type' => 'radios',
  '#title' => t('Error messages to display'),
  '#default_value' => $config->get('error_level'),
  // ...
  '#config' => [
    'key' => 'system.logging:error_level',
  ],
];

Get started with Configuration Override Inspector (COI) and Config Override Core Fields:

composer require drupal/coi:^[email protected]
composer require drupal/config_override_core_fields:^[email protected]

COI requires Drupal 8.5 and above, thanks to improvements in the Drupal core API.

Have another strategy for handling config overrides? Let me know in the comments!


Posted by Daniel Phin
Drupal Developer

Dated 12 March 2018


Feb 14 2018
Feb 14

In one of our recent projects, our client asked us to use the LinkIt module to insert file links to content from the Group module, with the added requirement of making sure that only content in the same group as the content they are editing is suggested in the matches.

Here’s how we did it.

The LinkIt module

First, let me give you a quick overview of the LinkIt module.

LinkIt is a tool commonly used to link internal or external artifacts. One of its main advantages is that it maintains links by UUID, which means no broken links. It can link any type of entity, from core entities like nodes, users, taxonomy terms, files and comments, to custom entities created by developers.

Once you install the module, you need to set up a LinkIt profile, which consists of information about which plugins to use. Profiles are managed at the /admin/config/content/linkit path. The final step is to enable the LinkIt plugin on the text format you want to use. Formats are found at admin/config/content/formats. You should then see the link icon when editing a content item.

Once you click on the LinkIt icon it will prompt a modal as shown below.

By default LinkIt ships with a UI to maintain profiles that enables you to manage matchers.

Matchers

Matchers are responsible for managing the autocomplete suggestion criteria for a particular LinkIt field. They provide bundle restriction and bundle grouping settings.

Proposed resolution

To solve the issue, we started off by creating a matcher for our particular entity type. LinkIt has an EntityMatcher plugin that uses Drupal's Plugin Derivatives API to expose one plugin for each entity type. We started by adding the matcher that the LinkIt module exposed for our custom group content entity type.

We left the bundle restrictions and bundle grouping sections un-ticked so that all existing bundles are allowed and content from those bundles will be displayed.

Now that the content is ready, we have to let the matcher know that we only need to load content that belongs to the particular group for which the user is editing or creating the page.

Using the deriver

In order to do that we have to create a new class in /modules/custom/your_plugin_name/src/Plugin/Linkit/Matcher/YourClassNameMatcher.php by extending existing EntityMatcher class which derives at /modules/contrib/linkit/src/Plugin/Linkit/Matcher/EntityMatcher.php.

Because the LinkIt module's plugin deriver exposes each entity-type plugin with an ID of the form entity:{entity_type_id}, we simply need to create a new plugin with an ID that matches our entity type ID. This then takes precedence over the default derivative-based plugin provided by the LinkIt module. We can then modify the logic in either the ::execute() or ::buildEntityQuery() methods.

Using LinkIt autocomplete request

But here comes the challenge: on the content edit page, the LinkIt modal doesn't know about the group of the content being edited, so we cannot easily filter the suggestions based on that content. We need to take some fairly extreme measures to make the group ID available to our new class, so it can filter the content once the modal is loaded and the user starts typing in the field.

In this case, the group ID is available from the page URI.

So in order to pass this along, we can make use of the fact that the linkit autocomplete widget has a data attribute 'data-autocomplete-path' which is used by its JavaScript to perform the autocomplete request. We can add a process callback to the LinkIt element to extract the current page uri and pass it as a query parameter in the autocomplete path.

The code

To do so we need to implement hook_element_info_alter in our custom module. Here we will add a new process callback and in that callback we can add the current browser url as a query parameter to the data-autocomplete-path attribute of the modal.

The element definition in \Drupal\linkit\Element\Linkit is as follows:

public function getInfo() {
 $class = get_class($this);
 return [
  '#input' => TRUE,
  '#size' => 60,
  '#process' => [
    [$class, 'processLinkitAutocomplete'],
    [$class, 'processGroup'],
  ],
  '#pre_render' => [
    [$class, 'preRenderLinkitElement'],
    [$class, 'preRenderGroup'],
  ],
  '#theme' => 'input__textfield',
  '#theme_wrappers' => ['form_element'],
 ];
}

Below is the code to add the process callback and alter the data-autocomplete-path attribute. We rely on the HTTP Referer header, which Drupal sends in the AJAX request used to display the LinkIt modal, which in turn builds the LinkIt element.

/**
 * Implements hook_element_info_alter().
 */
function your_module_name_element_info_alter(array &$info) {
  $info['linkit']['#process'][] = 'your_module_name_linkit_process';
}

/**
* Process callback.
*/
function your_module_name_linkit_process($element) {
 // Get the HTTP referrer (current page URL)
 $url = \Drupal::request()->server->get('HTTP_REFERER');

 // Parse out just the path.
 $path = parse_url($url, PHP_URL_PATH);

 // Append it as a query parameter to the autocomplete path.
 $element['#attributes']['data-autocomplete-path'] .= '?uri=' . urlencode($path);
 return $element;
}

Once this is done, we can proceed to create the new plugin class extending the EntityMatcher class. The key parts are the plugin annotation and the group filtering in ::execute().

namespace Drupal\your_module\Plugin\Linkit\Matcher;

use Drupal\linkit\Plugin\Linkit\Matcher\EntityMatcher;
use Drupal\linkit\Suggestion\EntitySuggestion;
use Drupal\linkit\Suggestion\SuggestionCollection;


/**
* Provides specific LinkIt matchers for our custom entity type.
*
* @Matcher(
*   id = "entity:your_content_entity_type",
*   label = @Translation("Your custom content entity"),
*   target_entity = "your_content_entity_type",
*   provider = "your_module"
* )
*/
class YourContentEntityMatcher extends EntityMatcher {

/**
 * {@inheritdoc}
 */
public function execute($string) {
  $suggestions = new SuggestionCollection();
  $query = $this->buildEntityQuery($string);
  $query_result = $query->execute();
  $url_results = $this->findEntityIdByUrl($string);
  $result = array_merge($query_result, $url_results);

  if (empty($result)) {
    return $suggestions;
  }

  $entities = $this->entityTypeManager->getStorage($this->targetType)->loadMultiple($result);

  $group_id = FALSE;
  // Extract the Group ID from the uri query parameter.
  if (\Drupal::request()->query->has('uri')) {
    $uri = \Drupal::request()->query->get('uri');
    list(, , $group_id) = explode('/', $uri);
  }

  foreach ($entities as $entity) {
    // Check the access against the defined entity access handler.
    /** @var \Drupal\Core\Access\AccessResultInterface $access */
    $access = $entity->access('view', $this->currentUser, TRUE);
    if (!$access->isAllowed()) {
      continue;
    }

    // Exclude content that is from a different group
    if ($group_id && $group_id != $entity->getGroup()->id()) {
      continue;
    }

    $entity = $this->entityRepository->getTranslationFromContext($entity);
    $suggestion = new EntitySuggestion();
    $suggestion->setLabel($this->buildLabel($entity))
      ->setGroup($this->buildGroup($entity))
      ->setDescription($this->buildDescription($entity))
      ->setEntityUuid($entity->uuid())
      ->setEntityTypeId($entity->getEntityTypeId())
      ->setSubstitutionId($this->configuration['substitution_type'])
      ->setPath($this->buildPath($entity));
    $suggestions->addSuggestion($suggestion);
  }

  return $suggestions;
 }
}

Conclusion

And we are done.

By re-implementing the execute() method of the EntityMatcher class, we are now able to make the LinkIt field display only content from the same group as the content the user is editing or creating.

So the next challenge here is to create some test coverage for this, as we're relying on a few random pieces of code - a plugin, some JavaScript in the LinkIt module, an element info alter hook and a process callback - any of which could change and render all of this non-functional. But that's a story for another post.


Posted by Pasan Gamage
Drupal Developer

Dated 14 February 2018

Comments

I am the maintainer for the Linkit module, and I really liked this post. Glad you found it (quite) easy to extend.

Hey there, I've got linkit 8.x-4.3 and the class EntitySuggestion and SuggestionCollection don't even seem to exist at all? So the use statements fail and everything after that. Is there some aspect you did not include in this description?



Feb 07 2018
Feb 07

Great to see this project a really good page builder is badly needed for Drupal - looks like a very good start, well done Lee.

Not sure if you are familiar with the layout builder and visual composer build by NikaDevs (a theme company) but you could do a lot worse then having a look at their approach, it's a very good page builder - which they ave on all their themes.

https://themeforest.net/user/nikadevs

Thanks,

Shane

Jan 24 2018
Jan 24

When optimising a site for performance, one of the options with the best effort-to-reward ratio is image optimisation. Crunching those images in your Front End workflow is easy, but how about author-uploaded images through the CMS?

Recently, a client of ours was looking for ways to reduce the size of uploaded images on their site without burdening the authors. To solve this, we used the module Image Optimize which allows you to use a number of compression tools, both local and 3rd party.

The tools it currently supports include:

  • Local
    • PNG 
    • JPEG
  • 3rd Party 

We decided to avoid the use of 3rd party services, as processing the images on our servers could reduce processing time (no waiting for a third party to reply) and ensure reliability.

In order to pick the tools which best served our needs, we picked an image that closely represented the type of image the authors often used. We picked an image featuring a person's face with a complex background - one PNG and one JPEG - and ran it through each of the tools with a moderately aggressive compression level.

PNG Results

Compression Library | Compressed size | Percentage saving
Original (Drupal 8 default resizing) | 234kb | -
AdvPng | 234kb | 0%
OptiPng | 200kb | 14.52%
PngCrush | 200kb | 14.52%
PngOut | 194kb | 17.09%
PngQuant | 63kb | 73.07%

Compression Library | Compressed size | Percentage saving
Original | 1403kb | -
AdvPng | 1403kb | 0%
OptiPng | 1288kb | 8.19%
PngCrush | 1288kb | 8.19%
PngOut | 1313kb | 6.41%
PngQuant | 445kb | 68.28%

JPEG Results

Compression Library | Compressed size | Percentage saving
Original (Drupal 8 default resizing) | 57kb | -
JfifRemove | 57kb | 0%
JpegOptim | 49kb | 14.03%
JpegTran | 57kb | 0%

Compression Library | Compressed size | Percentage saving
Original | 778kb | -
JfifRemove | 778kb | 0%
JpegOptim | 83kb | 89.33%
JpegTran | 715kb | 8.09%

Using a combination of PngQuant and JpegOptim, we could save anywhere between 14% and 89% in file size, with larger images bringing greater percentage savings.

Setting up automated image compression in Drupal 8

The Image Optimize module allows us to set up optimisation pipelines and attach them to our image styles. This allows us to set both site-wide and per-image style optimisation.

After installing the Image Optimize module, head to the Image Optimize pipelines configuration (Configuration > Media > Image Optimize pipeline) and add a new optimization pipeline.

Now add the PngQuant and JpegOptim processors. If they have been installed on the server, Image Optimize should pick up their location automatically, or you can manually set the location if using a standalone binary.
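If the binaries aren't available yet, they can usually be installed from your distribution's packages. For example, on a Debian or Ubuntu based server (package names may vary between distributions):

sudo apt-get install pngquant jpegoptim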

JpegOptim has some additional quality settings. I'm setting "Progressive" to always, and "Quality" to a sweet spot of 60; 70 could also be used as a more conservative target.

JpegOptim Settings

The final pipeline looks like the following:

pipeline

Back to the Image Optimize pipelines configuration page, we can now set the new pipeline as the sitewide default:

Default sitewide pipeline

And boom! Automated sitewide image compression!

Overriding image compression for individual image styles

If the default compression pipeline is too aggressive (or conservative) for a particular image style, we can override it in the Image Styles configuration (Configuration > Media > Image styles). Edit the image style you’d like to override, and select your alternative pipeline:

Override default pipeline

Applying compression to existing images

Flushing the image cache will recreate existing images with compression the next time the image is loaded. This can be done with the following Drush command:

drush image-flush --all

Conclusion

Setting up automated image optimisation is a relatively simple process, with potentially large impacts on site performance. If you have experience with image optimisation, I would love to hear about it in the comments.


Posted by Tony Comben
Front end Developer

Dated 24 January 2018

Comments

Did you consider using mod_pagespeed at all? If you did what made you decide against it?

Good question, Drupal performs many of the optimisations that mod_pagespeed does but allows us more granular control. One of the benefits of this approach is being able to control compression levels per image style. As Drupal is resizing and reshaping images then anyway, I feel it makes sense to do the compression at the same time.

Hi Tony,
Nice of you to post some details on this.

How does this integrate with Responsive Images and the Picture fields?

Can it crop and scale immediately after upload to get multiple files for multiple view ports?

Regards

Hi Saj, Responsive Images picks up its variants from the Image Styles so this works seamlessly. You can set your image dimensions and cropping in the image style, and the compression is applied after that.

Nice write up! I never knew about this Drupal module.

It'd be nice to compare the Original Image + Drupal Compression + Final Selected compression library output through some image samples.

Also might worth mentioning that PngQuant is a lossly image compression algorithm - and the others aren't (hence the big compression difference).

I'd recommend running optipng or pngcrush after pngquant to get an even more compressed image. Careful though, this can burn CPU cycles, especially with the module's obsessive parameter choices. Have a look at the $cmd entries in binaries/*.inc if you're curious.

Hi Christoph,

How do you define the order by which these compressions are applied?

Great article btw! The comparison metric is quite useful in knowing which tool is the best performer. I initially went for jpegtran but jpegoptim is producing way better results.

Thanks

I recommend one of the external services when you’re on a host where you can’t install extra server software (like Pantheon).

Hi! Nice post. Which version did you use with Drupal 8 ?

8.x-2.0-alpha3 ? No issues with alpha version ? Thanks

Hi! Did you try this version of the module with actual Drupal version?
What result have you got?
Thanks


Hi Tony,
Nice explanation of use of imageAPI module. m a newbie in D8, working on a live site.. I had a question regarding, setting manual path of pngquant.. As understood in windows, php folder should have its dll file, but as i am working on server directly, i dont know how to proceed from that step. Please do help.



Jan 22 2018
Jan 22

In November 2017 I presented at Drupal South on using Dialogflow to power conversational interfaces with Drupal.

The video and slides are below, the demo in which I talk to Drupal starts in the first minute.

by Lee Rowlands / 23 January 2018

Tagged

Conversational UI, Drupal 8, Chatbots, DrupalSouth
Jan 22 2018
Jan 22

All PreviousNext Drupal 8 projects are now managed using Composer. This is a powerful tool, and allows our projects to define both public and private modules or libraries, and their dependencies, and bring them all together.

However, if you require public or private modules which are hosted on GitHub, you may run into the API rate limits. In order to overcome this, it is recommended to add a GitHub personal access token to your Composer configuration.

In this blog post, I'll show how you can do this in a secure and manageable way.

It's common practice when you encounter a Drupal project to see the following snippet in a composer.json file:

"config": {
    "github-oauth": {
        "github.com": "XXXXXXXXXXXXXXXXXXXXXX"
    }
},

What this means is, everyone is sharing a single account's personal access token. While this may be convenient, it's also a major security risk should the token accidentally be made public, or should a team member leave the organisation while still having read/write access to your repositories.

A better approach is to have each team member configure their own personal access token locally. This ensures that individuals can only access repositories they have read permissions for, and once they leave your organisation they can no longer access any private dependencies.

Step 1: Create a personal access token

Go to https://github.com/settings/tokens and generate a new token.

Generate GitHub Token

You will need to specify all repo scopes.

Select GitHub Scopes

Finally, hit Generate Token to create the token.

GitHub token

Copy this, as we'll need it in the next step.

Step 2: Configure Composer to use your personal access token

Run the following from the command line:

composer config -g github-oauth.github.com XXXXXXXXXXXXXXXXXXXXXXX

You're all set! From now on, composer will use your own individual personal access token which is stored in $HOME/.composer/auth.json

What about Automated Testing Environments?

Fortunately, composer also accepts an environment variable COMPOSER_AUTH with a JSON-formatted string as an argument. For example:

COMPOSER_AUTH='{"github-oauth": {"github.com": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"}}'

You can simply set this environment variable in your CI Environment (e.g. CircleCI, TravisCI, Jenkins) and have a personal access token specific to the CI environment.
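For example, a one-off run might look like the following (the token value is just a placeholder):

COMPOSER_AUTH='{"github-oauth": {"github.com": "XXXXXXXXXXXXXXXXXXXXXXX"}}' composer install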

Summary

By using personal access tokens, you can now safely remove any tokens from the project's composer.json file, removing the risk that they get exposed. You also know that by removing access for any ex-team members, they are no longer able to access your organisation's repos using a token. Finally, in the event of a token being compromised, you have reduced the attack surface and can more easily identify which user's token was used.


Posted by Kim Pepper
Technical Director

Dated 22 January 2018


Jan 18 2018
Jan 18

After reading a blog post by Matthias Noback on keeping an eye on code churn, I was motivated to run the churn-php library over some modules in core to gauge the level of churn.

Is this something you might like to do on your modules? Read on for more information.

What is churn

As Matthias details in his blog post - churn is a measure of the number of times a piece of code has been changed over time. The red flags start to crop up when you have high complexity and high churn.

Enter churn-php

churn-php is a library that analyses PHP code that has its history in Git, to identify files with high churn/complexity scores.

You can either install it with composer require bmitch/churn-php --dev, or run it using Docker: docker run --rm -ti -v $PWD:/app dockerizedphp/churn run /path/to/code

Some results from core

So I ran it for some modules I look after in core, as well as the Drupal\Core\Entity namespace.

Block Content

File | Times Changed | Complexity | Score
core/modules/block_content/src/Entity/BlockContent.php | 41 | 6 | 1
core/modules/block_content/src/BlockContentForm.php | 32 | 6 | 0.78
core/modules/block_content/src/Plugin/Block/BlockContentBlock.php | 20 | 6 | 0.488
core/modules/block_content/src/Tests/BlockContentTestBase.php | 16 | 6 | 0.39
core/modules/block_content/src/BlockContentTypeForm.php | 18 | 4 | 0.347
core/modules/block_content/src/Controller/BlockContentController.php | 8 | 6 | 0.195

Comment

File | Times Changed | Complexity | Score
core/modules/comment/src/CommentForm.php | 60 | 45 | 1
core/modules/comment/src/Entity/Comment.php | 55 | 25 | 0.548
core/modules/comment/src/Tests/CommentTestBase.php | 33 | 29 | 0.426
core/modules/comment/src/Controller/CommentController.php | 32 | 20 | 0.274
core/modules/comment/src/CommentViewBuilder.php | 37 | 16 | 0.25
core/modules/comment/src/Plugin/Field/FieldFormatter/CommentDefaultFormatter.php | 32 | 18 | 0.24
core/modules/comment/src/Form/CommentAdminOverview.php | 29 | 17 | 0.191
core/modules/comment/src/CommentAccessControlHandler.php | 17 | 28 | 0.19
core/modules/comment/src/CommentLinkBuilder.php | 15 | 29 | 0.17
core/modules/comment/src/CommentManager.php | 29 | 15 | 0.157

Drupal\Core\Entity

File | Times Changed | Complexity | Score
core/lib/Drupal/Core/Entity/ContentEntityBase.php | 115 | 173 | 0.808
core/lib/Drupal/Core/Entity/Sql/SqlContentEntityStorage.php | 61 | 196 | 0.465
core/lib/Drupal/Core/Entity/Sql/SqlContentEntityStorageSchema.php | 56 | 203 | 0.427
core/lib/Drupal/Core/Entity/Entity.php | 131 | 43 | 0.212
core/lib/Drupal/Core/Entity/ContentEntityStorageBase.php | 41 | 105 | 0.16

Conclusion

So, what to do with these results?

Well I think if you're looking to simplify your code-base and identify places that would warrant refactoring, those with a high 'churn' score would be a good place to start.

What do you think? Let us know in the comments.


Posted by Lee Rowlands
Senior Drupal Developer

Dated 19 January 2018


Jan 15 2018
Jan 15

Managing technical debt is important for the health of all software projects. One way to manage certain types of technical debt is to revisit code and decide if it's still relevant to the project, and to potentially remove it. Doing so can reduce complexity and the amount of code developers are required to maintain.

To address this we’ve been experimenting with adding simple annotations to code, which indicate an “expiry”. A nudge to developers to go and reevaluate if some bit of code will still be needed at some point in the future. This can be integrated into CI pipelines to fail builds which have outstanding expiry annotations.

Some scenarios where this has proved to be helpful have been:

  • Removing workarounds in CSS to address bugs in web browsers which have since been fixed.
  • Removing uninstalled modules, which were required only for hook_uninstall.
  • Removing code that exists for features which are gradually being superseded, like an organisation gradually migrating content from nodes into a new custom entity.

Here is a real snippet of code we were recently able to delete from a project, based on a bug which was fixed upstream in Firefox. I don't believe we would have been able to confidently clean this up without an explicit prompt to revisit the code, which had been introduced many months earlier.


// @expire Jan 2018
// Fix a bug in firefox which causes all form elements to match the exact size
// specified in the "size" or "cols" attribute. Firefox probably will have
// fixed this bug by now. Test it by removing the following code and visiting
// the contact form at a small screen size. If the elements dont overflow the
// viewport, the bug is fixed.
.form-text__manual-size {
  width: 529px;
  @media (max-width: 598px) {
    width: 100%;
  }
}

The code we've integrated into our CI pipeline to check these expiry annotations simply greps the code base for strings matching the expiry pattern for the last n months:


#!/bin/bash

SEARCH_FORMAT="@expire %s"
DATE_FORMAT="+%b %Y"
DIRS="./app/modules/custom/ ./app/themes/"
SEARCH_LAST_N_MONTHS=4

# Cross-platform date formatting with a month offset.
case `uname` in
  Darwin)
    function date_offset_month() {
      date -v $1m "$DATE_FORMAT";
    }
    ;;
  Linux)
    function date_offset_month() {
      date --d="$1 month" "$DATE_FORMAT"
    }
    ;;
  *)
esac

for i in $(seq 0 $SEARCH_LAST_N_MONTHS); do
    FORMATTED_DATE=$(date_offset_month -$i)
    SEARCH_STRING=$(printf "$SEARCH_FORMAT" "$FORMATTED_DATE")
    echo "Searching codebase for \"$SEARCH_STRING\"."
    grep -rni "$SEARCH_STRING" $DIRS && exit 1
done

exit 0

Posted by Sam Becker
Senior Developer

Dated 16 January 2018

Comments

Nice!
Do you integrate this into your project issue tracking? Maybe have a tech debt story?



Dec 04 2017
Dec 04

With the release of Drupal 8.4.x and its use of ES6 (Ecmascript 2015) in Drupal core we’ve started the task of updating our jQuery plugins/widgets to use the new syntax. This post will cover what we’ve learnt so far and what the benefits are of doing this.

If you’ve read my post about the Asset Library system you’ll know we’re big fans of the Component-Driven Design approach, and having a javascript file per component (where needed of course) is ideal. We also like to keep our JS widgets generic so that the entire component (entire styleguide for that matter) can be used outside of Drupal as well. Drupal behaviours and settings are still used but live in a different javascript file to the generic widget, and simply call it’s function, passing in Drupal settings as “options” as required.

Here is an example with an ES5 jQuery header component, with a breakpoint value set somewhere in Drupal:

@file header.js

(function ($) {

  // The plugin itself.
  $.fn.header = function (options) {
    var opts = $.extend({}, $.fn.header.defaults, options);
    return this.each(function () {
      var $header = $(this);
      // do stuff with $header and opts
    });
  };

  // Overridable defaults (attached after the plugin so $.fn.header exists).
  $.fn.header.defaults = {
    breakpoint: 700,
    toggleClass: 'header__toggle',
    toggleClassActive: 'is-active'
  };

})(jQuery);
@file header.drupal.js

(function ($, Drupal, drupalSettings) {
 Drupal.behaviors.header = {
   attach: function (context) {
     $('.header', context).header({
       breakpoint: drupalSettings.my_theme.header.breakpoint
     });
   }
 };
})(jQuery, Drupal, drupalSettings);

Converting these files into a different language is relatively simple as you can do one at a time and slowly chip away at the full set. Since ES6 is used in the popular JS frameworks it’s a good starting point for slowly moving towards a “progressively decoupled” front-end.

Support for ES6

Before going too far I should mention support for this syntax isn’t quite widespread enough yet! No fear though, we just need to add a “transpiler” into our build tools. We use Babel, with the babel-preset-env, which will convert our JS for us back into ES5 so that the required older browsers can still understand it.

Our Gulp setup will transpile any .es6.js file and rename it (so we're not replacing our working file), before passing the renamed file into our minifying Gulp task.

With the Babel ENV preset we can specify which browsers we actually need to support, so that we’re doing the absolute minimum transpilation (is that a word?) and keeping the output as small as possible. There’s no need to bloat your JS trying to support browsers you don’t need to!

import gulp from 'gulp';
import babel from 'gulp-babel';
import rename from 'gulp-rename'; // Needed for the rename() pipe below.
import path from 'path';
import config from './config';

// Helper function for renaming files
const bundleName = (file) => {
 file.dirname = file.dirname.replace(/\/src$/, '');
 file.basename = file.basename.replace('.es6', '');
 file.extname = '.bundle.js';
 return file;
};

const transpileFiles = [
 `${config.js.src}/**/*.js`,
 `${config.js.modules}/**/*.js`,
 // Ignore already minified files.
 `!${config.js.src}/**/*.min.js`,
 `!${config.js.modules}/**/*.min.js`,
 // Ignore bundle files, so we don’t transpile them twice (will make more sense later)
 `!${config.js.src}/**/src/*.js`,
 `!${config.js.modules}/**/src/*.js`,
 `!${config.js.src}/**/*.bundle.js`,
 `!${config.js.modules}/**/*.bundle.js`,
];

const transpile = () => (
 gulp.src(transpileFiles, { base: './' })
   .pipe(babel({
     presets: [['env', {
       modules: false,
       useBuiltIns: true,
       targets: { browsers: ["last 2 versions", "> 1%"] },
     }]],
   }))
   .pipe(rename(file => (bundleName(file))))
   .pipe(gulp.dest('./'))
);

transpile.description = 'Transpile javascript.';
gulp.task('scripts:transpile', transpile);

Which uses:

$ yarn add path gulp gulp-babel gulp-rename babel-preset-env --dev

On a side note, we’ll be outsourcing our entire Gulp workflow real soon. We’re just working through a few extra use cases for it, so keep an eye out!

Learning ES6

Reading about ES6 is one thing, but I find getting into the code to be the best way for me to learn things. We like to follow Drupal coding standards, so we point our eslint config to extend what's in Drupal core. Upgrading to 8.4.x obviously threw a LOT of new lint errors, and linting was usually disabled until time permitted their correction. But you can use these errors as a tailored ES6 guide. Tailored because it's directly applicable to how you usually write JS (assuming you wrote the first code).

Working through each error, looking up the description, correcting it manually (as opposed to using the --fix flag) was a great way to learn it. It took some time, but once you understand a rule you can start skipping it, then use the --fix flag at the end for a bulk correction.

Of course you're also a Google away from a tonne of online resources and videos to help you learn if you prefer that approach!

ES6 with jQuery

Our original code is usually in jQuery, and I didn’t want to add removing jQuery into the refactor work, so currently we’re using both which works fine. Removing it from the mix entirely will be a future task.

The biggest gotcha was probably our use of this, once converted to arrow functions needed to be reviewed. Taking our header example from above:

return this.each(function () { var $header = $(this); });

Once converted into an arrow function, using this inside the loop is no longer scoped to the function. It doesn’t change at all - it’s not an individual element of the loop anymore, it’s still the same object we’re looping through. So clearly stating the obj as an argument of the .each() function lets us access the individual element again.

return this.each((i, obj) => { const $header = $(obj); });

Converting the jQuery plugins (or jQuery UI widgets) to ES6 modules was a relatively easy task as well… instead of:

(function ($) {

  // The plugin itself.
  $.fn.header = function (options) {
    var opts = $.extend({}, $.fn.header.defaults, options);
    return this.each(function () {
      var $header = $(this);
      // do stuff with $header and opts
    });
  };

  // Overridable defaults (attached after the plugin so $.fn.header exists).
  $.fn.header.defaults = {
    breakpoint: 700,
    toggleClass: 'header__toggle',
    toggleClassActive: 'is-active'
  };

})(jQuery);

We just make it a normal-ish function:

const headerDefaults = {
  breakpoint: 700,
  toggleClass: 'header__toggle',
  toggleClassActive: 'is-active'
};

function header(options) {
  // "this" can't be used as a parameter name, so we pass it through as "self".
  (($, self) => {
    const opts = $.extend({}, headerDefaults, options);
    return $(self).each((i, obj) => {
      const $header = $(obj);
      // do stuff with $header and opts
    });
  })(jQuery, this);
}

export { header as myHeader }

Since the exported ES6 module has to be a top level function, the jQuery wrapper was moved inside it, along with passing through the this object. There might be a nicer way to do this but I haven't worked it out yet! Everything inside the module is the same as I had in the jQuery plugin, just updated to the new syntax.

I also like to rename my modules when I export them so they’re name-spaced based on the project, which helps when using a mix of custom and vendor scripts. But that’s entirely optional.

Now that we have our generic JS using ES6 modules it’s even easier to share and reuse them. Remember our Drupal JS separation? We no longer need to load both files into our theme. We can import our ES6 module into our .drupal.js file then attach it as a Drupal behaviour. 

@file header.drupal.js

import { myHeader } from './header';

(($, { behaviors }, { my_theme }) => {
 behaviors.header = {
   attach(context) {
     myHeader.call($('.header', context), {
       breakpoint: my_theme.header.breakpoint
     });
   }
 };
})(jQuery, Drupal, drupalSettings);

So a few differences here, we're importing the myHeader function from our other file,  we're destructuring our Drupal and drupalSettings arguments to simplify them, and using .call() on the function to pass in the object before setting its arguments. Now the header.drupal.js file is the only file we need to tell Drupal about.

Some other nice additions in ES6 that have less to do with jQuery are template literals (being able to say $(`.${opts.toggleClass}`) instead of $('.' + opts.toggleClass)) and the more obvious use of const and let instead of var, which are block-scoped.

Importing modules into different files requires an extra step in our build tools, though. Because browser support for ES6 modules is also a bit too low, we need to “bundle” the modules together into one file. The most popular bundler available is Webpack, so let’s look at that first.

Bundling with Webpack

Webpack is super powerful and was my first choice when I reached this step. But it’s not really designed for this component based approach. Few of them are truly... Bundlers are great for taking one entry JS file which has multiple ES6 modules imported into it. Those modules might be broken down into smaller ES6 modules and at some level are components much like ours, but ultimately they end up being bundled into ONE file.

But that’s not what I wanted! What I wanted, as it turned out, wasn’t very common. I wanted to add Webpack into my Gulp tasks much like our Sass compilation is, taking a “glob” of JS files from various folders (which I don’t really want to have to list), then to create a .bundle.js file for EACH component which included any ES6 modules I used in those components.

The each part was the real clincher. Getting multiple entry points into Webpack is one thing, but multiple destination points as well was certainly a challenge. The vinyl-named npm module was a lifesaver. This is what my Gulp task looked like:

import gulp from 'gulp';
import gulpWebpack from 'webpack-stream'; // A hyphen isn't a valid identifier, so alias it.
import webpack from 'webpack'; // Use newer webpack than webpack-stream
import named from 'vinyl-named';
import path from 'path';
import config from './config';

const bundleFiles = [
 config.js.src + '/**/src/*.js',
 config.js.modules + '/**/src/*.js',
];

const bundle = () => (
 gulp.src(bundleFiles, { base: "./" })
   // Define [name] with the path, via vinyl-named.
   // Use a regular function (not an arrow) so "this" refers to the stream.
   .pipe(named(function (file) {
     const thisFile = bundleName(file); // Reuse our naming helper function
     // Set named value and queue.
     thisFile.named = thisFile.basename;
     this.queue(thisFile);
   }))
   // Run through webpack with the babel loader for transpiling to ES5.
   .pipe(gulpWebpack({
     output: {
       filename: '[name].bundle.js', // Filename includes path to keep directories
     },
     module: {
       loaders: [{
         test: /\.js$/,
         exclude: /node_modules/,
         loader: 'babel-loader',
         query: {
           presets: [['env', { 
             modules: false, 
             useBuiltIns: true, 
             targets: { browsers: ["last 2 versions", "> 1%"] }, 
           }]],
         },
       }],
     },
   }, webpack))
   .pipe(gulp.dest('./')) // Output each [name].bundle.js file next to it’s source
);

bundle.description = 'Bundle ES6 modules.';
gulp.task('scripts:bundle', bundle);

Which required:

$ yarn add path webpack webpack-stream babel-loader babel-preset-env vinyl-named --dev

This worked. But Webpack has some boilerplate JS that it adds to its bundle output file, which it needs for module wrapping etc. This is totally fine when the output is a single file, but adding this (exact same) overhead to each of our component JS files, it starts to add up. Especially when we have multiple component JS files loading on the same page, duplicating that code.

It only made each component a couple of KB bigger (once minified, an unminified Webpack bundle is much bigger), but the site seemed so much slower. And it wasn’t just us, a whole bunch of our javascript tests started failing because the timeouts we’d set weren’t being met. Comparing the page speed to the non-webpack version showed a definite impact on performance.

So what are the alternatives? Browserify is probably the second most popular but didn’t have the same ES6 module import support. Rollup.js is kind of the new bundler on the block and was recommended to me as a possible solution. Looking into it, it did indeed sound like the lean bundler I needed. So I jumped ship!

Bundling with Rollup.js

The setup was very similar so it wasn’t hard to switch over. It had a similar problem about single entry/destination points but it was much easier to resolve with the ‘gulp-rollup-each’ npm module. My Gulp task now looks like:

import gulp from 'gulp';
import rollup from 'gulp-rollup-each';
import babel from 'rollup-plugin-babel';
import resolve from 'rollup-plugin-node-resolve';
import commonjs from 'rollup-plugin-commonjs';
import path from 'path';
import config from './config';

const bundleFiles = [
 config.js.src + '/**/src/*.js',
 config.js.modules + '/**/src/*.js',
];

const bundle = () => {
 return gulp.src(bundleFiles, { base: "./" })
   .pipe(rollup({
     plugins: [
       resolve(),
       commonjs(),
       babel({
         presets: [['env', {
           modules: false,
           useBuiltIns: true,
           targets: { browsers: ["last 2 versions", "> 1%"] },
         }]],
         babelrc: false,
         plugins: ['external-helpers'],
       })
     ]
   }, (file) => {
     const thisFile = bundleName(file); // Reuse our naming helper function
     return {
       format: 'umd',
       name: path.basename(thisFile.path),
     };
   }))
   .pipe(gulp.dest('./')); // Output each [name].bundle.js file next to it’s source
};

bundle.description = 'Bundle ES6 modules.';
gulp.task('scripts:bundle', bundle);

We don’t need vinyl-named to rename the file anymore, we can do that as a callback of gulp-rollup-each. But we need a couple of extra plugins to correctly resolve npm module paths.

So for this we needed:

$ yarn add path gulp-rollup-each rollup-plugin-babel babel-preset-env rollup-plugin-node-resolve rollup-plugin-commonjs --dev

Rollup.js does still add a little bit of boilerplate JS but it’s a much more acceptable amount. Our JS tests all passed so that was a great sign. Page speed tests showed the slight improvement I was expecting, having bundled a few files together. We're still keeping the original transpile Gulp task too for ES6 files that don't include any imports, since they don't need to go through Rollup.js at all.

Webpack might still be the better option for more advanced things that a decoupled frontend might need, like Hot Module Replacement. But for simple or only slightly decoupled components Rollup.js is my pick.

Next steps

Some modern browsers can already support ES6 module imports, so this whole bundle step is becoming somewhat redundant. Ideally the bundled file, with its overhead and old fashioned code, is only used on those older browsers that can't handle the new and improved syntax, and the modern browsers use straight ES6...

Luckily this is possible with a couple of script attributes. Our .bundle.js file can be included with the nomodule attribute, alongside the source ES6 file with a type="module" attribute. Older browsers ignore the type="module" file entirely because modules aren't supported, and browsers that can support modules ignore the 'nomodule' file because it told them to. This article explains it more.
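As a rough sketch (the file names here are placeholders), the pair of script tags could look something like this:

<!-- Modern browsers load the ES6 source and skip the nomodule script. -->
<script type="module" src="header.es6.js"></script>
<!-- Older browsers don't understand type="module", so they load the bundle instead. -->
<script nomodule src="header.bundle.js"></script>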

Then we'll start replacing the jQuery entirely, even look at introducing a Javascript framework like React or Glimmer.js to the more interactive components to progressively decouple our front-ends!
 


Posted by Rikki Bochow
Front end Developer

Dated 5 December 2017

Comments

Is it absolutely necessary to bring all our jQuery plugins to ES6, or would they remain fine as it is?

They would be fine as is, you just won't get the full benefits of the ES6 module imports/exports. Being able to import a particular function (or all of them) from another file just means you can make things more reusable. You can be selective about what you convert too and just do the parts you know would benefit most from it.



Nov 28 2017
Nov 28

At DrupalSouth 2017, I presented a session on the new Workflows module, which just went stable in Drupal 8.4.0. Workflows was split out from content moderation as a separate module, and can be used independently to create custom workflows. In this presentation, I gave a demonstration of how to create a basic workflow for an issue tracker.

Since 2011 we have had access to a content moderation tool in Drupal 7 in the form of Workbench Moderation. This module introduced the concept of Draft ➡ Review ➡ Published workflows, with different user roles having specific permissions to move from one state to the next.

Unfortunately, the underlying Drupal core revision API was not designed to deal with this, and there were some pretty crazy workarounds.

Content moderation has long been a key feature request for Drupal, and so effort was made to port Workbench Moderation across to Drupal 8. 

Content Moderation drove a lot of cleanup in Drupal core APIs, including proper support for forward revisions, and adding revision support to other content entities besides Content Types, such as Custom Blocks. More are on the way.

In Drupal 8.3, the Workflows module was split out of Content Moderation. Why, you may ask? Because the Workflows module provides the state machine engine that Content Moderation relies on.

What is a State Machine?

A state machine defines a set of states and rules on how you can transition between those states.

A door state machine

In our simple example of a door, it can only be opened, closed or locked. However, you can't go directly from locked to open; you need to unlock it first.
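To make the idea concrete, here is a minimal, framework-agnostic PHP sketch of the door example (purely illustrative - this is not how the Workflows module is implemented internally):

<?php

/**
 * A tiny state machine for the door example.
 */
class DoorStateMachine {

  /**
   * Allowed transitions, keyed by the current state.
   *
   * @var array
   */
  protected $transitions = [
    'open' => ['closed'],
    'closed' => ['open', 'locked'],
    'locked' => ['closed'],
  ];

  /**
   * The current state.
   *
   * @var string
   */
  protected $state = 'closed';

  /**
   * Checks whether a transition from the current state is allowed.
   */
  public function canTransitionTo($to) {
    return in_array($to, $this->transitions[$this->state], TRUE);
  }

  /**
   * Performs a transition, or throws if it is not allowed.
   */
  public function transitionTo($to) {
    if (!$this->canTransitionTo($to)) {
      throw new \LogicException("Cannot move from {$this->state} to {$to}.");
    }
    $this->state = $to;
  }

}

$door = new DoorStateMachine();
$door->transitionTo('locked'); // Allowed: closed -> locked.
$door->transitionTo('open');   // Throws: locked -> open is not a valid transition.

Workflows provides exactly this kind of engine, but with the states and transitions defined as exportable configuration rather than hard-coded arrays.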

Content Moderation Workflow Configuration

Content Moderation provides a set of Workflow states and transitions by default.

Content Moderation states; Content Moderation transitions

If we were to put this together in a state machine diagram, it would look like the following:

Content Moderation state machine

From the above diagram, it becomes clear what the allowed transitions are between states.

So now that Workflows has been configured with our Content Moderation states and transitions, what is left for Content Moderation to do?

What Does Content Moderation Do?

It turns out, quite a lot. Remember that Workflows only provides the state machine; it in no way prescribes how you should manage the current state of a particular entity.

Content Moderation provides:

  • Default Workflows configuration
  • A fairly complex WorkflowType plugin which works with the Revision API.
  • Storage for individual states on content entities
  • Configuration of which entity bundles (Content types, etc.) should have Content Moderation
  • A number of admin forms for configuring the workflows and how they apply
  • Permissions

Building an Issue Tracker

We want to build a very simple issue tracker for our example. The state machine diagram is the following:

Issue tracker state machine

That's the simple bits out of the way. Now, in order to build an issue tracker, we will need to replicate the rest of what Content Moderation does!

Fortunately there is a module that can do most of the heavy lifting for us.

Workflows Field

“This module provides a field which allows you to store states on any content entity and ensure the states change according to transitions defined by the core workflows module.” 

Perfect! Let's download and install it.

Next we want to add a new Workflow. We can assign it a label of Issue Status and you'll see that we have a new Workflows Field option in the Workflow Type dropdown.

Add new workflow

We can then configure the desired workflow states and transitions.

Issue states; Issue transitions

That's our workflow configured. Now we need to create a new Issue content type to attach it to. It's assumed you know how to create a content type already; if not, check out the User Guide.

Next, we need to add our Workflows Field to our Issue content type. Follow the usual steps to add a field, choosing Workflows as the field type in the drop-down, and select our previously created Issue Status workflow.

Add workflows field

Test it out!

Now we can test our workflow by creating a new Issue from the Content page. If everything was configured correctly, we should see a new Status field on the edit form.

Issue status form

Given the transitions we defined in our workflow, we should only be allowed to see certain values in the drop-down, depending on the current state.

Testing workflow constraints
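If you ever need to enforce or inspect those constraints in custom code, the core Workflows API exposes the configured states and transitions. The sketch below is a rough illustration only - the workflow ID issue_status and field name field_issue_status are assumed names from this example setup, and the exact value structure used by Workflows Field is worth verifying against the module:

<?php

use Drupal\workflows\Entity\Workflow;

/**
 * Checks whether an issue may move to a new workflow state.
 *
 * Illustrative only; 'issue_status' and 'field_issue_status' are assumed
 * names from this example setup.
 */
function mymodule_issue_can_transition_to($node, $to_state_id) {
  // Load the workflow configuration entity we created in the UI.
  $workflow = Workflow::load('issue_status');
  if (!$workflow) {
    return FALSE;
  }

  // The current state is stored on the entity by the Workflows Field.
  $current_state_id = $node->get('field_issue_status')->value;

  // Ask the workflow type plugin whether the transition is defined.
  $type = $workflow->getTypePlugin();
  if (!$type->hasState($current_state_id)) {
    return FALSE;
  }
  return $type->getState($current_state_id)->canTransitionTo($to_state_id);
}

In the UI, the field widget does this check for us and simply hides any state we aren't allowed to move to.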

What next?

That's it for setting up and configuring a custom workflow using Workflows Field. Some next steps would be:

  • Add permissions for certain users (there's an issue for that: #2904573)
  • Add email notifications (a rough sketch of one approach follows below)
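As a rough sketch of the email notification idea (again assuming the Issue content type and field_issue_status field from this example; the module name, mail key and recipient address are hypothetical):

<?php

/**
 * Implements hook_ENTITY_TYPE_update() for node entities.
 *
 * Illustrative only: emails a triage address whenever the issue status
 * field changes value.
 */
function mymodule_node_update(\Drupal\node\NodeInterface $node) {
  if ($node->bundle() !== 'issue' || !$node->hasField('field_issue_status')) {
    return;
  }

  $old = $node->original->get('field_issue_status')->value;
  $new = $node->get('field_issue_status')->value;
  if ($old === $new) {
    return;
  }

  \Drupal::service('plugin.manager.mail')->mail(
    'mymodule',
    'issue_status_changed',
    'triage@example.com',
    $node->language()->getId(),
    ['node' => $node, 'old' => $old, 'new' => $new]
  );
}

/**
 * Implements hook_mail().
 */
function mymodule_mail($key, &$message, $params) {
  if ($key === 'issue_status_changed') {
    $message['subject'] = t('Issue @title changed state', ['@title' => $params['node']->label()]);
    $message['body'][] = t('Status changed from @old to @new.', ['@old' => $params['old'], '@new' => $params['new']]);
  }
}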

How would you use the new Workflows API?

Let me know in the comments!

Photo of Kim Pepper

Posted by Kim Pepper
Technical Director

Dated 29 November 2017


Nov 23 2017
Nov 23

As seen in the recent Uber hack, storing secrets such as API tokens in your project repository can leave your organisation vulnerable to data breaches and extortion. This tutorial demonstrates a simple and effective way to mitigate this kind of threat by leveraging Key module to store API tokens in remote key storage.

Even tech giants like Uber are bitten by poor secret management in their applications. The snippet below describes how storing AWS keys in their repository resulted in a data breach, affecting 57 million customers and drivers.

Here’s how the hack went down: Two attackers accessed a private GitHub coding site used by Uber software engineers and then used login credentials they obtained there to access data stored on an Amazon Web Services account that handled computing tasks for the company. From there, the hackers discovered an archive of rider and driver information. Later, they emailed Uber asking for money, according to the company.

Uber could have avoided this breach by storing their API keys in a secret management system. In this tutorial, I'll show you how to do exactly this using the Key module in conjunction with the Lockr key management service.

This guide leverages a brand-new feature of the Key module (as of 8.x-1.5) which allows overriding any configuration value with a secret. In this instance we will set up the MailChimp module using this secure config override capability.

Service Set-Up

Before proceeding with the Drupal config, you will need a few accounts:

  • Mailchimp offer a "Forever Free" plan.
  • Lockr offer your first key and 1,500 requests for free.

These third-party services provide us with a simple example. Other services are available.

Dependencies

There are a few modules you'll need to add to your codebase.

composer require \
  "drupal/key:^1.5" \
  "drupal/lockr" \
  "drupal/mailchimp"

Configuration

  1. Go to /admin/modules  and enable the MailChimp, Lockr and Key modules.
  2. Go to /admin/config/system/lockr
  3. Use this form to generate a TLS certificate that Lockr uses to authenticate your site. Fill out the form and submit.
    Lockr Create Auth Certificate
  4. Enter the email address you used for your Lockr account and click Sign up.
  5. You should now be re-prompted to log in - enter the email address and password for your Lockr account.
  6. In another tab, log into the MailChimp dashboard
    1. Go to the API settings page - https://us1.admin.mailchimp.com/account/api/
    2. Click Create A Key
    3. Note down this API key so we can configure it in Drupal in the next step.
      MailChimp API Dashboard
  7. In your Drupal tab, go to /admin/config/system/keys and click Add Key
  8. Create a new Key entity for your MailChimp token. The important values here are:
    1. Key provider - ensure you select Lockr
    2. Value - paste the API token you obtained from the MailChimp dashboard.
      Key Create MailChimp Token
  9. Now we need to set up the configuration overrides. Go to /admin/config/development/configuration/key-overrides and click Add Override
  10. Fill out this form; the important values here are:
    1. Configuration type: Simple configuration
    2. Configuration name: mailchimp.settings
    3. Configuration item: api_key
    4. Key: The name of the key you created in the previous step.
      Key Override Create

... and it is that simple.

Result

The purpose of this exercise is to ensure the API tokens for our external services are not saved in Drupal's database or code repository - so let's see what those look like now.

MailChimp Config Export - Before

If you configured MailChimp in the standard way, you'd see a config export similar to this. As you can see, the api_key value is in plaintext - anyone with access to your codebase would have full access to your MailChimp account.

api_key: 03ca2522dd6b117e92410745cd73e58c-us1
cron: false
batch_limit: 100
api_classname: Mailchimp\Mailchimp
test_mode: false

MailChimp Config Export - After

With the key overrides feature enabled, the api_key value in this file is now null.

api_key: null 
cron: false
batch_limit: 100
api_classname: Mailchimp\Mailchimp
test_mode: false

There are a few other relevant config export files - let's take a look at those.

Key Entity Export

This export is responsible for telling Drupal where Key module stored the API token. If you look at the key_provider and key_provider_settings values, you'll see that it is pointing to a value stored in Lockr. Still no API token in sight!

dependencies:
 module:
   - lockr
   - mailchimp
id: mailchimp_token
label: 'MailChimp Token'
description: 'API token used to authenticate to MailChimp email marketing platform.'
key_provider: lockr
key_provider_settings:
 encoded: aes-128-ctr-sha256$nHlAw2BcTCHVTGQ01kDe9psWgItkrZ55qY4xV36BbGo=$+xgMdEzk6lsDy21h9j….
key_input: text_field

Key Override Export

The final config export is where the Key entity is mapped to override MailChimp's configuration item. 

status: true
dependencies:
 config:
   - key.key.mailchimp_token
   - mailchimp.settings
id: mailchimp_api_token
label: 'MailChimp API Token'
config_type: system.simple
config_name: mailchimp.settings
config_item: api_key
key_id: mailchimp_token

Conclusion

Hopefully this tutorial shows you how accessible these security-hardening techniques have become. 

With this solution implemented, an attacker cannot take control of your MailChimp account simply by gaining access to your repository or a database dump. Also remember that this exact technique can be applied to any module which uses the Configuration API to store API tokens.
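The config override covers contributed modules, but the same Lockr-backed key can also be read from your own custom code. A minimal sketch, assuming the mailchimp_token key entity created above and the Key module's key repository service:

<?php

/**
 * Fetches a secret from the Key module at runtime.
 *
 * Illustrative only: 'mailchimp_token' is the key entity ID we created
 * earlier. The secret is fetched from Lockr on demand and never touches
 * exported config or the database.
 */
function mymodule_get_mailchimp_token() {
  $key = \Drupal::service('key.repository')->getKey('mailchimp_token');
  return $key ? $key->getKeyValue() : NULL;
}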

Why does this matter? Here are a few examples of ways popular Drupal modules could harm your organisation if their configs were exposed (tell me about your own worst-case scenarios in the comments!).

  • s3fs - An attacker could leak or delete all of the data stored in your bucket. They could also ramp up your AWS bill by storing or transferring terabytes of data.
  • SMTP - An attacker could use your own SMTP server against you to send customers phishing emails from a legitimate email address. They could also leak any emails the compromised account has access to.

What other Drupal modules could be made more secure in this way? Post your ideas in the comments!

Go forth, and build secure Drupal projects!

Photo of Nick Santamaria

Posted by Nick Santamaria
Systems Operations Developer

Dated 24 November 2017


Nov 21 2017
Nov 21

Need a way to mix fields from referenced entities with regular fields from managed display?

Then the Display Suite Chained Fields module might be for you.

So how do you go about using the module?

Step 1: Enable a display suite layout for the view mode

To use the chained fields functionality, you must enable a display suite layout for the view mode. Select a layout other than none and hit Save.

Enabling a layout

Step 2: Enable the entity reference fields you wish to chain

To keep the manage display list from being cluttered, you must manually enable the entity reference fields you wish to show chained fields from. For example, to show the author's picture, you might enable the 'Authored by' entity reference field, which points to the author. After you've enabled the required fields, press Save.

Enabling fields for chaining

Step 3: Configure the chained fields as required

Finally, just configure the chained fields as normal.

Configuring chained fields

That's it - let me know your thoughts in the comments or the issue queue.

Photo of Lee Rowlands

Posted by Lee Rowlands
Senior Drupal Developer

Dated 22 November 2017


Nov 21 2017
Nov 21

We recently open sourced our temporary environment builder, M8s. In this blog post we will demo everything you need to get started!

by Nick Schuch / 21 November 2017

Introduction

In this video we introduce you to M8s and the problem it solves.

[embedded content]

Provisioning a M8s cluster

Now that you are acquainted with the M8s project, it's time to get a cluster provisioned!

In this video we will set up a Google Kubernetes Engine cluster and deploy the M8s API components.

[embedded content]

Setting up CircleCI

Now that our M8s cluster is up and running, it's time to set up our pipeline to run a build.

In this video we will be configuring CircleCI to run the M8s CLI.

[embedded content]

Pushing a topic branch

It's time to put it all together!

In this video we will be pushing a topic branch to demonstrate how M8s interacts with a Pipeline.

[embedded content]

Finale

You made it to the finale! In this video we will be checking out the build environment and how a developer can access the Mailhog and Solr containers.

[embedded content]

Conclusion

To learn more about the M8s project, you can go and check out:

We welcome any and all feedback via Twitter and our Github Project issues page.

Tagged

m8s, Kubernetes, Drupal Development

Photo of Nick Schuch

Posted by Nick Schuch
Sys Ops Lead

Dated 21 November 2017


Nov 20 2017
Nov 20

In my recent talk at DrupalSouth Auckland 2017 I took a hard look at the hyperbole of Drupal supposedly powering over a million websites. Where does Drupal really sit in relation to other CMS platforms, both open source and proprietary? What trends are emerging that will impact Drupal's market share? The talk looked outside the Drupal bubble and took a high level view of its market potential and approaches independent firms can take to capitalise on Drupal's strengths and buffer against its potential weaknesses.

But, Drupal powers over a million websites!

One of the key statistics that Drupalers hold onto is that Drupal has powered over a million websites since mid-2014, when Drupal 7 was in the ascendant. However, since Drupal 8 was released in late 2015, Drupal's overall use has stalled at around 1.2m websites, as seen circled in red on the Drupal Core usage statistics graph below.

Drupal install graph

The main reason for this stall in growth was that Drupal 8 was a major architectural re-write, and migrating wasn't essential or even affordable for many Drupal 7 sites. Many clients considering major new projects held off on committing to Drupal 8 until there were more successful case studies in the wild, and didn't commission new Drupal 7 sites given that version was nearing a decade old. Anecdotally, 2016 was a tough year for many Drupal firms as they grappled with this pause in adoption.

Of course, Drupal 8 is now a well-proven platform and is experiencing steady uptake, as circled in green on the usage graph above. This uptake corresponds with a downtick in Drupal 7 usage, but also indicates a softening of total Drupal usage. If we extrapolate these trend lines in a linear fashion, we can see that Drupal 8 might surpass Drupal 7 usage around 2023.

Drupal usage extrapolation

Of course, technology adoption doesn't move in a straight line! Disruptive technologies emerge that rapidly change the playing field in ways that often can't be envisaged. The example that springs to mind is Nokia, whose market share was still growing when the iPhone 4 was released in 2010. By the time the iPhone 4s was released in 2011, Nokia's sales volumes had almost halved, leading to Microsoft's catastrophic purchase of the handset division in 2013 and subsequent re-sale for 5% of the purchase value in 2016. Oops!

Builtwith stats

Despite this downward trend in overall Drupal usage, we can take comfort that its use on larger scale sites is growing, powering 5.7% of the Top 10,000 websites according to Builtwith.com. However, its market share of the Top 100,000 (4.3%) and Top Million (3%) websites is waning, indicating that other CMS are gaining ground with smaller sites. It's also worth noting that Builtwith only counts ~680,000 Drupal websites, indicating that the other ~500,000 Drupal.org is detecting are likely to be development and staging sites.

So, where are these other sites moving to when they're choosing a new CMS? 

Wordpress usage

Looking at the stats from W3Techs, it's clear to see that Wordpress accounts for almost all of the CMS growth, now sitting at around 30% of total market share.

Wordpress has been able to achieve this dominance by being a fantastic CMS for novice developers and smaller web agencies to build clients' websites with. This is reinforced by Wordpress having an exceptional editor experience and a hugely popular SAAS platform at Wordpress.com.

Drupal's place in the CMS market

The challenge Wordpress poses to other open-source CMS platforms, like Joomla, Typo3 and Plone, all with under 1% market share and falling, is that their development communities are likely to direct their efforts to other platforms. Drupal is able to hedge against this threat by having a large and highly engaged community around Drupal 8, but it's now abundantly clear that Drupal can't compete as a platform for building the smaller brochure-ware style sites that Wordpress and SAAS CMS like Squarespace are dominating. We're also seeing SAAS platforms like Nationbuilder eat significantly into Drupal's previously strong share of the non-profit sector.

With all the hype around Headless or Decoupled CMS, Drupal 8 is well positioned to play a role as the backend for React or Angular Javascript front-ends. Competitors in this space are SAAS platforms like Contentful and Directus, with proprietary platforms like Kentico pivoting as a native cloud CMS service designed to power decoupled front-ends.

We often talk of Drupal as a CMS Framework, where it competes against frameworks like Ruby on Rails, .NET and Django to build rich web based applications. Drupal 8 is still well placed to serve this sector if the web applications are also relying on large scale content and user management features.

Which brings us to the Enterprise CMS sector, where Drupal competes head to head with proprietary platforms like Adobe Experience Manager, Sitecore and legacy products from Opentext, IBM and Oracle. The good news is that Drupal holds its own in this sector and has gained very strong market share with Government, Higher Education, Media and "Challenger" Enterprise clients.

This "Comfort zone" for Drupal usage is characterised by clients building large scale platforms with huge volumes of content and users, high scalability and integration with myriad third party products. Operationally, these clients often have well established internal web teams and varying degrees of self reliance. They're often using Agile delivery methods and place high value on speed to market and the cost savings associated with open-source software.

Where Drupal is gaining a competitive edge since the release of Drupal 8 is against the large proprietary platforms like Adobe Experience Manager and Sitecore. These companies market a platform of complementary products in a unified stack to their clients through long standing partnerships with major global digital agencies and system integrators. It's no surprise then that Acquia markets their own platform in a similar way to this sector where Drupal serves as the CMS component, complemented by subscription-based tools for content personalisation, customer segmentation and cloud based managed hosting. Acquia have actively courted global digital media agencies with this offering through global partnerships to give Drupal a toe hold in this sector.

Gartner Magic Quadrant CMS

This has meant Acquia has made significant headway into larger Enterprise clients through efforts like being recognised as a "Leader" in the Gartner Magic Quadrant for CMS, lending Drupal itself some profile and legitimacy as a result. This has driven Enterprise CIOs, CTOs and CMOs to push their vendors to offer Drupal services, who have looked to smaller Drupal firms to provide expertise where required. This is beneficial to independent Drupal services firms in the short term, but the large digital agencies will quickly internalise these skills if they see a long term market for Drupal with their global clients.

As one of those independent Drupal firms, PreviousNext have staked a bet that not all Enterprise customers will want to move to a monolithic platform where all components are provided by a single vendor's products. We're seeing sophisticated customers wanting to use Drupal 8 as the unifying hub for a range of best-of-breed SAAS platforms and cloud services. 

Drupal 8 hub

This approach means that Enterprise customers can take advantage of the latest, greatest SAAS platforms whilst retaining control and consistency of their core CMS. It also allows for a high degree of flexibility to rapidly adapt to market changes. 

What does this all mean for Drupal 8?

The outcome of our research and analysis has led to a few key conclusions about what the future looks like for Drupal 8:

  • Drupal's overall market share will steadily fall as smaller sites move to SAAS CMS and self-managed Wordpress installs.
  • The "comfort zone" of Government, Media, Higher Education and "Challenger" Enterprise clients will grow as many of these clients upgrade or switch to Drupal 8 from Drupal 7 or proprietary platforms.
  • Drupal will gain traction in the larger Enterprise as the global digital agencies and system integrators adopt Drupal 8 as a direct alternative to proprietary CMS products. 
  • Independent Drupal services firms have a good opportunity to capitalise on these trends through partnerships with larger global agencies and specialisation in technologies that complement Drupal 8 as a CMS.
  • A culture of code contribution needs to grow within the larger clients and agencies moving to Drupal to ensure the burden of maintaining Drupal's development isn't shouldered by smaller independent firms and individual developers. 

Despite the fact that we've probably already passed "Peak Drupal", we're firm believers that Drupal 8 is the right tool for large scale clients, and that the community has the cohesion to adapt to these existential challenges!


Nov 16 2017
Nov 16

At PNX, style guide driven development is our bag. It’s what we love: building a living document that provides awesome reference for all our front end components. And Drupal 8, with its use of Twig, complements this methodology perfectly. The ability to create a single component, and then embed that component and its markup throughout a Drupal site in a variety of different ways without having to use any tricks or hacks is a thing of beauty.

Create a component

For this example we are going to use the much loved collapsible/accordion element. It’s a good example of a rich component because it uses CSS, JS, and Twig to provide an element that’s going to be used everywhere throughout a website.

To summarise, the component is made up of the following files:

collapsible.scss
collapsible.widget.js
collapsible.drupal.js
collapsible.twig
collapsible.svg

The .scss file will end up compiling to a .css file, but we will be using SASS here because it’s fun. The widget.js file is a jQuery UI Widget Factory plugin that gives us some niceties - like state. The drupal.js file is a wrapper that adds our accordion widget as a drupal.behavior. The svg file provides some pretty graphics, and finally the twig file is where the magic starts.

Let’s take a look at the twig file:

{{ attach_library('pnx_project_theme/collapsible') }}
<section class="js-collapsible collapsible {{ modifier_class }}">
  <h4 class="collapsible__title">
    {% block title %}
      Collapsible
    {% endblock %}
  </h4>
  <div class="collapsible__content">
    {% block content %}
      <p>Curabitur blandit tempus porttitor. Cum sociis natoque penatibus et
        magnis dis parturient montes, nascetur ridiculus mus. Morbi leo risus,
        porta ac consectetur ac, vestibulum at eros. Praesent commodo cursus
        magna, vel scelerisque nisl consectetur et. Fusce dapibus, tellus ac
        cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo
        sit amet risus.</p>
    {% endblock %}
  </div>
</section>

This is a standard-ish BEM-based component. It uses a js-* class to attach the widget functionality. We also have a {{ modifier_class }} variable that can be used by kss-node to alter the default appearance of the collapsible (more on this later). There are two elements in this component: title and content. Each is expressed inside a twig block, which means we can take this twig file and embed it elsewhere. Because the component is structured this way, when it's rendered in its default state by KSS we get some default content, plus the ability to show its different appearances/styles using modifier_class.

Our twig file also uses the custom Drupal attach_library function, which will bring in our component's CSS and JS from the following theme.libraries.yml entry:

collapsible:
  css:
    component:
      src/components/collapsible/collapsible.css: {}
  js:
    src/components/collapsible/collapsible.widget.js : {}
    src/components/collapsible/collapsible.drupal.js : {}
  dependencies:
    - core/jquery
    - core/drupal
    - core/jquery.once
    - core/jquery.ui
    - core/jquery.ui.widget

This is a pretty meaty component, so it's got some hefty JavaScript requirements. Not a problem in the end, as it's all going to get minified and aggregated by Drupal core's library system.

And there we have it - a rich JavaScript component. It's the building block for all the cool stuff we are about to do.

Use it in a field template override

As it stands we can throw this component as-is into KSS, which is nice (although we must add our CSS and JS to KSS manually - attach_library() won't help us there, sadly - yet), but we want Drupal to take advantage of our twig file. This is where twig's embed comes in. Embed in twig is a mixture of the often-used include and the occasionally-used extends. It's a super powerful piece of kit that lets us do all the things.

Well, these things anyway: include our twig template's contents, add variables to it, and add HTML to it.

Because this is an accordion, it’s quite likely we’ll want some field data inside it. The simplest way to get this happening is with a clunky old field template override. As an example I’ll use field--body.html.twig:

{% for item in items %}
  {% embed '@pnx_project_theme/components/collapsible/collapsible.twig' %}
    {% block title %}
      {{ label }}
    {% endblock %}
    {% block content %}
      {{ item.content }}
    {% endblock %}
  {% endembed %}
{% endfor %}

Here you can see the crux of what we are trying to achieve. The collapsible markup is specified in one place only, and other templates can include that base markup and then insert the content they need into the twig blocks. The beauty of this is that any time this field is rendered on the page, all the markup, CSS and JS will be included with it, and it all lives in our components directory. No longer are meaty pieces of markup left inside Drupal template directories - our template overrides are now embedding much richer components.

There is a trick above though, and it's the glue that brings this together. See how we have a namespace in the embed path - all Drupal themes/modules get a twig namespace automatically, which is just @your_module_name or @your_theme_name - however it points only to the theme or module's templates directory. Because we are doing style guide driven development and have given so much thought to creating a rich, self-contained component, our twig template lives in our components directory instead, so we need a custom twig namespace to point there. To do that, we use John Albin's Component Libraries module. It lets us add a few lines to our theme.info.yml file so our theme's namespace can see our component templates:

component-libraries:
  pnx_project_theme:
    paths:
      - src
      - templates

Now anything in /src or /templates inside our theme can be included with our namespace from any twig template in Drupal.

Use it in a field formatter

Now let's get real, because field template overrides are not the right way to do things. We were talking about making things DRY, weren't we?

Enter field formatters. At the simple end of this spectrum our formatter needs an accompanying hook_theme entry so the formatter can render to a twig template. We will need a module to give the field formatter somewhere to live.

Set up your module file structure like so:

src/Plugin/Field/FieldFormatter/CollapsibleFormatter.php
templates/collapsible-formatter.html.twig
pnx_project_module.module
pnx_project_module.info.yml

Your formatter lives inside the src directory and looks like this:

<?php

namespace Drupal\pnx_project_module\Plugin\Field\FieldFormatter;

use Drupal\Core\Field\FieldItemListInterface;
use Drupal\Core\Field\FormatterBase;
use Drupal\Core\Form\FormStateInterface;

/**
 * A field formatter for trimming and wrapping text.
 *
 * @FieldFormatter(
 *   id = "collapsible_formatter",
 *   label = @Translation("Collapsible"),
 *   field_types = {
 *     "text_long",
 *     "text_with_summary",
 *   }
 * )
 */
class CollapsibleFormatter extends FormatterBase {

  /**
   * {@inheritdoc}
   */
  public function viewElements(FieldItemListInterface $items, $langcode = NULL) {
    $elements = [];

    foreach ($items as $delta => $item) {
      $elements[$delta] = [
        '#theme' => 'collapsible_formatter',
        '#title' => $items->getFieldDefinition()->getLabel(),
        '#content' => $item->value,
        '#style' => NULL,
      ];
    }

    return $elements;
  }

}

And the hook_theme function lives inside the .module file:

<?php

/**
 * @file
 * Main module functions.
 */

/**
 * Implements hook_theme().
 */
function pnx_project_module_theme($existing, $type, $theme, $path) {
  return [
    'collapsible_formatter' => [
      'variables' => [
        'title' => NULL,
        'content' => NULL,
        'style' => NULL,
      ],
    ],
  ];
}

Drupal magic is going to look for templates/collapsible-formatter.html.twig in our module directory automatically now. Our hook_theme template is going to end up looking pretty similar to our field template:

{% embed '@pnx_project_theme/components/collapsible/collapsible.twig' with { modifier_class: style } %}
  {% block title %}
    {{ title }}
  {% endblock %}
  {% block content %}
    {{ content }}
  {% endblock %}
{% endembed %}

Now jump into the field display config of a text_long field and you'll be able to select the Collapsible formatter. It will render our component markup combined with the field data perfectly, whilst attaching the necessary CSS/JS.

Add settings to the field formatter

Let's take it a bit further. We are missing some configurability here. Our component has a modifier_class with a mini style (a cut down smaller version of the full accordion). You'll notice in the twig example above, we are using the with notation which works the same way for embed as it does for include to allow us to send an array of variables through to the parent template. In addition our hook_theme function has a style variable it can send through from the field formatter. Using field formatter settings we can make our field formatter far more useful to the site builders that are going to use it. Let's look at the full field formatter class after we add settings:

class CollapsibleFormatter extends FormatterBase {

  /**
   * {@inheritdoc}
   */
  public function viewElements(FieldItemListInterface $items, $langcode = NULL) {
    $elements = [];

    foreach ($items as $delta => $item) {
      $elements[$delta] = [
        '#theme' => 'collapsible_formatter',
        '#title' => !empty($this->getSetting('label')) ? $this->getSetting('label') : $items->getFieldDefinition()->getLabel(),
        '#content' => $item->value,
        '#style' => $this->getSetting('style'),
      ];
    }

    return $elements;
  }

  /**
   * {@inheritdoc}
   */
  public function settingsSummary() {
    $summary = [];
    if ($label = $this->getSetting('label')) {
      $summary[] = 'Label: ' . $label;
    }
    else {
      $summary[] = 'Label: Using field label';
    }
    if (empty($this->getSetting('style'))) {
      $summary[] = 'Style: Normal';
    }
    elseif ($this->getSetting('style') === 'collapsible--mini') {
      $summary[] = 'Style: Mini';
    }
    return $summary;
  }

  /**
   * {@inheritdoc}
   */
  public function settingsForm(array $form, FormStateInterface $form_state) {
    $form['label'] = [
      '#title' => $this->t('Label'),
      '#type' => 'textfield',
      '#default_value' => $this->getSetting('label'),
      '#description' => t('Customise the label text, or use the field label if left empty.'),
    ];
    $form['style'] = [
      '#title' => t('Style'),
      '#type' => 'select',
      '#options' => [
        '' => t('Normal'),
        'collapsible--mini' => t('Mini'),
      ],
      '#description' => t('See <a href="https://www.previousnext.com.au/styleguide/section-6.html#kssref-6-1" target="_blank">Styleguide section 6.1</a> for a preview of styles.'),
      '#default_value' => $this->getSetting('style'),
    ];
    return $form;
  }

  /**
   * {@inheritdoc}
   */
  public static function defaultSettings() {
    return [
      'label' => '',
      'style' => '',
    ];
  }

}

There are a few niceties there: it allows us to set a custom label (for the whole field), it automatically assigns the correct modifier_class, it links to the correct section of the style guide in the settings field description, and it adds a settings summary so site builders can see the current settings at a glance. These are all patterns you should repeat.

Let's sum up

We've created a rich interactive BEM component with its own template. The component has multiple styles and displays an interactive demo of itself using kss-node. We've combined its assets into a Drupal library and made the template - which lives inside the style guide's component src folder - accessible to all of Drupal via the Component Libraries module. We've built a field formatter that allows us to configure the component's appearance/style. Without having to replicate any HTML anywhere.

The component directory itself within the style guide will always be the canonical source for every version of the component that is rendered around our site.

Photo of Jack Taranto

Posted by Jack Taranto
Front end developer

Dated 16 November 2017


Nov 06 2017
Nov 06

It's extremely important to have default values that you can rely on for local Drupal development; one of those is "localhost". In this blog post we will explore what is required to make our local development environment appear as "localhost".

In our journey migrating to Docker for local dev we found ourselves running into issues with "discovery" of services, e.g. Solr/MySQL/Memcache.

In our first iteration we used linking to allow our services to talk to each other. Some downsides to this were:

  • Tricky to compose an advanced relationship; let's use PHP and PhantomJS as an example:
    • PHP needs to know where PhantomJS is running
    • PhantomJS needs to know the domain of the site that you are running locally
    • Wouldn't it be great if we could just use "localhost" for both of these configurations?
  • DNS entries are only available within the containers themselves, so you cannot run utilities outside of the containers, e.g. a MySQL admin tool

With this in mind, we hatched an idea...

What if we could just use "localhost" for all interactions between all the containers?

  • If we wanted to access our local projects Apache, http://localhost (inside and outside of container)
  • If we wanted to access our local projects Mailhog, http://localhost:8025 (inside and outside of container)
  • If we wanted to access our local projects Solr, http://localhost:8983 (inside and outside of container)

All this can be achieved with Linux Network Namespaces in Docker Compose.

Network Namespaces

Linux Network Namespaces allow us to isolate processes into their own "network stacks".

By default, the following happens when a container gets created in Docker:

  • Its own Network Namespace is created
  • A new network interface is added
  • An IP is provided on the default bridge network

However, if a container is created and told to share the same Network Namespace with an existing container, they will both be able to interface with each other on "localhost" or "127.0.0.1".

Here are working examples for both OSX and Linux.

OSX

  • MySQL and Mail share the PHP container's Network Namespace, giving us "localhost" for "container to container" communication.
  • Port mapping for host to container "localhost"
version: "3"

services:
  php:
    image: previousnext/php:7.1-dev
    # You will notice that we are forwarding ports which do not belong to PHP.
    # We have to declare them here because these "sidecar" services are sharing
    # THIS containers network stack.
    ports:
      - "80:80"
      - "3306:3306"
      - "8025:8025"
    volumes:
      - .:/data:cached

  db:
    image: mariadb
    network_mode: service:php

  mail:
    image: mailhog/mailhog
    network_mode: service:php

Linux

All containers share the Network Namespace of the user's host; nothing else is required.

version: "3"

services:
  php:
    image: previousnext/php:7.1-dev
    # This makes the container run on the same network stack as your
    # workstation. Meaning that you can interact on "localhost".
    network_mode: host
    volumes:
      - .:/data

  db:
    image: mariadb
    network_mode: host

  mail:
    image: mailhog/mailhog
    network_mode: host

Trade offs

To facilitate this approach we had to make some trade offs:

  • We only run 1 project at a time. Only a single process can bind to port 80, 8983 etc.
  • We split the Docker Compose configuration into 2 separate files, making it simple for each OS to have its own approach.

Bash aliases

Since we split out our Docker Compose file to be "per OS" we wanted to make it simple for developers to use these files.

After a couple of internal developers meetings, we came up with some bash aliases that developers only have to set up once.

# If you are on a Mac.
alias dc='docker-compose -f docker-compose.osx.yml'

# If you are running Linux.
alias dc='docker-compose -f docker-compose.linux.yml'

A developer can then run all the usual Docker Compose commands with the shorthand dc command, e.g.

dc up -d

This also keeps the command docker-compose available if a developer is using an external project.

Simple configuration

The following solution has also provided us with a consistent configuration fallback for local development.

We leverage this in multiple places in our settings.php; here is one example:

$databases['default']['default']['host'] = getenv("DB_HOST") ?: '127.0.0.1';

  • Dev / Stg / Prod environments set the DB_HOST environment variable
  • Local is always the fallback (127.0.0.1)
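The same pattern works for any other service the containers expose on localhost. As a further, hypothetical example, assuming the Memcache contrib module and a memcached sidecar container sharing the PHP container's network stack:

// settings.php: fall back to localhost when no environment variable is set.
// Dev / Stg / Prod set MEMCACHE_HOST; locally the memcached sidecar shares
// the PHP container's network stack, so 127.0.0.1 just works.
$memcache_host = getenv("MEMCACHE_HOST") ?: '127.0.0.1';
$settings['memcache']['servers'] = [$memcache_host . ':11211' => 'default'];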

Conclusion

While this approach required a deeper knowledge of the Linux kernel, it has yielded a much simpler solution for developers.

How have you managed Docker local dev networking? Let me know in the comments below.

Photo of Nick Schuch

Posted by Nick Schuch
Sys Ops Lead

Dated 7 November 2017



About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
