Jul 26 2018

Intro

In this post, I’m going to run through how I set up visual regression testing on sites. Visual regression testing is essentially the act of taking a screenshot of a web page (whether the whole page or just a specific element) and comparing that against an existing screenshot of the same page to see if there are any differences.

There’s nothing worse than adding a new component, tweaking styles, or pushing a config update, only to have the client tell you two months later that some other part of the site is now broken, and you discover it’s because of the change that you pushed… now it’s been two months, and reverting that change has significant implications.

That’s the worst. Literally the worst.

All kinds of testing can help improve the stability and integrity of a site. There’s Functional, Unit, Integration, Stress, Performance, Usability, and Regression, just to name a few. What’s most important to you will change depending on the project requirements, but in my experience, Functional and Regression are the most common, and in my opinion are a good baseline if you don’t have the capacity to write all the tests.

If you’re reading this, you probably fall into one of two categories:

  1. You’re already familiar with visual regression testing and just want to know how to do it.
  2. You’re just trying to get info on why visual regression testing is important and how it can help your project.

In either case, it makes the most sense to dive right in, so let’s do it.

Tools

I’m going to be using WebdriverIO to do the heavy lifting. According to the website:

WebdriverIO is an open source testing utility for nodejs. It makes it possible to write super easy selenium tests with Javascript in your favorite BDD or TDD test framework.

It basically sends requests to a Selenium server via the WebDriver Protocol and handles its response. These requests are wrapped in useful commands and can be used to test several aspects of your site in an automated way.

I’m also going to run my tests on BrowserStack so that I can test IE/Edge without having to install a VM or anything like that on my Mac.

Process

Let’s get everything set up. I’m going to start with a Drupal 8 site that I have running locally. I’ve already installed that, along with a custom theme with Pattern Lab integration based on Emulsify.

We’re going to install the visual regression tools with npm.

If you already have a project running that uses npm, you can skip this step. But, since this is a brand new project, I don’t have anything using npm, so I’ll create an initial package.json file using npm init.

  • npm init -y
    • Update the name, description, etc. and remove anything you don’t need.
    • My updated file looks like this:
{ "name": "visreg", "version": "1.0.0", "description": "Website with visual regression testing", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" } }   "name":"visreg",  "version":"1.0.0",  "description":"Website with visual regression testing",  "scripts":{    "test":"echo \"Error: no test specified\" && exit 1"

Now, we’ll install the npm packages we’ll use for visual regression testing.

  • npm install --save-dev webdriverio chai wdio-mocha-framework wdio-browserstack-service wdio-visual-regression-service node-notifier
    • This will install:
      • WebdriverIO: The main tool we’ll use
      • Chai syntax support: “Chai is an assertion library, similar to Node’s built-in assert. It makes testing much easier by giving you lots of assertions you can run against your code.”
      • Mocha syntax support: “Mocha is a feature-rich JavaScript test framework running on Node.js and in the browser, making asynchronous testing simple and fun.”
      • The BrowserStack wdio package: so that we can run our tests against BrowserStack instead of locally (where browser/OS differences across developers can cause false-negative failures)
      • Visual regression service: this is what provides the screenshot capturing and comparison functionality
      • Node notifier: this is totally optional, but it supports native notifications for Mac, Linux, and Windows. We’ll use these to be notified when a test fails.

Now that all of the tools are in place, we need to configure our visual regression preferences.

You can run the configuration wizard by typing ./node_modules/webdriverio/bin/wdio, but I’ve created a git repository with not only the webdriver config file but an entire set of files that scaffold a complete project. You can get them here.

Follow the instructions in the README of that repo to install them in your project.

These files will get you set up with a fairly sophisticated, but completely manageable visual regression testing configuration. There are some tweaks you’ll need to make to fit your project that are outlined in the README and the individual markdown files, but I’ll run through what each of the files does at a high level to acquaint you with each.

  • .gitignore
    • The lines in this file should be added to your existing .gitignore file. It’ll make sure your diffs and latest images are not committed to the repo, but allow your baselines to be committed so that everyone is comparing against the same baseline images.
  • VISREG-README.md
    • This is an example readme you can include to instruct other/future developers on how to run visual regression tests once you have it set up.
  • package.json
    • This just has the example test scripts: one for running the full suite of tests, and one for running a quick test, handy for active development. Add these to your existing package.json.
  • wdio.conf.js
    • This is the main configuration file for WebdriverIO and your visual regression tests.
    • You must update this file based on the documentation in wdio.conf.md
  • wdio.conf.quick.js
    • This is a file you can use to run a quick test (e.g. against a single browser instead of the full suite defined in the main config file). It’s useful when you’re doing something like refactoring an existing component, and/or want to make sure changes in one place don’t affect other sections of the site.
  • tests/config/globalHides.js
    • This file defines elements that should be hidden in ALL screenshots by default. Individual tests can use this, or define their own set of elements to hide. Update these to fit your actual needs.
  • tests/config/viewports.js
    • This file defines what viewports your tests should run against by default. Individual tests can use these, or define their own set of viewports to test against. Update these to the screen sizes you want to check. (See the sketch of both config files after this list.)
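
To make those last two config files concrete, here’s a minimal sketch of what each might contain. The selectors and viewport sizes are placeholders, but the exported shape matches how the tests below consume them (visreg.hide and visreg.remove):

// tests/config/globalHides.js (a minimal sketch; the selectors are placeholders)
// Elements matched by `hide` are given visibility: hidden before the screenshot;
// elements matched by `remove` are taken out of the layout entirely.
module.exports = {
  hide: ['.toolbar-bar'],    // e.g. Drupal's admin toolbar
  remove: ['.ad-banner'],    // e.g. anything that changes on every page load
};

// tests/config/viewports.js (again a sketch; use the sizes you care about)
module.exports = [
  {width: 320, height: 480},
  {width: 768, height: 1024},
  {width: 1280, height: 800},
];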

Running the Test Suite

I’ll copy the example homepage test from the example-tests.md file into a new file /web/themes/custom/visual_regression_testing/components/_patterns/05-pages/home/home.test.js. (I’m putting it here because my wdio.conf.js file is looking for test files in the _patterns directory, and I like to keep test files next to the file they’re testing.)

The only thing you’ll need to update in this file is the relative path to the globalHides.js file. It should be relative from the current file. So, mine will be:

const visreg = require('../../../../../../../../tests/config/globalHides.js');

With that done, I can simply run npm test and the tests will run on BrowserStack against the three OS/browser configurations I’ve specified. While they’re running, we can head over to https://automate.browserstack.com/ to watch the tests run against Chrome, Firefox, and IE 11.
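
For reference, that OS/browser matrix lives in the capabilities array of wdio.conf.js. Here’s a trimmed sketch; the exact capability keys and credential handling are defined by the scaffold repo and BrowserStack’s capability docs:

// wdio.conf.js (excerpt): a trimmed sketch, not the full scaffold file.
exports.config = {
  // BrowserStack credentials, read from the environment so they stay out of the repo.
  user: process.env.BROWSERSTACK_USERNAME,
  key: process.env.BROWSERSTACK_ACCESS_KEY,
  // One entry per OS/browser combination; every test runs once per entry.
  capabilities: [
    {browserName: 'chrome'},
    {browserName: 'firefox'},
    {browserName: 'internet explorer', browser_version: '11.0'},
  ],
  // ...specs, services, framework, and visual regression settings omitted.
};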

Once tests are complete, we can view the screenshots in the /tests/screenshots directory. Right now, the baseline shots and the latest shots will be identical because we’ve only run the test once, and the first time you run a test, it creates the baseline from whatever it sees. Future tests will compare the most recent “latest” shot to the existing baseline, and will only update/create images in the latest directory.

At this point, I’ll commit the baselines to the git repo so that they can be shared around the team, and used as baselines by everyone running visual regression tests.

If I run npm test again, the tests will all pass because I haven’t changed anything. So I’ll make a small change to the button background color, one that might not be picked up by a human eye but will cause a regression that our tests will pick up with no problem.

In the _buttons.scss file, I’m going to change the default button background color from $black (#000) to $gray-darker (#333). I’ll run the style script to update the compiled css and then clear the site cache to make sure the change is implemented. (When actively developing, I suggest disabling cache and keeping the watch task running. It just makes things easier and more efficient.)

This time all the tests fail, and if we look at the images in the diff folder, we can clearly see that the “search” button is different as indicated by the bright pink/purple coloring.

If I open up one of the “baseline” images, and the associated “latest” image, I can view them side-by-side, or toggle back and forth. The change is so subtle that a human eye might not have noticed the difference, but the computer easily identifies a regression. This shows how useful visual regression testing can be!

Let’s pretend this is actually a desired change. The original component was created before the color was finalized, black was used as a temporary color, and now we want to capture the update as the official baseline. Simply move the “latest” image into the “baselines” folder, replacing the old baseline, and commit that to your repo. Easy peasy.

Running an Individual Test

If you’re creating a new component, or you’ve run the suite and found a regression in a single image, it’s useful to be able to run just one test instead of the entire suite. This is especially true once you have a large suite of test files that cover dozens of aspects of your site. Let’s take a look at how this is done.

I’ll create a new test in the organisms folder of my theme at /search/search.test.js. There’s an example of an element test in the example-tests.md file, but I’m going to do a much more basic test, so I’ll actually start out by copying the homepage test and then modify that.

The first thing I’ll change is the describe section. This is used to group and name the screenshots, so I’ll update it to make sense for this test. I’ll just replace “Home Page” with “Search Block”.

Then, the only other thing I’m going to change is what is to be captured. I don’t want the entire page, in this case. I just want the search block. So, I’ll update checkDocument (used for full-page screenshots) to checkElement (used for single element shots). Then, I need to tell it what element to capture. This can be any css selector, like an id or a class. I’ll just inspect the element I want to capture, and I know that this is the only element with the search-block-form class, so I’ll just use that.

I’ll also remove the timeout: since we’re just taking a screenshot of a single element, we don’t need to worry about the page taking longer to load than the default of 60 seconds. This really wasn’t necessary on the page test either, but whatever.

My final test file looks like this:

const visreg = require('../../../../../../../../tests/config/globalHides.js');

describe('Search Block', function () {
  it('should look good', function () {
    browser
      .url('./')
      .checkElement('.search-block-form', {hide: visreg.hide, remove: visreg.remove})
      .forEach((item) => {
        expect(item.isWithinMisMatchTolerance).to.be.true;
      });
  });
});

With that in place, this test will run when I use npm test because the config globs for every file that ends in .test.js anywhere in the _patterns directory. The problem is that this also runs the homepage test. If I just want to update the baselines of a single test, or I’m actively developing a component and don’t want to run the entire suite every time I make a locally scoped change, I want to be able to run just the relevant test so that I don’t waste time waiting for all of the irrelevant tests to pass.

We can do that by passing the --spec flag.

I’ll commit the new test file and baselines before I continue.

Now I’ll re-run just the search test, without the homepage test.

npm test -- --spec web/themes/custom/visual_regression_testing/components/_patterns/03-organisms/search/search.test.js

We have to add the first set of -- because we’re using custom npm scripts to make this work. Basically, it passes anything that follows directly to the custom script (in our case test is a custom script that calls ./node_modules/webdriverio/bin/wdio). More info on the run-script documentation page.

If I scroll up a bit, you’ll see that when I ran npm test there were six passing tests: one run per browser for each test file. We have two tests, and we’re checking against three browsers, so that’s a total of six test runs.

This time, we have three passing tests because we’re only running one test against three browsers. That cut our test run time by more than half (from 106 seconds to 46 seconds). If you’re actively developing or refactoring something that already has test coverage, even that can seem like an eternity if you’re running it every few minutes. So let’s take this one step further and run a single test against a single browser. That’s where the wdio.conf.quick.js file comes into play.

Running Test Against a Subset of Browsers

The wdio.conf.quick.js file will, by default, run test(s) against only Chrome. You can, of course, change this to whatever you want (for example if you’re only having an issue in a specific version of IE, you could set that here), but I’m just going to leave it alone and show you how to use it.

You can use this to run the entire suite of tests or just a single test. First, I’ll show you how to run the entire suite against only the browser defined here, then I’ll show you how to run a single test against this browser.

In the package.json file, you’ll see the test:quick script. You could pass the config file directly to the first script by typing npm test -- wdio.conf.quick.js, but that’s a lot more typing than npm run test:quick, and you (as well as the rest of your team) would have to remember the file name. Capturing the file name in a second custom script simplifies things.
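
Assuming the script names used throughout this post, the relevant portion of package.json looks something like this (the exact commands live in the scaffold repo):

{
  "scripts": {
    "test": "./node_modules/webdriverio/bin/wdio wdio.conf.js",
    "test:quick": "./node_modules/webdriverio/bin/wdio wdio.conf.quick.js"
  }
}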

When I run npm run test:quick, you’ll see that two tests were run: our two tests against a single browser, which simplifies things quite a bit. And you can see it ran in only 31 seconds. That’s definitely better than the 100 seconds the full test suite takes.

Let’s go ahead and combine this with the technique for running a single test to cut that time down even further.

npm run test:quick -- --spec web/themes/custom/visual_regression_testing/components/_patterns/03-organisms/search/search.test.js

This time you’ll see that it only ran one test against one browser and took 28 seconds. There’s actually not a huge difference between this and the last run because we can run three tests in parallel. And since we only have two tests, we’re not hitting the queue which would add significantly to the entire test suite run time. If we had two dozen tests, and each ran against three browsers, that’s a lot of queue time, whereas even running the entire suite against one browser would be a significant savings. And obviously, one test against one browser will be faster than the full suite of tests and browsers.

So this is super useful for active development of a specific component or element that has issues in one browser, as well as when you’re refactoring code to make it more performant and want to make sure your changes don’t break anything significant (or if they do, to alert you sooner rather than later). Once you’re done with your work, I’d still recommend running the full suite to make sure your changes didn’t inadvertently affect another random part of the site.

So, those are the basics of how to set up and run visual regression tests. In the next post, I’ll dive into our philosophy of what we test, when we test, and how it fits into our everyday development workflow.

Feb 01 2018

Paragraphs is a powerful Drupal module that gives editors more flexibility in how they design and lay out the content of their pages. However, Paragraphs are special in that they make no sense without a host entity: if we talk about Paragraphs, it goes without saying that they are to be attached to other entities.
In Drupal 8, individual migrations are built around an entity type. That means we implement a single migration for each entity type. Sometimes we draw relationships between the element being imported and an already imported one of a different type, but we never handle the migration of both simultaneously.
Migrating Paragraphs needs to be done in at least two steps: 1) migrating entities of type Paragraph, and 2) migrating entities referencing imported Paragraph entities.

Migration of Paragraph entities

You can migrate Paragraph entities in much the same way you migrate every other entity type into Drupal 8. However, a very important caveat is making sure to use the right destination plugin, provided by the Entity Reference Revisions module:

destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: paragraph_type

This is critical because you might be tempted to use something more common like entity:paragraph, which would make sense given that Paragraphs are entities. However, you didn’t configure your Paragraph reference field as a conventional Entity Reference field, but as an Entity reference revisions field, so you need to use the matching plugin.

An example of the core of a migration of Paragraph entities:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls: 'feed.url/endpoint'
  ids:
    id:
      type: integer
  item_selector: '/elements'
  fields:
    -
      name: id
      label: Id
      selector: /element_id
    -
      name: content
      label: Content
      selector: /element_content
process:
  field_paragraph_type_content/value: content
destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: paragraph_type
migration_dependencies: { }

To give some context, this assumes the feed being consumed has a root level with an elements array filled with content arrays with properties like element_id and element_content, and we want to convert those content arrays into Paragraphs of type paragraph_type in Drupal, with the field_paragraph_type_content field storing the text that came from the element_content property.
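
In other words, the migration assumes the endpoint returns JSON shaped roughly like this (a hypothetical feed, invented to match the selectors above):

{
  "elements": [
    { "element_id": 1, "element_content": "Text for the first paragraph." },
    { "element_id": 2, "element_content": "Text for the second paragraph." }
  ]
}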

Migration of the host entity type

Having imported the Paragraph entities already, we then need to import the host entities, attaching the appropriate Paragraphs to each one’s field_paragraph_type_content field. Typically this is accomplished by using the migration_lookup process plugin (formerly migration).

Every time an entity is imported, a row is created in the mapping table for that migration, with both the ID the entity has in the external source and the internal one it got after being imported. This way the migration keeps a correlation between both states of the data, for updating and other purposes.
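
For context, each migration’s mapping table is named after the migration (migrate_map_<migration_id>). A simplified view of its contents, with invented IDs, looks something like this; destinations whose IDs have more than one part get additional destid columns, which will matter shortly:

sourceid1 | destid1
----------|--------
17        | 4
18        | 5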

The migration_lookup plugin takes an ID from the external source and tries to find an internal entity whose ID is linked to that external one in the mapping table, returning the internal ID if it finds one. The entity reference field is then populated with that ID, effectively establishing a link between the entities on the Drupal side.

In the example below, the migration_lookup returns entity IDs and creates references to other Drupal entities through the field_event_schools field:

field_event_schools:
  plugin: iterator
  source: event_school
  process:
    target_id:
      plugin: migration_lookup
      migration: schools
      source: school_id

However, while references to nodes or terms basically consist of the ID of the referenced entity, when using the entity_reference_revisions destination plugin (as we did to import the Paragraph entities), two IDs are stored per entity. One is the entity ID and the other is the entity revision ID. That means the return of the migration_lookup processor is not an integer, but an array of them.

process:
  field_paragraph_type_content:
    plugin: iterator
    source: elements
    process:
      temporary_ids:
        plugin: migration_lookup
        migration: paragraphs_migration
        source: element_id
      target_id:
        plugin: extract
        source: '@temporary_ids'
        index:
          - 0
      target_revision_id:
        plugin: extract
        source: '@temporary_ids'
        index:
          - 1

So instead of returning that array directly (which obviously wouldn’t work), we use the extract process plugin to pull out the two integer IDs needed to create an effective reference.

Summary

In summary, it’s important to remember that migrating Paragraphs is a two-step process at minimum. First, you must migrate entities of type Paragraph. Then you must migrate entities referencing those imported Paragraph entities.

More on Drupal 8

Top 5 Reasons to Migrate Your Site to Drupal 8

Creating your Emulsify 2.0 Starter Kit with Drush

Oct 13 2017

Welcome to the second episode in our new video series for Emulsify. Emulsify 2.x is a new release that embodies our commitment to component-driven design within Drupal. We’ve added Composer and Drush support, as well as open-source Twig functions and many other changes to increase ease-of-use.

In this video, we’re going to teach you how to create an Emulsify 2.0 starter kit with Drush. This blog post follows the video closely, so you can skip ahead or repeat sections in the video by referring to the timestamps for each section.

PURPOSE [00:15]

This screencast specifically covers the Emulsify Drush command. The command’s purpose is to set up a new copy of the Emulsify theme.

Note: I used the word “copy” here and not “subtheme” intentionally. This is because your new copy’s base theme is Drupal Core’s Stable theme, NOT Emulsify.

This new copy of Emulsify will use the human-readable name that you provide, and will build the necessary structure to get you on your way to developing a custom theme.

REQUIREMENTS [00:45]

Before we dig in too deep, I recommend that you have the following installed first:

  • a Drupal 8 Core installation
  • the Drush CLI command, at least major version 8
  • Node.js, preferably the latest stable version
  • a working copy of the Emulsify demo theme, 2.x or greater

If you haven’t already watched the Emulsify 2.0 composer install presentation, please stop this video and go watch that one.

Note: If you aren’t already using Drush 9, you should consider upgrading as soon as possible, because the next minor release of Drupal Core, 8.4.0, is only going to work with Drush 9 or greater.

RECOMMENDATIONS [01:33]

We recommend that you use PHP 7 or greater, as you get massive performance improvements for very little work.

We also recommend that you use composer to install Drupal and Emulsify. In fact, if you didn’t use Composer to install Emulsify—or at least run Composer install inside of Emulsify—you will get errors. You will also notice errors if npm install failed on the Emulsify demo theme installation.

AGENDA [02:06]

Now that we have everything set up and ready to go, this presentation will first discuss the theory behind the Drush script. Then we will show what you should expect if the installation was successful. After that, I will give you some links to additional resources.

BACKGROUND [02:25]

The general idea of the command is that it creates a new theme from Emulsify’s files but is actually based on Drupal Core’s Stable theme. Once you have run the command, the demo Emulsify theme is no longer required and you can uninstall it from your Drupal codebase.

WHEN, WHERE, and WHY? [02:44]

WHEN: You should run this command before writing any custom code but after your Drupal 8 site is working and Emulsify has been installed (via Composer).

WHERE: You should run the command from the Drupal root or use a Drush alias.

WHY: You should NOT edit the Emulsify theme’s files directly. If you installed Emulsify the recommended way (via Composer), the next time you run composer update ALL of your custom code changes will be wiped out. If this happens, I really hope you are using version control.

HOW TO USE THE COMMAND? [03:24]

Arguments:

First, the command requires a single argument: the human-readable name. This name can contain spaces and capital letters.

Options:

The command has defaults set for options that you can override.

The first is the theme description, which will appear within Drupal and in your .info file.

The second is the machine-name; this is the option that allows you to pick the directory name and the machine name as it appears within Drupal.

The third option is the path; this is the path your theme will be installed to. It defaults to “themes/custom”, but if you don’t like that you can change it to any directory relative to your web root.

The fourth and final option is the slim option. This is for advanced users who don’t need demo content and want nothing but the bare minimum required to create a new theme.

Note:

Only the human_readable_name is required. Options can appear in any order, can be omitted entirely, and you can pass just one if you only want to change a single default.
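
Putting that together, a hypothetical full invocation might look like this (the option names follow the descriptions above; run drush help emulsify to confirm the exact spellings for your version):

drush emulsify "City Slicker" --description="A custom Emulsify-based theme" --machine-name=city_slicker --path=themes/custom --slim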

SUCCESS [04:52]

If your new theme was successfully created, you should see the success message in the output. In the example below, I used the slim option because it is a bit faster to run, but again, this is optional and NOT required.

The success message contains information you may find helpful, including the name of the theme that was created, the path where it was installed, and the next required step for setup.

THEME SETUP [05:25]

Now set up your custom theme. Navigate to your custom theme on the command line, type yarn, and watch as Pattern Lab is downloaded and installed. If the installation was successful, you should see a Pattern Lab success message, and your theme should now be visible within Drupal.

COMPILING YOUR STYLE GUIDE [05:51]

Now that you have Pattern Lab successfully installed and committed to your version control system, you are probably eager to use it. Emulsify uses npm scripts to set up a local Pattern Lab instance for displaying your style guide.

The script you are interested in is yarn start. Run this command for all of your local development. You do NOT have to have a working Drupal installation at this point to do development on your components.

If you need a designer who isn’t familiar with Drupal to make some tweaks, you will only have to give them your code base and have them run yarn to install, then yarn start to see your style guide.

It is, however, recommended that the initial setup of your components be done by someone with background knowledge of Drupal templates and themes, as the variables passed to each component will differ for each Drupal template.

For more information on components and templates, keep an eye out for our upcoming demo components and screencasts on building components.

VIEWING YOUR STYLE GUIDE [07:05]

Now that you have run yarn start, you can open your browser and navigate to the localhost URL that appears in your console. If you get an error here, you might already have something running on port 3000. If you need to cancel the script, hit control + c.

ADDITIONAL RESOURCES [07:24]

Thank you for watching today’s screencast, we hope you found this presentation informative and enjoy working with Emulsify 2.0. If you would like to search for some additional resources you can go to emulsify.info or github.com/fourkitchens/emulsify.


Thanks for following our Emulsify 2.x tutorials. Miss a post? The full series is here.

Pt 1: Installing Emulsify | Pt 2: Creating your Emulsify 2.0 Starter Kit with Drush | Pt 3: BEM Twig Function | Pt 4: DRY Twig Approach | Pt 5: Building a Full Site Header in Drupal

Just need the videos? Watch them all on our channel.

Download Emulsify

Chris Martin

Chris Martin is a support engineer at Four Kitchens. When not maintaining websites he can be found building drones, computers, robots, and occasionally traveling to China.

Sep 28 2017

If your site was built with Drupal within the last few years, you may be wondering what all the D8 fuss is about. How is Drupal 8 better than Drupal 6 or 7? Is it worth the investment to migrate? What do you need to know to make a decision? In this post we’ll share the top five reasons our customers—people like you—are taking the plunge. If you know you’re ready, tell us.

  1. Drupal 8 has a built-in, services-based API architecture. That means you can build new apps to deliver experiences across lots of devices quickly, and your content only needs to live in one place. D8’s architecture means you don’t have to structure your data differently for each solution—we’ve helped clients build apps for mobile, Roku, and Amazon Alexa using this approach (read how we helped NBC). If you’re on Drupal 6 now, a migration to Drupal 8 will allow you to unleash the power of your content with API integration.
  2. You can skip Drupal 7 and migrate straight to D8. If you’re on Drupal 6, migrating directly to Drupal 8 is not just doable—it’s advisable. It will ensure every core and contributed module, security patch, and improvement is supported and compatible for your site for longer.
  3. The Drupal 8 ecosystem is ready. One of the reasons people love Drupal is for the amazing variety of modules available. Drupal 8 is mature enough now that most of the major Drupal modules you have already work for D8 sites.
  4. Drupal 8 is efficient. Custom development on Drupal 8 is more efficient than previous versions—we’ve already seen this with our D8 clients and others in the Drupal community are saying the same thing. When you add that to the fact that Drupal 8 is the final version to require migration—all future versions will be minor upgrades—you’ve got a solid business reason to move to Drupal 8 now.
  5. It’s a smart business decision. Drupal 6 is no longer supported—and eventually Drupal 7 will reach “end of life”—which means any improvements or bug fixes you’re making to your existing site will need to be re-done when you do make the move. Migrating to Drupal 8 now will ensure that any investments you make to improving or extending your digital presence are investments that last.

If you’re still not sure what you need, or if you would like to discuss a custom review and recommendation, get in touch. At Four Kitchens, we provide a range of services, including user experience research and design, full-stack development, and support services, each with a strategy tailored to your success.

LET’S TALK!

Read about American Craft Council’s move to Drupal 8
Your site should use component-based theming, here’s how
See what we’ve done for other clients >>
Read more about the services we provide >>
Meet the team >>

Todd Ross Nienkerk

Todd Ross Nienkerk is the CEO and co-founder of Four Kitchens. He was born in a subterranean cave in the future.

Aug 09 2017

We are excited to announce the completion of the second major development phase of our engagement with Forcepoint: improving the authoring experience for editors and implementing a new design.

Reimagining the Editorial Experience

Four Kitchens originally launched Forcepoint’s spiffy new Drupal site in January 2016. Since then, Forcepoint’s marketing strategy has evolved, and they hired a marketing agency to perform some brand consulting, while Four Kitchens implemented their new approach in rebuilding the site. We also took the opportunity to revisit the editorial experience in Drupal’s administrative backend.

Four Kitchens has been using Paragraphs on some recent Drupal 8 projects and found it to be a compelling solution for clients that like to exert substantive editorial control at the individual page level—clients like Forcepoint. Providing content templates for markup that works hand in hand with the component-driven theming approach we favor is a primary benefit we get from using Paragraphs for body content.

Editorially, the introduction of Paragraphs gives Forcepoint a more flexible means of controlling content layout for individual pages without having to rely as heavily on Panels as we did for the initial launch. We’re still using Panels for boilerplate and some content-type-specific data rendering, but the reduced complexity required for editors to lay out body content will allow their content to evolve and scale more easily.

In addition to using Paragraphs for WYSIWYG content entry, Forcepoint editors are now also able to insert and rearrange related content, Views, Marketo forms, videos, and components that require more complex markup to render.

We’re big proponents of carefully crafted content models and structured data. Overusing Paragraphs runs the risk of removing some or even a lot of that structure. Used judiciously however, it allows us to give clients like Forcepoint the flexibility they want while still enforcing desirable constraints inherent in the design.

Congratulations!

We’ve been working with Forcepoint for over a year now, and are incredibly proud of the solutions we’ve created with them. This kind of close relationship and collaboration is what we strive for with all of our partners. We thrive on understanding our partners’ underlying business challenges and goals, collaborating with their teams, and creating solutions that delight their customers.

The Forcepoint team was led by Chris Devidal as the project manager, working alongside Taylor Smith who acted as internal product owner. Jeff Tomlinson was technical lead and assisted Patrick Coffey who adeptly wrangled all the difficult backend issues. Significant frontend technical leadership was provided by Evan Willhite who worked with Brad Johnson to implement a challenging design. Props also go to Keith Halpin, Neela Joshi and Adam Bennett at Forcepoint for their many contributions.

Jeff Tomlinson

Jeff Tomlinson enjoys working with clients to provide them with smart solutions to realize their project’s goals. He loves riding his bicycle, too.
