May 23 2019

Sometimes clients ask for the wrong thing. Sometimes developers build the wrong thing, because they didn’t ask the right questions. If you’re solving the wrong problem, it doesn’t matter how elegant your solution is.

One of the most important services that we as developers and consultants can provide is being able to help guide our clients to what they need, rather than simply giving them what they want. Sometimes those two things are aligned, but more often than not, figuring out the right thing to build takes some discovering.

Why don’t wants and needs match? It might be because the client hasn’t spent enough time thinking about the question, or because they haven’t approached it from the right angle. If that’s the case, we can help them to do that, either by asking the right questions or by acting as their rubber duck, providing a sounding board for their ideas. Alternatively, it might be because, as a marketing or content specialist, they lack sufficient awareness of the potential technological solutions to the question, and we can offer that.

Once you’ve properly understood the problem, you can start to look for a solution. In this article, I’ll talk about some examples of problems like this that we’ve recently helped clients to solve, and how those solutions led us to contribute two new Drupal modules.

There must be a module for that

Sometimes the problems are specific to the client, and the solutions need to be bespoke. Other times the problems are more general, and there’s already a solution. One of the great things about open source is that somebody out there has probably faced the same problem before, and if you’re lucky, they’ve shared their solution.

In general, I’d prefer to avoid writing custom code, for the same reasons that we aren’t rolling our own CMS. There are currently over 43,000 contributed modules available for Drupal, some of which solve similar problems, so sometimes the difficult part is deciding which of the alternatives to choose.

Sometimes there isn’t already a solution, or the solution isn’t quite right for your needs. Whenever that’s the case, and the problem is a generic one, we aim to open source the solutions that we build. Sometimes it’s surprising that there isn’t already a module available. Recently on my current project we came across two problems that felt like they should have been solved a long time ago, very generic issues for people editing content for the web - exactly the sort of thing that you’d expect someone in the Drupal community to have already built.

How hard could it be?

One area that sometimes causes friction between clients and vendors is around estimates. Unless you understand the underlying technology, it isn’t always obvious why some things are easy and others are hard.

[XKCD: Tasks]

Even experienced developers sometimes fail to grasp this - here’s a recent example where I did exactly that.

We’re building a site in Drupal 8, making heavy use of the Paragraphs module. When adding a webform to a paragraph field, there’s a select list with all forms on the site, sorted alphabetically. To improve usability for the content editors, the client was asking for the list to be sorted by date, most recently created first. Still thinking in Drupal 6 and 7 mode, I thought it would be easy. Use a view for selection, order the view by date created, job done - probably no more than half an hour’s work. Except that in Drupal 8, webforms are no longer nodes - they’re configuration entities, so there is no creation date to order by. What I’d assumed would be trivial would in fact require major custom development, the cost of which wouldn’t be justified by the business value of the feature. But there’s almost always another way to do things, which won’t be as time-consuming, and while it might not be what the client asked for, it’s often close enough for what they need.
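
For the curious, the difference is easy to see in code. A sketch (it assumes the Webform module’s ‘webform’ entity type, plus an existing node 1 and a ‘contact’ webform, purely for illustration):

    // Nodes are content entities, so they expose a created timestamp.
    $created = \Drupal\node\Entity\Node::load(1)->getCreatedTime();

    // Webforms in Drupal 8 are configuration entities: there is no
    // equivalent creation timestamp to sort a select list by.
    $webform = \Drupal::entityTypeManager()
      ->getStorage('webform')
      ->load('contact');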

What’s the real requirement?

In the example above, what the content editors really wanted was an easy way to find the relevant piece of content. The creation date seemed like the most obvious way to find it. If you jump to a solution before considering the problem, you can waste time going down blind alleys. I spent a while digging around in the code and the database before I realised sorting the list wouldn’t be feasible. By enabling the Chosen module, we made the list searchable - not what the client had asked for, but it gave them what they needed, and provided a more general solution to help with other long select lists. As is so often the case, it was five minutes of development work, once I’d spent hours going down a blind alley.

This is a really good example of why it’s so important to validate your assumptions before committing to anything, and why we should value customer collaboration over contract negotiation - for developers and end users to be able to have open conversations is enormously valuable to a smooth relationship, and it enables the team to deliver a more usable system.

Do you really need square pegs?

One area where junior developers sometimes struggle is in gauging the appropriate level of specificity to use in solving a problem. Appropriate specificity is particularly relevant when working with CSS, but also in terms of development work more generally. Should we be building something bespoke to solve this particular problem, or should we be thinking about it as one instance of a more generic problem? As I mentioned earlier, unless your problem is specific to your client’s business, somebody has probably already solved it.

With a little careful thought, a problem that originally seemed specific may actually be general. For example, try to avoid building CMS components for one-off pieces of a design. If we make our CMS components more flexible, it makes the system more useful for content editors, and may even mean that the next requirement can be addressed without any extra development effort.

Sometimes there can be a sense that requirements are immutable, handed down from on high, carved into stone tablets. Because a client has asked for something, it becomes a commandment, rather than an item on a wish list. Requirements should always be questioned. The further the distance between clients and developers, the harder it can be to ask questions. Distance isn’t necessarily geographical - with good remote collaboration, and open lines of communication, developers in different time zones can build a healthy client relationship. Building that relationship enables developers to ask more questions and find out what the client really needs, and it also helps them to be able to push back and say no.

Work with the grain

It can be tempting to imagine that the digital is infinitely malleable; that because we’re working with the virtual, anything is possible. When clients ask “can we do X?”, I usually answer that it’s possible, but the more relevant question is whether it’s feasible.

Just as the web has a grain, most technologies have a certain way of working, and it’s better to work with your framework rather than against it. Developers, designers and clients should work together to understand what’s simple and what’s complicated within the constraints. Is the extra complexity worth it, or would it be better to simplify things and deliver value quicker?

Sometimes that can feel like good cop, bad cop, where the designers offer the world, and developers say no. But the point isn’t that I don’t want to do the work, or that I want to charge clients more money. It’s that I would much rather deliver quick wins by using existing solutions, rather than having developers spend time on tasks that don’t bring business value, like banging their heads against the wall trying to bend a framework to match a “requirement” that nobody actually needs. It’s better for everyone if developers are able to work on more interesting things.

Time is on my side

As an example of an issue where a little technical knowledge went a long way, we were looking at enabling client-side sorting of tables. Sometimes those tables would include dates. We found an appropriate module, and helped to get the Drupal 8 version working, but date formats can be tricky. What is readable to a human in one cultural context isn’t necessarily easy for another, or for a computer, so it’s useful to add some semantic markup to provide the relevant machine-readable data.
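
For example (an illustrative snippet rather than anything taken from the module), the HTML time element pairs human-readable text with a machine-readable attribute:

    <td><time datetime="2019-05-23">23rd May 2019</time></td>

A table-sorting script can then compare the datetime values directly, however the visible dates happen to be formatted.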

Drupal has pretty good facilities for managing date and time formats, so surely there must be a module already that allows editors to insert dates into body text? Apparently not, so I built CKEditor Datetime.
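
As a rough illustration of those facilities (a sketch, not the module’s actual code), core’s date.formatter service can render a timestamp using any of the site’s configured date formats:

    // Render the current time using the site's 'medium' date format;
    // custom formats defined in the admin UI work the same way.
    $formatted = \Drupal::service('date.formatter')->format(time(), 'medium');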

With some helpful tips from the community on Drupal Slack, I found some CKEditor examples, and then started plumbing it into Drupal. Once I’d got that side of things sorted, I got some help from the plugin maintainer to get the actual sorting sorted. A really nice example of open source communities in action.

Every picture tells a story

Another challenge that was troubling our client’s content team was knowing what their images would look like when they’re rendered. Drupal helpfully generates image derivatives at different sizes, but when the different styles have different aspect ratios, it’s important to be able to see what an image will look like in different contexts. This is especially important if you’re using responsive images, where the same source image might be presented at multiple sizes depending on the size of the browser window.

To help content editors preview the different versions of an image, we built the Image Styles Display module. It alters the media entity page to show a preview of every image style in the site, along with a summary of the effects of that image style. If there are a lot of image styles, that might be overwhelming, and if the aspect ratio is the same as the original, there isn’t much value in seeing the preview, so each preview is collapsible, using the summary/details element, and a configuration form controls which styles are expanded by default. A fairly simple idea, and a fairly simple module to build, so I was surprised that it didn’t already exist.
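
To give a flavour of how such a preview can be assembled (a minimal sketch, not the module’s actual code; the style name and file URI are placeholders), each image style can be rendered inside a collapsible details element using core’s image_style theme hook:

    // One collapsed preview in a render array, built once per image style.
    $build['preview_thumbnail'] = [
      '#type' => 'details',
      '#title' => t('Thumbnail'),
      '#open' => FALSE,
      'image' => [
        '#theme' => 'image_style',
        '#style_name' => 'thumbnail',
        '#uri' => $file_uri,
      ],
    ];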

I hope that these modules will be useful for you in your projects - please give them a try: CKEditor Datetime and Image Styles Display.

If you have any suggestions for improvement, please let me know using the issue queues.

May 16 2019

One of the best things about Drupal’s open-source ecosystem is that it empowers you to be open-minded. Given the vast array of solutions and modules available, users can customize their site to their whims. Alternatively, if you think up and code something new, your contributions can be shared online with other users. With all of the customization available, Drupal is a conducive platform for outside-the-box thinking.


Decoupling is a recent example of this philosophy. Where a standard Drupal website would feature a Drupal-powered front and backend, decoupling opens the door for a variety of possibilities. A decoupled site can utilize different platforms and technologies for both the front and backend. For example, a decoupled site could utilize Drupal’s backend CMS while running a React-powered frontend. Such is Drupal’s flexibility that it can power scores of different, user-facing channels from a single backend, including other sites, native apps, Internet of Things (IoT), and more.

This decoupled or “headless” concept has more applications than just for site design, though. The search function of a website, for one, can benefit from components that utilize this headless approach – and not a moment too soon. As Google has begun to sunset its Google Search Appliance offering, there is now a need for an open and flexible search tool with enterprise-level capabilities.

At this year’s Midwest Drupal Camp, the team from Palantir demonstrated that a decoupled approach to site search was viable. This solution, federated search, allows for indexing and searching across multiple sites. For organizations with a large web portfolio across different platforms, this open federated search solution can fill the gap left by Google.


Understanding why federated search for Drupal is important requires an understanding of how regular site search functions operate. At the core, the search feature is built from three different components: the source, the index and the results. The source simply refers to all of the searchable content on a given site, from blogs to landing pages. The index is a compilation of metadata that makes the content from the source easier to parse. At Duo, we often use Apache Solr, a platform-agnostic, open source solution for indexing, as it provides speed, power and its own server capabilities. Finally, the results refers to the front-end experience that compiles and delivers the search results to the user.

The above setup will work fine for most simple websites, but larger organizations often require a more robust solution. With federated search, users can query multiple sites across different platforms without placing much strain on Drupal, since Apache Solr handles generating the index and providing the results. This is accomplished through some tweaking of the basic site search formula.

Part of what makes this search so powerful is that it takes advantage of Drupal’s backend without relying on its frontend. For that, Apache Solr’s dedicated servers empower this new search solution by shouldering the burden of indexing and providing the results. Before it can work, though, some configuration is needed. Based on this configuration, Apache Solr can encompass searches across different sites – including sites that aren’t built with Drupal. Creating this custom solution, in conjunction with the Search API and Search API Solr modules, will ensure that the different data types being indexed will be standardized.  

As for the results section, the best approach is a decoupled one. Building out the front-end component in the React JavaScript library allows for robust searches without slowing down the rest of the site. By using Drupal’s CMS, Apache Solr and React’s power in coordination, any organization can create a search feature that quickly indexes vast ranges of data and delivers it in an easily digestible manner. For a deeper dive, Palantir has made their demo of federated search available.

This powerful and streamlined take on site search has a variety of applications. Before releasing the solution, Palantir originally developed federated search for the University of Michigan, as each department ran their own sites on different platforms. Federated search now allows users to seamlessly search for information across the entire school’s network, regardless of the technology used to deliver the content. Beyond university ecosystems, federated search also presents an opportunity for eCommerce. Using this solution, products from different vendors can be consolidated into a simple search.

Thanks to Drupal being open source, organizations can utilize federated search and any other contributed solution at any time. This level of openness is what makes Duo such champions of the Drupal platform. At Duo, we’re committed to exploring new features like this and helping each of our partners think outside the box. If you’re ready to start rethinking your website or sites, we’re just a click away.

Explore Duo

May 14 2019

When every business has a website, how do you stand out? Because customers expect so much more from digital brands, utilizing a marketing platform is essential. Marketing automation software enables businesses to deliver personalized engagement, strengthening customer bonds and driving new business.


Until recently, businesses have, by-and-large, had to rely on proprietary solutions like Marketo and Salesforce Marketing Cloud to handle their needs. Now, Acquia is looking to disrupt the marketplace. The Drupal SaaS giant has just acquired Mautic, an open-source marketing automation and campaign management platform. Now, Drupal users will have an open alternative to manage their customer experience needs.

This is the latest step Acquia has taken toward their vision for an Open Digital Experience Platform. The company has been working in the content management space for years and is now turning toward a data-driven, experience management approach. With their “API-first” philosophy, Acquia hopes to offer Drupal users maximum flexibility and the ability to meet any organizational needs that may arise. This level of freedom gives marketers the chance to customize every step of the customer cycle across channels.

While still a relatively young company, Mautic is a perfect fit for Acquia’s portfolio. Acquia specializes in content management, personalization and commerce; Mautic complements those offerings with its specialties in multichannel delivery, campaign management and journey orchestration. And with both Drupal and Mautic built on Symfony and PHP, collaboration and integration will be made easier. With 200,000 installations, Mautic has already won over many marketers and, with Acquia’s expanded reach, will likely reach many more. Because of Mautic’s API-friendly build, our team at Duo can easily integrate the platform with a site.

Open Marketing (YouTube)

On a more philosophical level, Mautic echoes Acquia’s (and the Drupal community’s) open-source ethos. Currently, Mautic is the only open source alternative to the aforementioned legacy marketing cloud ecosystems. This openness makes it easier for the development community to create marketing platforms built for a company’s specific needs. The ease of use extends to the marketing side, too. Where many proprietary marketing automation suites have steep learning curves and are bundled with countless other products in their respective clouds, Mautic benefits from its ease of use and its modern, streamlined design - allowing anyone on a team to get their organization’s message out faster than ever. To take a look at the platform in action, watch the demo embedded below.

[embedded content]

In addition to Mautic’s open offerings, the company also offers several commercial solutions. The first, Mautic Cloud, is a fully managed SaaS version of the platform that includes some premium features not openly available. For enterprise clients, Mautic has Maestro. This product allows multinational organizations to segment their marketing efforts by territory, while still allowing for sharing between regions. Campaigns can also be copied and repeated in different territories. These features will remain separate from Acquia Lift, though Duo can handle integrations with either service if needed.

So what does all of this mean for businesses using Drupal? This being an open-source community, it means more freedom! Rather than being locked into whatever is offered by legacy martech vendors, Drupal allows you to leverage best-of-breed services and solutions. At Duo, we’re always striving for more openness. Your solution should match your exact needs, and the Drupal environment provides a vast range of solutions to make your new site a reality. Now, Mautic joins those ranks, and can be used in conjunction with whatever other modules you opt to use. The open source ecosystem that makes Drupal so robust also gives Mautic more flexibility than other marketing automation platforms, at a lower price tag as well.

By 2023, spending on global marketing automation is expected to reach $25 billion. This software isn’t going anywhere, so why would you pay for a stale and outdated marketing suite that you may not be able to use to its full potential? With the acquisition of Mautic, Acquia has given marketers a whole new option when it comes to managing customer engagement. Duo can help you navigate the various marketing paths available while ensuring that whatever choice you make will exceed your expectations.

Explore Duo

May 01 2019

At this year’s DrupalCon, held earlier in April in Seattle, Drupal founder Dries Buytaert gave attendees a preview of the newest version of Drupal: 8.7. Now, the wait is over. Drupal 8.7 is launching today, May 1, adding a new suite of features and fixes that will improve the Drupal experience for everyone with an up-to-date platform.


In his keynote, or “Driesnote,” Dries laid out what made this new release so special. Speaking to the Drupal community at large, Dries shared that the Drupal team had several core objectives when developing Drupal 8.7:

  • Make Drupal easy for content creators and site builders
  • Make Drupal easy to evaluate and adopt
  • Keep Drupal impactful and relevant
  • Reduce total cost of ownership for developers and site owners

Each one of these goals represented a major challenge, but the newest version has delivered a variety of updates that each make Drupal a more robust platform.

Empowering content creators

One of the biggest features in Drupal 8.7 is the newly stable Layout Builder tool. The product of the efforts of 123 contributors and 68 supporting organizations, Layout Builder makes designing pages more user-friendly. As the name implies, the Layout Builder tool enables editors to manually adjust the design and format of a page. With this tool, editors can make changes to the layout without having to involve developers every time. Dries showed a demo of the Layout Builder tool during the Driesnote.

Along with Layout Builder, the other major content improvement ushered in by Drupal 8.7 is the updated Media module. As of this most recent update, reusable media, images, video and drag-and-drop features for the Media module are all stable, with the media library currently in the “experimental” state. Combined with Layout Builder, these updates make Drupal 8.7 a great update for content editors.

Out of the box functionality

While Drupal 8.7 certainly makes life easier for editors, it doesn’t stop there. New out of the box features make Drupal easier to adopt than ever. New to the Umami theme in Drupal 8.7 are demo articles with Spanish translations included by default, along with improved accessibility throughout the theme, including new labels and focus styles. This helps to show off Drupal 8.7’s multilingual and accessibility capabilities right out of the box. Additionally, the “Welcome tour” feature makes it easier for agencies to demo the platform. All of these features are included with the new update automatically.

Staying relevant

To remain a driving force in the market, Drupal needs to keep up with the times. The biggest breath of fresh air Drupal 8.7 brings to the platform is the addition of the JSON:API to the core. This development extends Drupal’s “API-first” philosophy, enabling decoupled and headless solutions. If this type of buildout is what your organization needs, the Drupal 8.7 update makes developing these popular solutions much easier.

Lowered costs

Because Drupal 9 (D9) is built on the same codebase as D8, the eventual upgrade process will be much easier than a conventional website upgrade. Previously, upgrades were major undertakings that required a lot of development effort. Now, as long as a site is not using any deprecated code, upgrading to D9 will be very straightforward. A tool called drupal-check is already available to scan for deprecated code (see the example below), so it’s already possible to start getting a site ready for D9. In the meantime, Drupal 8.7 offers a number of new features and enhancements and is another step toward the eventual D9 upgrade.
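
For instance (the paths here are illustrative), drupal-check can be added to a project with Composer and pointed at your custom code:

    composer require mglaman/drupal-check --dev
    ./vendor/bin/drupal-check web/modules/custom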

There’s certainly plenty to enjoy with this new release, but the updates don’t stop here. With Drupal on a six-month release cycle, there will be a new version of Drupal on November 1. Drupal 8.8 promises an updated WYSIWYG editor along with a potentially updated Admin UI, an ongoing project for the Drupal team. Beyond that is D9, the next edition of the platform.

Duo can help you make the most out of Drupal 8.7’s newest features while also planning your roadmap for upgrading. If you’re ready to see what the future of Drupal has to offer, reach out to us today.

Explore Duo

Apr 25 2019

Every year, the Drupal community gathers in a new city for the annual DrupalCon show. More expansive than regional camps, DrupalCon gives attendees from all around the world the chance to collaborate and learn from each other face-to-face.


The expansive audience of the convention makes it an ideal venue for Drupal creator Dries Buytaert to address the community. In his “Driesnote,” Dries usually focuses on highlights from the past year and upcoming developments for the platform. As such, the upcoming Drupal 8.7 update got a lot of spotlight, as did the efforts of star contributors working to make Drupal so robust. And with Drupal 9 coming in the not-so-distant future, Dries’ address this year demonstrated the advances being made at the cutting edge of Drupal.

While Dries focused on the present and future benefits of Drupal 8, he didn’t neglect users still on Drupal 7. First launched in 2011, Drupal 7 remains the most widely deployed version of the platform, even though Drupal 8 was released in 2015. The reasons for the relatively slow adoption of Drupal 8 are numerous, ranging from incompatible modules to the standard costs of a redesign. Dries understands that, and in this year’s Driesnote, offered words of support to those who have not made the change.

“There’s no need to panic,” Dries said. Indeed, he said that Drupal 7 will continue to be officially supported for over two-and-a-half years, until November 2021. At that point, Drupal 7 will reach its end of life.

What happens in November 2021, you might ask? We’ve covered the upcoming Drupal release pipeline in an earlier blog post, but the major driving force behind this date is Symfony 3. Symfony 3 is a major dependency of Drupal 8, and when it is sunset in November 2021, sites still running on D8 will be exposed to security threats. That’s why most Drupal users will need to upgrade to Drupal 9 before November 2021, and why Drupal 7’s end of life has been set for the same date.

As Dries said, though, there’s no need to panic if you’re on Drupal 7. This end-of-life date is still over two years away, giving you plenty of time to consider your options and decide how to move forward. In the meantime, Drupal 7 will continue to be supported by both the open-source community and agencies like Duo. 

If you’re running Drupal 7 and want to get a head-start, there are a few options. Upgrading to Drupal 8 is the most logical route, as it is the direct successor to D7. The Driesnote also noted that more and more modules that D7 users are accustomed to using are now functional in D8, which will make the transition smoother. The big draw of this path, however, is the ease with which you’ll be able to upgrade to Drupal 9. Drupal 8 is built on the same codebase as D9, which means that an upgrade between those two systems will not require a major design or development overhaul.

Another option for D7 users is to bypass D8 altogether. Jumping from Drupal 7 to Drupal 9 would be more akin to a traditional redesign, both in terms of the work involved and the cost. That being said, even though moving from D8 to D9 will be relatively easy, it will still require some effort. Going from D7 to D9 streamlines the process, requiring only one comprehensive upgrade.

Whichever path you take, rest assured that there is time. Dries acknowledges that there are still many users who enjoy the benefits of Drupal 7, and this year’s Driesnote signifies that this crowd hasn’t been forgotten. While every Drupal 7 site will eventually need an upgrade, users can rest easy knowing that they have plenty of time.

When the time comes to make a decision about upgrading, Duo can help you chart your journey ahead. Whether you want to stay on Drupal 7 or can’t wait for Drupal 9, we’re committed to delivering the best possible version of your site.

Explore Duo

Apr 19 2019

What we learned from our fellow Drupalists

By Lisa Mirabile

On April 7th, our team packed up our bags and headed off to Seattle for one of the biggest can’t-miss learning events of the year, DrupalCon.

“Whether you’re C-level, a developer, a content strategist, or a marketer — there’s something for you at DrupalCon.” -https://events.drupal.org/

As you may have read in one of our more recent posts, we had a lot of sessions that we couldn’t wait to attend! We were very excited to find new ideas that we could bring back to improve our services for constituents or the agencies we work with to make digital interactions with government fast, easy, and wicked awesome. DrupalCon surpassed our already high expectations.

GovSummit

At the Government Summit, we were excited to speak with other state employees who are interested in sharing knowledge, including collaborating on open-source projects. We wanted to see how other states are working on problems we’ve tried to solve and to learn from their solutions to improve constituents’ digital interactions with government.

One of the best outcomes of the Government Summit was an amazing “birds of a feather” (BOF) talk later in the week. North Carolina’s Digital Services Director Billy Hylton led the charge for digital teams across state governments to choose a concrete next step toward collaboration. At the BOF, more than a dozen Massachusetts, North Carolina, Georgia, Texas, and Arizona digital team members discussed, debated, and chose a content type (“event”) to explore. Even better, we left with a meeting date to discuss specific next steps on what collaborating together could do for our constituents.

Session Highlights

The learning experience did not stop at the GovSummit. Together, our team members attended dozens of sessions. For example, I attended a session called “Stanford and FFW — Defaulting to Open” since we are starting to explore what open-sourcing will look like for Mass.gov. The Stanford team’s main takeaway was the tremendous value they’ve found in building with and contributing to Drupal. Quirky fact: their team discovered during user testing among high-school students that “FAQ” is completely mysterious to younger people: they expect the much more straightforward “Questions” or “Help.”

Another session I really enjoyed was called “Pattern Lab: The Definitive How-to.” It was exciting to hear that Pattern Lab, a tool for creating design systems, has officially merged its two separate cores into a single one that supports all existing rendering engines. This means simplifying the technical foundation to allow more focus on extending Pattern Lab in new and useful ways (and less just keeping it up and running). We used Pattern Lab to build Mayflower, the design system created for the Commonwealth of Massachusetts and implemented first on Mass.gov. We are now looking at the best ways to offer the benefits of Mayflower — user-centeredness, accessibility, and consistent look and feel — to more Commonwealth digital properties. Some team members had a chance to talk later to Evan Lovely, the speaker and one of the maintainers of Pattern Lab, and were excited by the possibility of further collaboration to implement Mayflower in more places.

There were a variety of other informative topics as well; my peers and I enjoyed more sessions than we can name here.

A Day in the Exhibit Hall

Our exhibit hall booth at DrupalCon 2019. Talking to fellow Drupalists at our booth.

On Thursday we started bright and early to unfurl our Massachusetts Digital Service banner and prepare to greet fellow Drupalists at our booth! We couldn’t have done it without our designer, who put all of our signs together for our first time exhibiting at DrupalCon (thanks, Eva!).

It was remarkable to be able to talk with so many bright minds in one day. Our one-on-one conversations took us on several deep dives into the work other organizations are doing to improve their digital assets. Meeting so many brilliant Drupalists made us all the more excited to share some opportunities we currently have to work with them, such as the ITS74 contract to work with us as a vendor, or our job opening for a technical architect.

We left our table briefly to attend Mass.gov: A Guide to Data-Informed Content Optimization, where team members Julia Gutierrez and Nathan James shared how government agencies in Massachusetts are now making data-driven content decisions. Watch their presentation to learn:

  1. How we define wicked awesome content
  2. How we translate indicators into actionable metrics
  3. The technology stack we use to empower content authors

The Splash Awards

To cap it off, Mass.gov, with partners Last Call Media and Mediacurrent, won Best Theme for our custom admin theme at the first-ever Global Splash awards (established to “recognize the best Drupal projects on the web”)! An admin theme is the look and feel that users see when they log in. The success of Mass.gov rests in the hands of all of its 600+ authors and editors. We’ve known from the start of the project that making it easy and efficient to add or edit content in Mass.gov was key to the ultimate goal: a site that serves constituents as well as possible. To accomplish this, we decided to create a custom admin theme, launched in May 2018.

A before-and-after view of our admin theme

Our goal was not just a nicer look and feel (though it is that!), but a more usable experience. For example, we wanted authors to see help text before filling out a field, so we brought it up above the input box. And we wanted to help them keep their place when navigating complicated page types with multiple levels of nested information, so we added vertical lines to tie together items at each level.

Last Call Media founder Kelly Albrecht crosses the stage to accept the Splash award for Best Theme on behalf of the Mass.gov Team. All the Splash award winners!

It was a truly enriching experience to attend DrupalCon and learn from the work of other great minds. Our team has already started brainstorming how we can improve our products and services for our partner agencies and constituents. Come back to our blog weekly to check out updates on how we are putting our DrupalCon lessons to use for the Commonwealth of Massachusetts!

Interested in a career in civic tech? Find job openings at Digital Service.
Follow us on Twitter | Collaborate with us on GitHub | Visit our site

Apr 02 2019

For many businesses, eCommerce is an increasingly important trend. With the potential for frictionless and simple exchanges, doing business on the web has never been more appealing to customers. To make eCommerce work for your business, though, your site needs to be robust enough to handle every step of every transaction.



Photo by Igor Miske

With its open source community and robust core, the Drupal platform is a worthy solution for any and all eCommerce needs. While a site built on Drupal will have the enterprise capabilities needed to handle a wide array of business needs, one of the major selling points of the platform is its flexibility. Drupal can be decoupled, allowing you to use Drupal as your backend CMS while utilizing a third-party solution for your front end. If your brand is tied to an existing look or design, a decoupled or “headless” solution will let you have your cake and eat it, too.

As you might imagine, headless solutions have major implications for the possibilities of eCommerce platforms. At this year’s MidCamp, an annual meetup of the Drupal community in the Chicago area, Commerce Guys Product Lead Matt Glaman took a look at the future of headless commerce solutions. What emerged from the session painted a picture of progress, as developers are continuing to create solutions in Drupal that will power a wide array of eCommerce solutions.


Leading the path toward making headless commerce more feasible is Commerce Guys, a Drupal software firm that held a couple of events at MidCamp on the subject. Best known for creating the Drupal Commerce module and the shopping-cart software Ubercart before it, Commerce Guys brought years of experience in the eCommerce space to this year’s event.

Offering a look at the technical side of Drupal-based eCommerce, Glaman examined the various challenges and solutions surrounding headless commerce. As he explained, the heavily structured data model of eCommerce in general presents a challenge, as do the relationships between the various layers in a site, such as the data and presentation layers. Making sure that the connections between the Drupal-backend and the myriad of frontend possibilities are robust and stable is essential to making headless commerce solutions work for users.

To address these challenges, Glaman highlighted a variety of existing API solutions. To enable headless commerce, the available options include the GraphQL data query language, JSON-RPC, JSON API and the RESTful Web Services spec. Though the latter two have the convenience factor of being included in Drupal Core, all of these solutions have their strengths and weaknesses in the context of headless commerce.

The efficacy of each solution varies depending on which stage of the eCommerce process you look at. For catalogues and product display pages, using the JSON API offers strong query capabilities but doesn’t support mixed-bundle collections. A GraphQL solution, on the other hand, supports cross-bundling, but queries can become very large. Most existing solutions also struggle with “add to cart” forms, as it can be challenging to create reusable solutions. Overcoming that hurdle requires using solutions in concert, such as a GatsbyJS/React and GraphQL build. Finally, for carts, the JSON API requires integration with a Cart API, while the aforementioned query issues with GraphQL require the use of GraphQLMutation plugins to solve. For a deeper look into the nuances of these existing headless commerce solutions, you can watch Glaman’s full presentation here:

[embedded content]

So, while headless commerce in Drupal is achievable, it takes some effort to get to a point where it can handle your business’ eCommerce needs. Luckily, the strength of Drupal’s open source community means that you don’t have to wait for the Commerce Guys to fix everything. A firm like Duo can help you fine-tune an eCommerce solution to fit your needs. Don’t just take our word for it; our previous work with the Chicago Botanic Garden demonstrates our prowess in designing unique and robust eCommerce platforms.

If you’re interested in exploring a headless commerce platform on Drupal or are just looking to make your online business run more smoothly, Duo is the right partner for you.

Explore Duo

Apr 01 2019

Vienna, VA, March 19, 2019

Mobomo, LLC is pleased to announce our award as a prime contractor on the $25M Department of the Interior (DOI) Drupal Developer Support Services BPA. Mobomo brings an experienced and extensive federal Drupal practice team to DOI. Our team has launched a large number of award-winning federal websites in both Drupal 7 and Drupal 8, including www.nasa.gov, www.usgs.gov, and www.fisheries.noaa.gov. These sites have won industry recognition and awards, including the 2014, 2016, 2017 and 2018 Webby Awards; two 2017 Innovate IT awards; the 2018 MUSE Creative Award; and the Acquia 2018 Public Sector Engage award.

DOI has been shifting its websites from an array of Content Management System (CMS) and non-CMS-based solutions to a set of single-architecture, cloud-hosted Drupal solutions. In doing so, DOI requires Drupal support for hundreds of websites that are viewed by hundreds of thousands of visitors each year, including its parent website, www.doi.gov, managed by the Office of the Secretary. Other properties include websites and resources provided by its bureaus  (Bureau of Indian Affairs, Bureau of Land Management, Bureau of Ocean Energy Management, Bureau of Reclamation, Bureau of Safety and Environmental Enforcement, National Park Service, Office of Surface Mining Reclamation and Enforcement, U.S. Fish and Wildlife Service, U.S. Geological Survey) and many field offices.

This BPA provides that support. The period of performance for this BPA is five years and it’s available agency-wide and to all bureaus as a vehicle for obtaining Drupal development, migration, information architecture, digital strategy, and support services. Work under this BPA will be hosted in DOI’s OpenCloud infrastructure, which was designed for supporting the Drupal platform.

Mar 19 2019

One of the challenges with an extensible platform like Drupal is the sheer number of available modules: with tens of thousands to choose from, evaluating which ones are the best fit for your application can be a daunting task. Every year, I have posted a round-up of the top modules to help steer newcomers to a vetted list of essential contributed modules. This article will go a step further by offering guidelines to help your project team assemble modules that meet your organization’s needs.

Methodology

There are many criteria to consider when evaluating Drupal community projects. Each factor is not weighted the same and very few should be considered in isolation. Before reviewing the list below, it’s important to assess your organization’s risk profile.

For example, an agency with full-time Drupal developers is going to be able to provide upstream support for modules in the form of patching and other improvements. As a result, modules that might have otherwise been disqualified could be supported in a way that mitigates risk. Many smaller organizations, however, might want to steer more towards stable, vetted projects. Doing so will ease the maintenance burden over time.

Example Project Page

Say, for example, you are evaluating Metatag. Maintained by Mediacurrent, this module provides a flexible system for customizing the meta tags used on a website.

Metatag page on Drupal.org

Evaluation criteria

1. Module Usage

The project page will show downloads and site usage. Usage can be viewed by version (i.e. Drupal 6, 7, 8) and is weighted higher than download counts. Low usage of a few hundred sites, or less, would likely disqualify a project.

module usage

2. Issue Queue Activity

The project issue queue will indicate the level of activity and number of bugs including critical issues and issues flagged for security. The volume of bugs is not necessarily a problem in that larger projects will have more issues reported. The severity of bugs should be considered as well as the overall activity. Projects with only a handful of open issues logged should be considered as a higher risk. Projects with frequent issue updates and patches are considered lower risk.

issue queue activity

3. Security

The Drupal community has a dedicated security team to review projects for potential security vulnerabilities. Each module maintainer has to “opt-in” to receive security coverage and must have a stable release available. Projects that have not opted into security coverage do add risk, which needs to be considered. Check out our blog for more ways to assess modules for security.

Drupal.org security advisory

4. Manual Review and Testing 

If possible, have developers spot check module code. Look for red flags in terms of code styling, organization and general adherence to Drupal.org best practices. Additionally, even after a module has been preliminarily screened it should be further evaluated when installed and configured. If the module appears buggy, unstable, or does not appear to fit the project needs, it might need to be replaced with an alternative solution.

index: metatag

5. Release Status

The release status is solely managed by the project maintainers and does not fully indicate the stability of the module. The release version is a factor in that it indicates the project maintainers’ assessment of stability. There is no governing model for how releases are labeled. Projects can have 1.0 stable releases and still have outstanding security bugs or critical issues, while development or sandbox modules can sometimes be production ready. It is important to consider other factors before making a final judgment. Modules with dev releases should be further scrutinized and vetted, including a more exhaustive manual code review, and alternatives should be evaluated. It should be noted that a sandbox module in some cases will still be advantageous to a fully custom coded alternative.

Metatag module release status 8.x

6. Commit Activity

Project code updates and the frequency of stable releases are highly weighted. Projects without commits in 6+ months are considered higher risk. Frequent project updates and releases are a good indicator of stability and support.

commits for Metatag

7. Project information

The maintainer will indicate whether a project is actively maintained, minimally maintained, obsolete, or abandoned. Obsolete projects are immediately disqualified.  

project information

8. Risk Assessment

The size and scope of the module’s functionality should be very strongly weighted. A module that has very few lines of code and adds simple functionality, e.g. a block on a page, is considered relatively low risk. A module that manipulates data in any way is higher risk, as problems with the module, or its removal, could result in data corruption or loss. A module that provides critical functionality to a project should be evaluated much more deeply than a module that provides marginal benefit.

9. Benefit

Modules should be evaluated based on their fit for the project. Every module added to a project generates some level of risk in terms of stability, security, maintenance overhead, etc. Therefore, inclusion depends on the demonstration of benefit.

Your Feedback

Have any suggestions or feedback? Talk to me at https://twitter.com/drupalninja or use the comments below. We are always looking to make improvements to our processes.

Feb 21 2019

A PHP code execution bug affecting the Drupal core CMS has been discovered. The Drupal team has released a highly critical new security patch as a fix, urging all system administrators to update affected modules and configurations immediately.


Not all Drupal sites are affected by the bug. According to the security advisory issued on drupal.org, sites that meet one of the following conditions are affected: 

  • The site has the Drupal 8 core RESTful Web Services (rest) module enabled and allows PATCH or POST requests, or
  • the site has another web services module enabled, like JSON:API in Drupal 8, or Services or RESTful Web Services in Drupal 7.

This bug arose because some field types are not properly sanitizing data from non-form sources. This enables arbitrary PHP code executions in certain cases. If not dealt with, a malicious user could exploit the bug to invade a Drupal site and take control of an affected server. Given that Drupal is such a popular website publishing CMS worldwide, with over 285,000 sites running on Drupal 8 and more than 800,000 sites running Drupal 7, a security issue of this magnitude warranted a swift response.

Per the advisory released on Wednesday, Drupal recommended that potentially affected users update Drupal core to the latest release immediately, along with any contributed modules covered by related security advisories.

If a site cannot be updated immediately, the Drupal Security Team recommended a set of mitigations instead: site admins can disable all web services modules, or configure their web servers to not allow PUT/PATCH/POST requests to web services resources.

The security patch was widely released on Wednesday, but Drupal considered the issue serious enough to let admins know about the release a day in advance. Duo was one of the agencies informed and our team got to work as soon as the patch was released. The advance warning gave our developers enough time to block out time on Wednesday afternoon to ensure that every site we’re responsible for was updated immediately. By the end of the day, we had patched all affected Duo client sites. 

While working with an agency helps guarantee a proactive solution to security fixes, Drupal’s in-house security team also plays a major role in maintaining diligence. A team of over 30 volunteers, the Drupal Security Team only recruits from experienced members of the Drupal community. As such, these active and committed watchdogs are typically able to identify problems before they become widespread. The current bug, for instance, was identified by the Drupal Security Team.

A strong security apparatus means nothing, however, if users don’t put in the necessary effort to keep their sites safe. In 2018, the Drupalgeddon 2 bug affected over a million Drupal sites. While many users subsequently patched their servers, an analysis conducted a couple of months after that bug was discovered revealed that over one hundred thousand sites remained vulnerable. All of the foresight in the world can’t protect a site if security concerns aren’t addressed as soon as they’re brought to light.

While major security issues like the one this week and Drupalgeddon 2 are rare, all code has bugs. The best thing a site can do to defend against potential vulnerabilities is to be proactive, actively searching for and fixing issues. Duo offers these protections on a continuous basis and, as part of the Drupal community, is able to discover and implement solutions as soon as they become available. In an increasingly digital world, it’s critical that companies find a partner who can handle their cybersecurity needs promptly and without complication.

Explore Duo

Feb 01 2019

In this article we will see how to update data models in Drupal 8, how to distinguish between updating the model and updating content, how to create default content, and finally, the procedure to adopt for successful deployments, avoiding surprises in a continuous integration/delivery Drupal cycle.

Before we start, I would encourage you to read the documentation of the hook hook_update_N() and to take into account all the possible impacts before writing an update.

Updating the database (executing hook updates and/or importing the configuration) is one of the most problematic tasks in a Drupal 8 deployment process, because the order in which structure and data updates are applied is not well defined in Drupal, and this can cause several problems if it is not completely controlled.

It is important to differentiate between a contributed module published on drupal.org and aimed at a wide audience, and a custom Drupal project (a set of Drupal contrib/custom modules) designed to provide a bespoke solution in response to a client’s needs. In a contributed module it is rare to have a real need to create instances of configuration/content entities; deploying a custom Drupal project, on the other hand, makes updating data models more complicated. In the following sections we will list all the possible types of updates in Drupal 8.

The Field module allows us to add fields to bundles. We must distinguish between the data structure that will be stored in the field (the static schema() method) and all the settings of the field and its storage, which are stored as configuration. All the dependencies related to the configuration of the field are stored in the field_config configuration entity, and all the dependencies related to the storage of the field are stored in the field_storage_config configuration entity. Base fields are stored by default in the entity’s base table.

Configurable fields are the fields that can be added via the UI and attached to a bundle, and which can be exported and deployed. Base fields are not managed by the field_storage_config and field_config configuration entities; they are defined in code, common to all bundles of the entity type.
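
As a reminder of what a base field looks like in code, here is a minimal sketch for a hypothetical custom entity class (it assumes Drupal\Core\Field\BaseFieldDefinition and Drupal\Core\Entity\EntityTypeInterface are imported):

    // In the entity class; the field exists for every bundle of the type.
    public static function baseFieldDefinitions(EntityTypeInterface $entity_type) {
      $fields = parent::baseFieldDefinitions($entity_type);
      $fields['subtitle'] = BaseFieldDefinition::create('string')
        ->setLabel(t('Subtitle'));
      return $fields;
    }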

To update an entity definition or its component definitions (field definitions, for example, if the entity is fieldable) we can implement hook_update_N(). In this hook, don’t use APIs that require a full Drupal bootstrap (e.g. the database with CRUD actions, services, …); to do this type of update safely we can use the methods proposed by the EntityDefinitionUpdateManagerInterface contract (e.g. updating the entity keys, updating a base field definition common to all bundles, …).
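
For example, installing a new base field storage definition safely from a hook_update_N() could look like this (a sketch; the module, entity type and field names are hypothetical):

    // In mymodule.install; the use statement goes at the top of the file.
    use Drupal\Core\Field\BaseFieldDefinition;

    /**
     * Installs the new 'subtitle' base field on the my_entity entity type.
     */
    function mymodule_update_8101() {
      $definition = BaseFieldDefinition::create('string')
        ->setLabel(t('Subtitle'));
      \Drupal::entityDefinitionUpdateManager()
        ->installFieldStorageDefinition('subtitle', 'my_entity', 'mymodule', $definition);
    }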

To update existing entities or field data (in the case of a fieldable entity) following a modification of a definition, we can implement hook_post_update_NAME(). In this hook you can use all the APIs you need to update your entities.
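
A minimal sketch (the entity type and field names are hypothetical; a real implementation should batch over entities via $sandbox when the data set is large):

    // In mymodule.post_update.php.
    /**
     * Populates the new 'subtitle' field on existing entities.
     */
    function mymodule_post_update_populate_subtitle(&$sandbox) {
      $storage = \Drupal::entityTypeManager()->getStorage('my_entity');
      foreach ($storage->loadMultiple() as $entity) {
        if ($entity->get('subtitle')->isEmpty()) {
          // Fall back to the entity label as a sensible default.
          $entity->set('subtitle', $entity->label());
          $entity->save();
        }
      }
    }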

To update the schema of a simple or complex configuration (a configuration entity), or a schema defined in a hook_schema() hook, we can implement hook_update_N().
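
For a table declared in hook_schema(), the Database API’s schema methods are the usual tool. A sketch (the table and column names are hypothetical):

    // In mymodule.install.
    /**
     * Adds an 'external_id' column to the mymodule_data table.
     */
    function mymodule_update_8102() {
      $spec = [
        'type' => 'varchar',
        'length' => 64,
        'not null' => FALSE,
        'description' => 'Identifier of the record in the remote system.',
      ];
      \Drupal::database()->schema()->addField('mymodule_data', 'external_id', $spec);
    }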

In a custom Drupal project we are often led to create custom content types or bundles of custom entities (something we do not normally do in a contributed module, and rarely do in an installation profile). A site building action allows us to create this type of element, which will then be exported to yml files and deployed to production using the Drupal configuration manager.

A bundle definition is a configuration entity that defines the global schema; as mentioned earlier, we can implement hook_update_N() to update the model in this case. Bundles are instances that persist as Drupal configuration and follow the same schema. To update the bundles, the updated configuration must be exported using the configuration manager so that it can be imported into production later. Several problems can arise:

  • If we add a field to a bundle and want to create content for this field during the deployment, using the current workflow (drush updatedb -> drush config-import) this action is not trivial, since the hook_post_update_NAME() hook is executed before the configuration import.
  • The same problem arises if we want to update fields of bundles that have existing data: hook_post_update_NAME(), which is designed to update existing contents or entities, will run before the configuration is imported. What is the solution for this problem? (We will look at one later in this article.)

Now the question is: How to import default content in a custom Drupal project?

Importing default content for a site is an action which is not well documented in Drupal. In an installation profile this import is often done in the hook_install() hook, because the default content rarely has a complex structure with levels of nested references; in some cases we can use the Default Content module. In a module, however, we can’t create content in a hook_install() hook, simply because when a module is being installed the integrity of the configuration has not yet been imported.

In a recent project I used the drush php-script command to execute import scripts after the (drush updatedb -> drush config-import) sequence, but this command is not always available during the deployment process. The first idea that comes to mind is to subscribe to the event that is triggered after the import of the configuration, so as to create the contents that will be available for the site editors, but using an event for this is not a nice developer experience; hence the introduction of a new hook, hook_post_config_import_NAME(), that runs once after the database updates and configuration import. Another hook, hook_pre_config_import_NAME(), has also been introduced to fix performance issues.
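
As a sketch of how the new hook can be used (note that hook_post_config_import_NAME() is the proposed hook described above, not a standard core hook, so this assumes the supporting code is in place; the bundle and values are hypothetical):

    // The use statement goes at the top of the file.
    use Drupal\node\Entity\Node;

    /**
     * Creates a default landing page once the new bundle's config exists.
     */
    function mymodule_post_config_import_create_defaults() {
      Node::create([
        'type' => 'landing_page',
        'title' => t('Welcome'),
      ])->save();
    }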

A workflow that works for me

To achieve a successful Drupal deployment in continuous integration/delivery cycles using Drush, the most generic workflow that I’ve found at the moment, while waiting for a deployment API in core, is as follows:

  1. drush updatedb
    • hook_update_N(): to update the definition of an entity and its components
    • hook_post_update_NAME(): to update entities after an entity definition modification (entity keys, base fields, …)
  2. hook_pre_config_import_NAME(): CRUD operations (e.g. creating terms that will be taken as default values when importing configuration in the next step)
  3. drush config-import: importing the configuration (e.g. a new bundle field, creation of a new bundle, image styles, image crops, …)
  4. hook_post_config_import_NAME(): CRUD operations (e.g. creating content, updating existing content, …)

This approach works well for us, and I hope it will be useful for you. If you’ve got any suggestions for improvements, please let me know via the comments.

Jan 16 2019

In a past life, I worked as an editor for a couple of B2B websites. The first one ran on Drupal 7, while the second, launched several years after the first, was built out on Drupal 8. When this D8 site was ready for action, I was eager to dive in and test out all of the bells and whistles.

[Image: Drupal admin UI]

While I appreciated much of the streamlining and improved functionalities between the two versions, I couldn’t help but notice that the user interface (UI) for Drupal 8 wasn’t much different from Drupal 7.

As it turns out, I wasn’t the only one who saw the resemblance.

In a recent blog post, Drupal founder Dries Buytaert addressed criticisms from the community regarding the Drupal admin UI. Acknowledging that the interface was dated, he wrote:

This critique is not wrong. Drupal's current administration UI was originally designed almost ten years ago when we were working on Drupal 7. In the last ten years, the world did not stand still; design trends changed, user interfaces became more dynamic and end-user expectations have changed with that.

To address this issue, Dries continued, the Drupal team is going to revamp the Drupal admin UI. Inside and out, the new UI is going to bring Drupal more in line with a modern editing experience.

There have been attempts to fix or update the admin UI before, but this new effort is wider in scope. Contrasted with a component like the Material Admin theme, which is also meant to modernize the backend editing experience, Drupal’s plan for a new admin UI is part of a comprehensive modernization strategy. The Admin UI & JavaScript Modernization initiative seeks to update Drupal’s APIs and underlying JavaScript code. This multi-year initiative’s ultimate goal is to make Drupal both more compatible with decoupled applications and easier to navigate for end users.

While these UI advances sound tantalizing, they’re a bit far off, as this initiative will take several years to complete. As the first phase of this plan, however, the Drupal team has a solution they’re planning on releasing sooner. In this case, they’re taking a cue from the community and creating a new theme known as Claro.

With a beta anticipated to come out in Spring 2019, Claro represents an early version of the Admin UI & JavaScript Modernization Initiative’s goals. Claro will be built along the same design principles that guide the initiative despite being built from the D7 core theme. While this first step is limited to visual improvements, insights gained from its adoption will be used during the larger modernization initiative. By developing this temporary solution, Drupal is giving users a chance to experiment with a version of the new layout while still staying open to feedback.

These gradual technical changes are much anticipated, but the big takeaway is that this isn’t just a cosmetic upgrade. Many of the guiding principles behind the admin UI modernization initiative are rooted in accessibility, responsiveness and collaboration. Thanks to the updated JavaScript-based UI, the interface will be React-powered, making the experience more app-like (read: intuitive). Design elements like iconography and colors will be simplified for easier understanding. While an update of this magnitude could easily have eschewed these concerns and opted for a purely aesthetic redo, signs point to a revamped admin UI that will improve the editorial experience for everyone.

Based on these defined principles and the level of attention already dedicated to the project, the Drupal team’s commitment to improving its user experience cannot be overstated. When users identified an outdated admin UI as an issue, Drupal offered a short-term solution to address the problem while being open about its plans and philosophies for long-term development. By the end of the process, the out-of-the-box Drupal admin UI will be able to compete with other content management systems in terms of usability and appearance.

At Duo, we're committed to making sure that editors can easily and effectively manage their content. If you have any questions about Drupal or any new features on the horizon, please feel free to reach out.

Let's Talk!

Jan 09 2019
Jan 09

There’s no getting around it: Website redesigns are expensive, arduous processes. From project management to quality testing, the challenges involved in undergoing such a design overhaul are legion. Given the costs involved and the extent of labor needed to update any given site, delaying or putting off a major design overhaul is an understandable stance from business owners.  

[Image: Drupal 8 upgrade. Photo by rawpixel]

Drupal sites are not exempt from this reality. For businesses running sites on earlier versions of Drupal, upgrading to Drupal 8 represents a daunting task. With reasoning varying from typical costs to unsupported functionalities, forgoing a Drupal 8 migration has long been the norm. Though Drupal 8 launched back in 2015, adoption has been slow. Globally, there are around 71,000 live sites using Drupal 8 compared to over 333,000 live sites using Drupal 7. 

It’s time for that to change. 

With a suite of new and updated features, Drupal 8 has never been more of an asset. We’ve covered some of the benefits of a D8 migration in the past, but the framework only grows more robust each day. The development community continues to break new ground on Drupal 8, making it a robust solution for all manner of business needs. 

Beyond Drupal 8’s virtues, another, more pressing, reason to upgrade is rearing its head: the end of a lifecycle. As laid out in a blog post, Drupal founder Dries Buytaert has revealed the timetable for the launch of Drupal 9. The new version is expected to launch on June 3, 2020, rather than an initially projected December release. 

Why the accelerated timeline? One of the major dependencies for the current Drupal core is the PHP framework Symfony. Symfony 3, the version Drupal 8 depends on, reaches its end of life in November 2021. To maintain site security, Drupal 8’s end-of-life is tied to this date. To reiterate: by November 2021, everyone should have upgraded to Drupal 9. For everyone running Drupal 7 sites, the same deadline applies.

Luckily, there’s good news. Once your site is upgraded to Drupal 8, moving over to Drupal 9 will be a much easier transition. This is because Drupal 9 is being built on Drupal 8 rather than on an entirely new codebase; the main work that will need to be done mostly involves removing deprecated code and updating dependencies. From a site owner’s standpoint, this means that functionality from Drupal 8 sites will carry over to ones on Drupal 9. With this level of backward compatibility, meeting the deadline of November 2021 will likely be smoother than a full-on website redesign. 

Given the eventual sunsetting of Symfony 3, moving to Drupal 9 is inevitable. The choice to make now is whether to ride out Drupal 7 and then upgrade to 9 before November 2021 or to upgrade to Drupal 8 first. After taking into account the value of Drupal 8, though, it isn’t really much of a choice at all. Easier integrations, a more robust content authoring interface and, crucially, mobile responsiveness are all features that make Drupal 8 sites feel more modern to users and more accessible to owners. Setting all of these bells and whistles aside, a move to Drupal 8 early on will prepare businesses for the eventual, inevitable switch to Drupal 9. Staying ahead of the development curve while adding improved site functionality is a win-win proposition that can put any business ahead.  

In the coming months, we will be taking a deep dive into more of the reasons why upgrading to Drupal 8 is the right move. In the meantime, reach out to us if you have any questions about the process or the challenges involved.

Free D8 Upgrade Worksheet

Jan 03 2019
Jan 03

As we step into 2019, it’s a good time to stop and take a look at what’s been happening with the Drupal editing experience. 2018 seemed to be the “year of the editor” for not only Drupal, but the entire open source CMS world.

 

[Image: Drupal editing trends. Photo by Crew on Unsplash]

Editors need easy-to-use, highly flexible and accessible tools to create beautiful sites. With React.js-based solutions coming of age, alongside more REST-oriented architectures, we can expect the editing experience to evolve dramatically, especially in Drupal.

Merging Technology Between The Giants in CMS with Gutenberg Editor

The greatest thing about open source is that it’s open. One platform can adopt the features of another, and they often do when their aims are as close as those of Drupal and WordPress. Last year the Drupal community adopted a project called Gutenberg for a more pristine editing experience. The editor allows you to have a visual inline content creation experience with text, media and, of course, blocks. Gutenberg was created in coordination with a major WordPress effort to introduce the concept of blocks to their CMS. Since Drupal was built around the concept of blocks from the start, Gutenberg was an easy fit for our community.

[Image: Gutenberg editor in Drupal]

This editor of course enhances the experience on both systems. However, I believe Drupal stands to gain the most from this project as the CMS has a long established history of blocks as a delivery system, though the editing experience has needed help. When you add in all the other complexities Drupal offers, such as Paragraphs, Context and Views, the possibilities are endless.

Drupal and WordPress have always been locked in a great debate: which to use, when, and which does what better. In this case, it’s great to see how different open source communities can enhance each other. Drupal’s concept of blocks is proven hands-down and, conversely, WordPress has long been a favorite for editing content. Even Matt Mullenweg, the founder of WordPress, was heard saying “Drupal is my other favorite open source project” at the 2018 WCUS conference. It’ll be exciting to see what comes of this in 2019 as we progress in Drupal.

Try a Demo of Gutenberg

Layout Builder

Layout Builder is an experimental module still in progress from the Drupal community. It’s a drag-and-drop style editing experience that is geared towards allowing editors the opportunity to create structured content with ease. If you’re familiar with Drupal Panels or Panelizer, you can think of this as a next step from there and in a similar direction as Gutenberg.

According to a Dries Buytaert post, we should see Layout Builder become stable and ready to use by May of 2019. Many of the remaining issues are related to maintaining Drupal’s commitment to accessibility standards: Layout Builder is aiming for Level AA compliance with WCAG and ATAG.

Check out his live demo here.

[Image: Layout Builder in Drupal]

Emerging 3rd Party Page Builders like Glazed

For those who desire more free and wild control, we’re seeing solutions like Glazed surface from 3rd party companies like Sooperthemes. Glazed is a Drupal module that allows you to have granular control over padding, margins, custom animations and so on with a rich feature list.

[Image: Sooperthemes Glazed theme]

While Sooperthemes strongly promotes their pre-built themes, the Glazed module can be installed independently into any theme. Paid modules with this kind of functionality are a bit against the grain in the Drupal community; however, it has even drawn a note of praise from Dries. I recommend testing it out and seeing if it fits your organization's needs.

Try a demo of Glazed

Making Trends Work for You

It’s really awesome to see how far we’ve come, but how to decide which direction to go is always the question. There are a few questions you can ask to help determine what’s best for your company.

  • Do you have the need for consistent and structured content?

  • Does your company have an established brand style guide?

  • How trained are your editors for design and technical editing?

Solutions like Glazed, for example, are very loose and free in what you can do. If you have multiple editors in your company on different teams, tools like this could distract from your brand’s consistency. In the case of needing structured content with some flexibility, I’d keep my eye on what’s to come with Layout Builder. And of course, if you have editors that come from the WordPress side of things, Gutenberg may be a strong choice to minimize new training while still benefiting from Drupal’s core features.

As a front-end Drupal developer I favor structured, component-driven theming patterns. One of the greatest challenges in delivering a custom Drupal theme is meeting the immediate need for an editor to understand how to utilize the tools I’ve made.

I’m a huge fan of Paragraphs, as it offers tons of flexibility for creating reusable components. However, those features and options can get buried in a technical-feeling UI. It’s exciting to have tools like these to tighten the gap between the features we make and the experience our clients have editing in Drupal.

Let's Talk!
Dec 14 2018
Dec 14

A lot of people have been jumping on the headless CMS bandwagon over the past few years, but I’ve never been entirely convinced. Maybe it’s partly because I don’t want to give up on the sunk costs of what I’ve learned about Drupal theming, and partly because I’m proud to be a boring developer, but I haven’t been fully sold on the benefits of decoupling.

On our current project, we’ve continued to take an approach that Dries Buytaert has described as “progressively decoupled Drupal”. Drupal handles routing, navigation, access control, and page rendering, while rich interactive functionality is provided by a JavaScript application sitting on top of the Drupal page. In the past, we’d taken a similar approach, with AngularJS applications on top of Drupal 6 or 7, getting their configuration from Drupal.settings, and for this project we decided to use React on top of Drupal 8.

There are a lot of advantages to this approach, in my view. There are several discrete interactive applications on the site, but the bulk of the site is static content, so it definitely makes sense for that content to be rendered by the server rather than constructed in the browser. This brings a lot of value in terms of accessibility, search engine optimisation, and performance.

By contrast, a fully decoupled system is almost inevitably more complex, with more potential points of failure.

The application can be developed independently of the CMS, so specialist JavaScript developers can work without needing to worry about having a local Drupal build process.

If at some later date, the client decides to move away from Drupal, or at the point where we upgrade to Drupal 9, the applications aren’t so tightly coupled, so the effort of moving them should be smaller.

Having made the decision to use this architecture, we wanted a consistent framework for managing application configuration, to make sure we wouldn’t need to keep reinventing the wheel for every application, and to keep things easy for the content team to manage.

The client’s content team want to be able to control all of the text within the application (across multiple languages), and be able to preview changes before putting them live.

There didn’t seem to be an established approach for this, so we’ve built a module for it.

As we’ve previously mentioned, the team at Capgemini are strongly committed to supporting the open source communities whose work we depend on, and we try to contribute back whenever we can, whether that’s patches to fix bugs and add new features, or creating new modules to fill gaps where nothing appropriate already exists. For instance, a recent client requirement to promote their native applications led us to build the App Banners module.

Aiming to make our modules open source wherever possible helps us to think in systems, considering the specific requirements of this client as an example of a range of other potential use cases. This helps to future-proof our code, because it’s more likely that evolving requirements can be met by a configuration change, rather than needing a code change.

So, guided by these principles, I’m very pleased to announce the Single Page Application Landing Page module for Drupal 8, or to use the terrible acronym that it has unfortunately but inevitably acquired, SPALP.

On its own, the module doesn’t do much other than provide an App Landing Page content type. Each application needs its own module to declare a dependency on SPALP, define a library, and include its configuration as JSON (with associated schema). When a module which does that is installed, SPALP takes care of creating a landing page node for it, and importing the initial configuration onto the node. When that node is viewed, SPALP adds the library, and a link to an endpoint serving the JSON configuration.

Deciding how to store the app configuration and make all the text editable was one of the main questions, and we ended up answering it in a slightly “un-Drupally” way.

On our old Drupal 6 projects, the text was stored in a separate ‘Messages’ node type. This was a bit unwieldy, and it was always quite tricky to figure out what was the right node to edit.

For our Drupal 7 projects, we used the translation interface, even on a monolingual site, where we translated from English to British English. It seemed like a great idea to the development team, but the content editors always found it unintuitive, struggling to find the right string to edit, especially for common strings like button labels. It also didn’t allow the content team to preview changes to the app text.

We wanted to maintain everything related to the application in one place, in order to keep things simpler for developers and content editors. This, along with the need to manage revisions of the app configuration, led us down the route of using a single node to manage each application.

This approach makes it easy to integrate the applications with any of the good stuff that Drupal provides, whether that’s managing meta tags, translation, revisions, or something else that we haven’t thought of.

The SPALP module also provides event dispatchers to allow configuration to be altered. For instance, we set different API endpoints in test environments.
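As an illustration, an event subscriber for that kind of override might look like the sketch below. The event name and the getConfig()/setConfig() methods are assumptions made for the example, as is the myapp module; check the SPALP module’s source for the real event class and its API.

```php
<?php

namespace Drupal\myapp\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;

/**
 * Overrides application configuration per environment.
 *
 * Register this class in myapp.services.yml with the `event_subscriber`
 * tag, as with any Drupal 8 event subscriber.
 */
class MyAppConfigSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents() {
    // 'spalp.config_alter' is an assumed event name for illustration.
    return ['spalp.config_alter' => 'onConfigAlter'];
  }

  /**
   * Points the JavaScript application at a test API endpoint.
   */
  public function onConfigAlter($event) {
    $config = $event->getConfig();
    if (getenv('APP_ENV') !== 'prod') {
      // The endpoint key is part of this app's own JSON config schema.
      $config['api']['endpoint'] = 'https://api.test.example.com';
    }
    $event->setConfig($config);
  }

}
```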

Another nice feature is that in the node edit form, the JSON object is converted into a usable set of form fields using the JSON forms library. This generic approach means that we don’t need to spend time copying boilerplate Form API code to build configuration forms when we build a new application - instead the developers working on the JavaScript code write their configuration as JSON in a way that makes sense for their application, and generate a schema from that. When new configuration items need to be added, we only need to update the JSON and the schema.

Each application only needs a very simple Drupal module to define its library, so we’re able to build the React code independently, and bring it into Drupal as a Composer dependency.

The repository includes a small example module to show how to implement these patterns, and hopefully other teams will be able to use it on other projects.

As with any project, it’s not complete. So far we’ve only built one application following this approach, and it seems to be working pretty well. Among the items in the issue queue is better integration with the configuration management system, so that we can make it clear if a setting has been overridden for the current environment.

I hope that this module will be useful for other teams - if you’re building JavaScript applications that work with Drupal, please try it out, and if you use it on your project, I’d love to hear about it. Also, if you spot any problems, or have any ideas for improvements, please get in touch via the issue queue.

Nov 28 2018
Nov 28

Jul 23, 2018

I recently worked with the Mass.gov team to transition its development environment from Vagrant to Docker. We went with “vanilla Docker,” as opposed to one of the fine tools like DDev, Drupal VM, Docker4Drupal, etc. We are thankful to those teams for educating and showing us how to do Docker right. A big benefit of vanilla Docker is that skills learned there are generally applicable to any stack, not just LAMP+Drupal. We are super happy with how this environment turned out. We are especially proud of our MySQL Content Sync image — read on for details!

Pretty docks at Boston Harbor.

Docker compose

The heart of our environment is the docker-compose.yml file; read on for a discussion of its most interesting parts.

Developers use .env files to customize aspects of their containers (e.g. VOLUME_FLAGS, PRIVATE_KEY, etc.). This built-in feature of Docker is very convenient; our .env.example file documents the available variables.

MySQL content sync image

The most innovative part of our stack is the mysql container. The Mass.gov Drupal database is gigantic. We have tens of thousands of nodes and 500,000 revisions, each with an unholy number of paragraphs, reference fields, etc. Developers used to drush sql:sync the database from Prod as needed. The transfer and import took many minutes, and had some security risk in the event that sanitization failed on the developer’s machine. The question soon became, “how can we distribute a mysql database that’s already imported and sanitized?” It turns out that Docker is a great way to do just this.

Today, our mysql container builds on CircleCI every night. The build fetches, imports, and sanitizes our Prod database; it then commits the result as a new image and pushes it to a private repository on Docker Cloud. Our mysql image is 9GB uncompressed but thanks to Docker, it compresses to 1GB. This image is really convenient to use. Developers fetch a newer image with docker-compose pull mysql. Developers can work on a PR and then when switching to a new PR, do a simple ahoy down && ahoy up. This quickly restores the local Drupal database to a pristine state.

In order for this to work, you have to store MySQL data *inside* the container, instead of using a Docker Volume; the Dockerfile for the mysql image takes care of this.

Drupal image

Our Drupal container is open source — you can see exactly how it’s built. We start from the official PHP image, then add PHP extensions, Apache config, etc.

An interesting innovation in this container is the use of Docker Secrets to safely share an SSH key from the host to the container (see mass_id_rsa in the docker-compose.yml). Also note the two configuration steps below, handled by files mounted into the container:

  • Configure SSH to use the secrets file as the private key
  • Automatically run ssh-add when logging into the container

Traefik

Traefik is a “cloud edge router” that integrates really well with docker-compose. Just add one or two labels to a service and its web site is served through Traefik. We use Traefik to provide nice local URLs for each of our services (www.mass.local, portainer.mass.local, mailhog.mass.local, …). Without Traefik, all these services would usually live at the same URL with differing ports.

In the future, we hope to upgrade our local sites to SSL. Traefik makes this easy as it can terminate SSL. No web server fiddling required.

Ahoy aliases

Our repository features a .ahoy.yml file that defines helpful aliases (see below). In order to use these aliases, developers download Ahoy to their host machine. This helps us match one of the main attractions of tools like DDev/Lando — their brief and useful CLI commands. Ahoy is a convenience feature and developers who prefer to use docker-compose (or their own bash aliases) are free to do so.

Bells and whistles

Our development environment comes with 3 fine extras:

  • Blackfire is ready to go — just run ahoy blackfire [URL|DrushCommand] and you’ll get back a URL for the profiling report
  • Xdebug is easily enabled by setting the XDEBUG_ENABLE environment variable in a developer’s .env file. Once that’s in place, the PHP in the container will automatically connect to the host’s PHPStorm or other Xdebug client
  • A chrome-headless container is used by our test suite, which incorporates Drupal Test Traits — a new open source project we published. We will blog about DTT soon; a sketch of a DTT test follows below
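For a flavour of what Drupal Test Traits enables (tests that run against the already-installed site, with no site install per test run), here is a minimal sketch built on the project’s ExistingSiteBase class. The namespace, test name, and assertions are illustrative.

```php
<?php

namespace Drupal\Tests\mass\ExistingSite;

use weitzman\DrupalTestTraits\ExistingSiteBase;

/**
 * Illustrative test: bootstrap the existing site, request a page, assert.
 */
class HomePageTest extends ExistingSiteBase {

  public function testFrontPageLoads() {
    // drupalGet() and assertSession() behave as in core browser tests.
    $this->drupalGet('<front>');
    $this->assertSession()->statusCodeEquals(200);
  }

}
```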

Wish list

Of course, we are never satisfied. There are still a couple of issues we want to tackle.

Nov 12 2018
Nov 12

I've talked with hundreds of Drupal professionals over the years (probably thousands at this point), and whether or not it's intentional, I think we fall into a pattern. When we talk about the benefits of the platform — specifically Drupal 8 — and what it can do for organizations, we often talk about it in relation to either the front-end user or the website developer.

[Image: Drupal 8 content editor. Photo by Christin Hume]

While both of those audiences are incredibly important, there's a flaw in that tendency. The users and developers are not the ones spending the most time in Drupal on a day-to-day basis — that would be the content editors. Content editors are the men and women who take the created content and publish it online. They push updates to the site, and they are the ones who more or less manage its content.

We've talked about the benefits of Drupal 8 in the past, but today, I want to focus on how the platform specifically impacts content editors. Drupal 8 has made great improvements to the admin interface and to the overall user experience for content editors. Here are some of the best benefits, in my opinion.

Quick Edit

Our clients who have already migrated from Drupal 7 to Drupal 8 would be completely happy if in-line editing were the only new benefit for content editors. This seemingly simple feature is receiving rave reviews, and it's no wonder why. In the past, it was such a pain to click through to each individual piece of content you wanted to edit. Now, to edit content, you technically don't even need to go into the back end.

The Quick Edit feature, which also comes out of the box in Drupal Core, gives content editors the opportunity to edit directly on the front end of a site. If you have a quick change, you can edit the content on the live site, save it and move on to something else. It's so easy, and so impactful.


CKEditor

CKEditor is a new WYSIWYG editor that is far superior to previous Drupal editors, and it is included in Drupal Core. One of the nice elements is a drag-and-drop interface. Speaking of CKEditor, the CKEditor Media Embed Module allows content editors to embed external resources such as videos, images, tweets and more via the editor.


Customization

One of the hallmarks of Drupal has been that you can customize it to fit your needs. The same is true for content editors. My colleague Michael Girgis wrote about how Drupal 8 sites can now use the Material Admin theme that makes for a more pleasant visual appearance — it reflects the styles of Google Material Design Language. That's one way to customize the experience for content editors. Similarly, the Gutenberg editor — which is scheduled for a stable release in December — gives editors a new publishing experience without needing any code.

In addition, the WYSIWYG editor that comes out of the box can be customized based on an editor's need, including hiding some features that are rarely or never used by content editors.


Responsive interface

It's obviously important for websites to be responsive today, but Drupal 8 also makes the back-end admin interface responsive, which is an enormous improvement over earlier versions of Drupal. The reality is that critical updates are necessary when people are away from their computer. Having the toolbar and editor be mobile optimized makes it much easier for editors to make site updates from their phone or other mobile device.

Content editors are the people who spend the most time in Drupal. Drupal 8 makes sure that time is now more enjoyable than ever before.

Let's Talk

Oct 23 2018
Oct 23

PHP 5.6 will officially stop receiving security fixes on December 31, 2018. This software has not been actively developed for a number of years, but people have been slow to jump on the bandwagon. Beginning in the new year, no bug fixes will be released for this version of PHP, which opens the door to a dramatic increase in security risks if you are not beginning the new year on a version of PHP 7. PHP 7 was released back in December 2015 and PHP 7.2 is the latest version that you can update to. PHP skipped version 6, so don't even try searching for it.

Drupal 8.6 is the final Drupal version that will support PHP 5.6. Many other CMSs will be dropping their support for PHP 5.6 in their latest versions as well. But just because PHP 5.6 is supported in a given version does not mean that you will be safe from security bugs; you will still need to upgrade your PHP version before December 31, 2018. In addition to the security risks, you have already been missing out on many improvements that have been made to PHP.

What Should You Do About This?

You are probably thinking “Upgrade, I get it.” It may actually be more complicated than that, and you may need to refactor. 90-95% of your code should be fine. The version of your CMS may affect the complexity of your conversion; most major CMSs will handle PHP 7 right out of the box in their most recent versions.
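As a hedged example of the kind of refactor involved: the old mysql_* extension was removed entirely in PHP 7, so any code still using it must move to PDO or mysqli. A sketch of that change (connection details, table, and variable names are illustrative):

```php
<?php
// PHP 5.x code using the mysql extension, which is gone in PHP 7:
//   $link = mysql_connect('localhost', 'user', 'pass');
//   $result = mysql_query('SELECT * FROM users WHERE id = ' . $id);

// PHP 7 replacement using PDO and a prepared statement:
$id = 42; // Illustrative value.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare('SELECT * FROM users WHERE id = :id');
$stmt->execute([':id' => $id]);
$row = $stmt->fetch(PDO::FETCH_ASSOC);
```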

By upgrading to a version of PHP 7, you will see a variety of performance improvements, the most dramatic being speed. Zend Technologies, the company behind PHP's engine, ran performance tests on a variety of PHP applications to compare PHP 7 against PHP 5.6. These tests compared requests per second across the two versions, which reflects the speed at which code is executed and how fast queries to the database and server are returned. The tests showed that PHP 7 runs twice as fast, with additional improvements in memory consumption.

How Can Mobomo Help?

Mobomo’s team is highly experienced, not only in assisting with your conversion, but with the review of your code to ensure your environment is PHP 7 ready.  Our team of experts will review your code and uncover the exact amount of code that needs to be converted. There are a good number of factors that could come into play and affect your timeline. The more customizations and smaller plugins that your site contains, the more complex your code review and your eventual conversion could be. Overall, depending on the complexity of the code, your timeline could vary but this would take a maximum of 3 weeks.

Important Things to Know:

  1. How many contributed modules does your site contain?
  2. How many custom modules does your site contain?
  3. What does your environment look like?
Sep 20 2018
Sep 20

As we enter the month of September and start planning for 2019, it’s a good time to take stock of where Drupal is as a project and see where it’s headed next year and beyond.

Dries Buytaert, founder of Drupal, recently wrote his thoughts around the timeline for Drupal 9 and “end of life” for Drupal 7 and 8. We will look at current Drupal 8 adoption and assess where we sit in 2019 as well.

An important part of this discussion that deserves attention is the rise of Javascript as a programming language, in particular, the rise of React.js. This technology has put CMSs like Drupal in an interesting position. We will look at how React/Javascript are evolving the web and assess what that means for the future of Drupal.

Finally, we will wrap up with thoughts on what these changes mean for both developers and organizations that use Drupal today or evaluating Drupal.

Drupal 8 Adoption

As mentioned previously, Dries has offered his thoughts on the proposed timeline for Drupal 9 in a recent blog entry on his website (see below).

timeline: Drupal 9 will be released in 2020. Drupal 8 end of life is planned for 2021

In early September, Drupal released version 8.6.0, which included major improvements to the layout system, new media features, and better migration support. This is in addition to many other improvements that have been released since Drupal 8.0 was first unveiled in late 2015.

In terms of adoption, Drupal has picked up steam with 51% growth from April 2017 to April 2018.

Dries Keynote Drupalcon 2018

graph showing that as of April 2018, 241,000 sites run on Drupal

As encouraging as that news is, it should still be noted that Drupal 7’s popularity far exceeds Drupal 8’s, both in current usage (800k compared to 210k+ sites) and in terms of growth year over year.

Drupal’s weekly project usage from August, 2018

graph shows Drupal 7 adoption exceeds that of Drupal 8

Drupal 7 will likely reach its end of life around November 2021, with commercial vendors extending its lifetime through paid support (as was the case with Drupal 6). Will Drupal 8 reach the level of usage and popularity D7 has? Perhaps not, but that is largely due to its focus on more robust, “enterprise” level features.

Drupal as a CMS sits largely in between Wordpress and enterprise proprietary CMSs like Adobe CMS and Sitecore in the marketplace. With the release of Drupal 8, the project moved more into the direction of enterprise features (which could explain some of the fall-off in adoption).

Pantheon had two excellent presentations (also at Drupalcon Nashville) that dive deeper into Drupal’s position in relation to other projects, most notably Wordpress. I would recommend watching WordPress vs Drupal: How the website industry is evolving and What's possible with WordPress 5.0 for more information on this topic.

According to builtwith.com, Drupal still has a sizable chunk of Alexa’s Top Million Sites. It should also be noted that Drupal does better the higher you go up the list of those sites, which underscores the project’s focus on the enterprise.

CMS market share (builtwith.com)

5% of Alexa's Top Million sites run on Drupal

Drupal usage statistics (builtwith.com)

8.5% of the top 10k websites run on Drupal

With the release of Drupal 8, Drupal’s target audience started consolidating more towards the enterprise user. In the future Drupal’s success as a project will be tied more closely to performance against platforms like Adobe CMS and Sitecore in the marketplace.

React (and Javascript) take over the world

The thing about Javascript is that it’s been around forever (in tech terms) but recently has taken off. I won’t detail all the reasons here. Seth Brown from Lullabot has one of the best write-ups I have seen from a Drupal community perspective. In short, the ability now to run Javascript both in the browser and on the server (Node.js) has led the surge in Javascript development. Github shows us that more projects are built with Javascript than any other technology and Stack Overflow’s survey tells us that Javascript is the current language of choice.

Stack Overflow 2018 survey results

Javascript is the most popular language

Github projects 2018

2.3 million Javascript Github projects

Dries recognizes the importance of Javascript and has spoken about this recently at MIT. In a bit, we will look at some of Dries’ ideas for the future in relation to the Drupal project.

A few years ago we saw several Javascript frameworks pop up. These became very popular for single page applications (SPA) but also had broader appeal because they could make any website feel more interactive. React.js and Ember.js both rose to prominence around 2015, and Angular.js is older but started getting more attention around the same time.

A big issue that needed to be solved with these frameworks was how to address SEO. Initially, these frameworks only rendered the page in the browser which meant site content was largely hidden from search engines. For SPA’s this was not necessarily a deal breaker but this limited the broader adoption of this technology.

Only fairly recently have we seen solutions that are able to use the same framework to serve pages both in the browser and on the server. Why do I bring this up? Because this has been one of the more difficult challenges and React.js addresses it better than any other framework. There are many reasons why React.js adoption is exploding but this is why I believe React is king.

The State of Javascript report from 2017 is often referenced to illustrate React’s popularity (see below):

survey respondents indicated that they have used and would use Javascript again - more than other technologies

John Hannah also has some great graphs on javascriptreport.com that demonstrate React’s dominance in this space (see below).

Npm downloads by technology (1 month)

React downloads (1 month) exceed angular and vue

Npm downloads by technology (1 year)

React downloads (1 year) exceed angular and vue

Finally, it should be noted that GraphQL, a Facebook technology commonly paired with React.js, is also on the rise, and its growth is intertwined with React's. GraphQL will come into play when we look at how CMSs are adapting to the surge in Javascript and front-end frameworks.

React and the CMS

Is React compatible with CMSs of ‘ole (e.g. Wordpress, Drupal, etc.)? Well, yes and no. You can integrate React.js with a Drupal or Wordpress theme like you can many other technologies. In fact, it’s very likely that Drupal’s admin interface will run on React at some point in the future. There is already an effort underway by core maintainers to do so. Whether or not the admin will be fully decoupled is an open question. 

Another example of React admin integration is none other than Wordpress’ implementation of React.js to create the much anticipated Gutenberg WYSIWYG editor.

Gutenberg editor

screenshot of empty Gutenberg editor

In terms of websites in the wild using React with Drupal, there have been solutions out there (TWC, NBA, and others) for many years that use Drupal in a “progressively decoupled” way. The “progressive” approach will still exist as an option in years to come. Dries wrote about this recently in his blog post entitled “How to decouple Drupal in 2018.”

The problem I have with this type of solution is that sometimes you get the best (and worst) of both worlds trying to bolt on a Javascript framework onto a classic templating system. The truth is that Drupal’s templating theme layer is going to have trouble adapting to the new world we now live in (addressed in detail at Drupalcon’s “Farewell to Twig”). 

The real power of React is when you can combine it with GraphQL, React router and other pieces to create a highly performant, interactive experience that users will demand in years to come. To accomplish this type of app-like experience, developers are increasingly looking to API’s to address this dilemma, which we will examine next.

CMS as an API

The last couple of years there have been many Cloud CMS-as-an-API services pop up that have been generating some attention (Contentful might be the most popular). At this time it doesn’t appear that these API’s have disrupted market share for Wordpress & Drupal but they do signify a movement towards the idea of using a CMS as a content service. 

The “Decoupled” movement in the Drupal community (previously known as “Headless”) has been a big topic of conversation for a couple of years now. Mediacurrent’s own Matt Davis has helped organize two “Decoupled Days” events to help the Drupal community consolidate ideas and approaches. Projects like Contenta CMS have helped advance solutions around a decoupled architecture. Dries has also addressed Drupal’s progress towards an “API-first” approach recently on his blog.

While cloud services like Contentful are intriguing, there is still no better content modeling tool than Drupal. Additionally, Drupal 8 is already well on its way to supporting JSON API and GraphQL, with the potential to move those modules into core in the near future.
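For a sense of what “Drupal as a content service” looks like in practice, here is a sketch of fetching article nodes from a Drupal 8 site running the JSON API module, using the Guzzle HTTP client; the site URL is a placeholder.

```php
<?php

use GuzzleHttp\Client;

// Fetch article nodes from a Drupal 8 site with the JSON API module
// enabled. The base URL is illustrative.
$client = new Client(['base_uri' => 'https://example.com']);
$response = $client->get('/jsonapi/node/article', [
  'headers' => ['Accept' => 'application/vnd.api+json'],
]);
$document = json_decode((string) $response->getBody(), TRUE);

foreach ($document['data'] as $article) {
  // Each resource object exposes its fields under 'attributes'.
  echo $article['attributes']['title'], PHP_EOL;
}
```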

As I look at the landscape of the modern technology stack, I believe Drupal will flourish in the enterprise space as a strong content API paired with the leading Javascript Frontend. React & GraphQL have emerged as the leading candidates to be that Frontend of record.

Next, we will look at a relatively new entrant to the family, JAM stacks, and see where they fit in with Drupal (if at all?) in the future.

JAMStacks - The Silver Bullet?

The popularity of Netlify hosting and static generators has created some buzz in the Drupal community, particularly Gatsby.js, which we will examine in a moment.

Netlify provides some great tooling for static hosted sites and even offers its own cloud CMS. Mediacurrent actually hosts our own website (Mediacurrent.com) on Netlify. Mediacurrent.com runs on Jekyll which integrates with a Drupal 8 backend so we are well aware of some of the benefits and drawbacks of running a static site.

Where Drupal fits into the JAM stack is as the ‘A’ (for API), with ‘J’ being the Javascript Frontend (i.e. React) and ‘M’ being the statically generated markup. Back in 2016 we liked this idea and settled on Jekyll as the tool of choice for our rebuild as it was the most popular and well supported project at the time.

Since then Gatsby.js has risen dramatically in popularity and has a robust source plugin system that enables it to be used as a Frontend for many platforms including Drupal and Wordpress.

The creator of Gatsby, former Drupal developer Kyle Matthews, recently spoke on the subject at Decoupled Days 2018. While it’s hard to know whether JAM stacks like Gatsby will have staying power in the years ahead, they have a lot of appeal in that they simplify many of the decoupled “hard problems” developers commonly run into. The question of scalability is an important one yet to be answered completely, but the upside is tremendous. In a nutshell, Gatsby provides an amazingly performant React/GraphQL solution that can pull in content from practically any source (including Drupal).

Do JAM stacks like Gatsby have staying power? Will these close the complexity gap that blocks more sites (large or not) from decoupling? We will have to stay tuned but the possibilities are intriguing.

Looking Ahead

We have examined the state of Drupal as a project, future release plans and how it is adapting towards a future that is “API First” that also fits well with the React world in which we now live. 

The main takeaway I would offer here is that Drupal, while still an amazing tool for managing content, is better suited as a technology paired with a leading Frontend like React. With the web evolving from monolithic systems to more of a services-type approach, it makes sense to use a best-in-class content modeling tool like Drupal with a best-in-class FE framework like React.js. 

What does that mean for the average Drupal developer? My advice to Drupal developers is to “Learn Javascript, deeply.” There is no time like the present to get more familiar with the latest and greatest technology including GraphQL.

For organizations evaluating Drupal, I do think the “Decoupled” approach should be strongly considered when planning your next redesign or build. That being said, it’s important to have an understanding of how the pieces fit together as well as the challenges and risk to any approach. This article attempts to present a high-level overview of the technology landscape and where it’s headed but every organization’s needs are unique. At Mediacurrent we work with clients to educate them on the best solution for their organization. 

Have questions or feedback? Hit me up at https://twitter.com/drupalninja/

Aug 22 2018
Aug 22

At Mediacurrent, we hear a lot of questions — from the open source community and from our customers — about website accessibility. What are the must-have tools, resources, and modules? How often should I test? To address those and other top FAQs, we hosted a webinar with front end developers Ben Robertson and Tobias Williams, back end developer Mark Casias, and UX designer Becky Cierpich.

The question that drives all others is this: why should one care about web accessibility? To kick off the webinar, Ben gave a compelling answer. He covered many of the topics you’ve read about on the Mediacurrent blog: introducing the WCAG Web Content Accessibility Guidelines, some of the benefits of website accessibility (including improved usability and SEO) and the threats of non-compliance.

[embedded content]

Adam Kirby: Hi everybody, this is Adam Kirby. I'm the Director of Marketing here at Mediacurrent. Thanks everyone for joining us. Today we're going to go over website accessibility frequently asked questions. 

Our first question is: 

Are there automated tools I can use to ensure my site is accessible and what are the best free tools? 

Becky Cierpich: Yes! Automated tools that I like to use —and these are actually all free tools— are WEBAIM’s WAVE tool, you can use that as a browser extension. There's also Chrome Accessibility Developer Tools and Khan Academy has a Chrome Plugin called Tota11y. So with these things, you can get a report of all the errors and warnings on a page. Automated testing catches about 30 percent of errors, but it takes a human to sort through and intellectually interpret the results and then determine the most inclusive user experience. 

What's the difference between human and automated testing? 

Becky Cierpich: Well, as I said, the automated tools can catch 30 percent of errors and we need a human on the back end of that to interpret. And then we use the manual tools, things like ChromeVox or VoiceOver for Mac. Those are some things you can turn on if you want to simulate a user experience from the auditory side, and you can do keyboard-only navigation to simulate that experience. Those things will really help you get behind the wheel of what another user's experiencing and catch any errors in the flow that may have come from, you know, the code not being up to spec.

Then we also have color contrast checkers. WEBAIM has a good one for that and all these are free tools and they can allow you to test one color against another. You can verify areas that have too little contrast and test adjustments that'll fix the contrast. 

What do the terms WCAG, W3C, WAI, Section 508, and ADA Title III mean? 

Mark Casias: I'll take that one - everybody loves a good acronym. WCAG, these are Web Content Accessibility Guidelines. This is the actual document that gives you the ideas of what you need to change or what you want to a base your site on. W3C stands for World Wide Web Consortium - these are the people who control the web standardization. WAI is the Web Accessibility Initiative and refers to the section of the W3C that focuses on accessibility. 

Finally, Section 508 is part of the Rehabilitation Act of 1973 (well, it was added to that act in 1998) and requires Federal agencies to make their electronic and information technology accessible to people with disabilities. ADA Title III is part of the Americans with Disabilities Act which focuses on private businesses; it mandates that they be fully accessible to individuals with disabilities.

 What are the different levels of compliance? 

Tobias Williams: Briefly, the WCAG Web Content Accessibility Guidelines tell us that there are three levels - A, AA, and AAA, with AAA being the highest level of compliance. These are internationally agreed to, voluntary standards. Level A has the minimum requirements for the page to be accessible. Level AA builds on the accessibility of level A, examples include consideration of navigation placement and contrast of colors. Level AAA again builds on the previous level - certain videos have sign language translation and improved color contrast. 

To meet the standard of each level there are 5 requirements that are detailed on the WCAG site. Every actionable part of the site has to be 100% compliant. WCAG doesn't require that a claim to a standard be made, and these grades are not specified by the ADA, but they are often referenced when assessing how accessible a site is.

Should we always aim for AAA standards?

Ben Robertson: How we approach it is we think you should aim for AA compliance. That's going to make sure that you're covering all the bases. You have to do an internal audit of who are your users and what are your priorities and who you're trying to reach. And then see out of those, where do the AAA guidelines come in and where can you get the biggest bang for your buck? I think that the smart way to do it is to prioritize. Because when you get to the AAA level, it can be a very intense process, like captioning every single video on your site. So you have to prioritize. 

What Drupal modules can help with accessibility?

Mark Casias: Drupal.org has a great page that lists a good portion of these modules for accessibility.  One that they don't have on there is the AddtoAny module that allows you to share your content. We investigated this for our work on the Grey Muzzle site and found this was the most accessible option. Here are some other modules you can try:

  • Automatic Alternative Text - The module uses the Microsoft Azure Cognitive Services API to generate alternative text for images when none has been provided by the user.
  • Block ARIA Landmark Roles - Inspired by Block Class, this module adds additional elements to the block configuration forms that allow users to assign an ARIA landmark role to a block.
  • CKEditor Abbreviation - Adds a button to CKEditor for inserting and editing abbreviations. If an existing abbr tag is selected, the context menu also contains a link to edit the abbreviation.
  • CKEditor Accessibility Checker - The CKEditor Accessibility Checker module enables the Accessibility Checker plugin from CKEditor.com in your WYSIWYG.
  • High contrast - Provides a quick solution to allow the user to switch between the active theme and a high contrast version of it. (Still in beta)
  • htmLawed - The htmLawed module uses the htmLawed PHP library to restrict and purify HTML for compliance with site administrator policy and standards and for security. Use of the htmLawed library allows for highly customizable control of HTML markup.
  • Siteimprove - The Siteimprove plugin bridges the gap between Drupal and the Siteimprove Intelligence Platform.
  • Style Switcher - The module takes the fuss out of creating themes or building sites with alternate stylesheets.
  • Text Resize - The Text Resize module provides your end-users with a block that can be used to quickly change the font size of text on your Drupal site.

At what point in a project or website development should I think about accessibility? 

Becky Cierpich: I got this one! Well, the short answer is always and forever. Always think about accessibility. I work a lot at the front end of a project doing strategy and design, so what we try to do is bake it in from the very beginning. We'll take analytics data, and then we can get to know the audience that way. That's how you can plan and prioritize your features. If you want to do AAA features, you can figure out who your users are before you go ahead and plan that out. Another thing we do is look at personas. You can create personas that have limitations, and that way, when you go in and design, you can be sure to capture people who might be challenged by things like a temporary disability, a slow Internet connection or color blindness - things that people don't necessarily even think of as disabilities.

I would also say don't worry if you already have a site and you know it's definitely not compliant, or you're not sure, because Mediacurrent can come in and audit, using the testing tools to interpret and prioritize, and slowly you can get up to speed over time. It's not something that you have to do overnight.

How often should I check for accessibility compliance? 

Tobias Williams: I’ll take this one - I also work on the front end, implementing Becky’s designs. When you're building anything new for a site, you should be accessibility testing as you test your work. During cross-browser testing, we should also be checking that our code meets the accessibility standards we are maintaining.

Now that's easy to do in a new development cycle, because you're in the process of building, but less so on a product that already exists. I would say anytime you make any kind of change, or you're focused on any particular part of the site, run a quick accessibility check. Then even if you don't address the issues straight away, at least you're aware of them; you can document them and work on them later. As far as an in-production site where you have a lot of content creators, or where independent groups work on features, it is also a good idea to run quarterly spot checks.

I've seen these on a few sites, but what is an accessibility statement and do I need one?

Becky Cierpich: An accessibility statement is similar to something like a privacy agreement. It's a legal document and there are templates to do it. It basically states clearly what level of accessibility the website is targeting. If you have any areas that still need improvement, you can acknowledge those and outline your plan to achieve those goals and when you're targeting to have that done. It can add a measure of legal protection while you're implementing any fixes. And if your site is up to code, it's a powerful statement to the public that your organization is recognizing the importance of an inclusive approach to your web presence. 

What are the legal ramifications of not having an accessible website? 

Ben Robertson: I'll jump in here, but I just want to make a disclaimer that I'm not a lawyer. Take what I say with several grains of salt! This whole space is pretty new in terms of legal requirements. The landmark case involved Winn-Dixie, the grocery store chain - it was filed under Title III of the ADA, and Winn-Dixie lost. It was brought by a blind customer who could not use their website. The court order is available online and it's an interesting read, but basically, no damages were sought in the case. The court ordered that they had to have an accessibility statement saying that they would follow WCAG 2.0. That's a great refresher for site editors to make sure that they're following best practices. They also mandated quarterly automated accessibility testing.

I really think if you have these things in place already, you're really gonna mitigate pretty much all your risk. You can get out in front of it if you have a plan. 

If I have an SEO expert, do I need an accessibility expert as well? 

Tobias Williams: I'll explain what we do at Mediacurrent. We don't have one person who is an expert; we have a group of people. We have several developers, designers and other people on the team who are just interested in the subject. We meet once a week, and we have a Slack channel where we just talk about accessibility. There are people who are most familiar with different aspects of it, and that allows us to be better rounded as a group.

I can see somebody being hired to be an accessibility expert, but I think that the dialogue within a company about this issue is most important. The more people who are aware of it, the better; you can prevent problems before they occur. So, if I'm aware of the accessibility requirements of an item I'm building, I'm going to build it the right way, as opposed to it having to be reviewed by the expert and then making changes. The more people who are talking about it and involved in it, the higher the general level of knowledge, and that goes a long way. We don't need to have experts as much as we need to have interested people.

As a content editor, what's my role in website accessibility?

Mark Casias: Your role is very important in website accessibility. All the planning and site building that I do [as a developer] won't mean a thing if you don't attach alt text to your images, or if you use a bunch of H1 title tags because you want the font size to be bigger, and things like that. Content editors need to be aware of what they're doing and its impact on accessibility. They need to know the requirements, and they need to make sure that their information keeps the website rolling in the right direction. Check out Mediacurrent’s Go-To-Guide for Website Accessibility for more on how to do this.

Some of the technical requirements for accessibility seem costly and complex. What are the options for an organization?

Ben Robertson: Yeah, I totally agree. Sometimes you will get an accessibility audit back and you just see a long list of things that are red and wrong. It can seem overwhelming. There are really a couple of things to keep in mind here. One, you don't have to do everything all at once: you can create an accessibility statement, create a plan, and start working through that plan. Two, it really helps to have someone with experience, or an experienced team, to help you go through this process. There can be things that are a very high priority but very easy to fix, and there can be things that are a low priority.

You can also think about it this way: if you got a report from a contractor that something you're building was not up to code, you would want to fix that. And so this is kind of a similar thing. People aren't going to be injured from using your website if it's inaccessible but it's the right thing to do. It's how websites are supposed to be built if you're following the guidelines, and it's really good to help your business overall. 

How much does it cost to have and maintain an accessible site? Should I set aside budget just for this? 

Adam Kirby: You will want to set aside a budget to create an accessible site. It is an expense. You're going to have to do a little bit more for your website in order to make sure it's successful. You're going to have to make changes. How much does it cost? That will vary and depend on where you are with your site build; if it’s an existing site, if you're launching a new site, the amount of content on your site and the variability of content types. So, unfortunately, the answer is it just depends. 

If you need help with an accessibility audit, resolving some known issues on your site, or convincing your leadership to take action on website accessibility, we’re here for you.  

Webinar Slides and Additional Resources 

Aug 20 2018
Aug 20

I recently had a meeting with a client who has operated their website on Drupal 7 for the past few years, and they wanted to know why they should upgrade their site from Drupal 7 to Drupal 8. As I worked on answering their question, it became clear to me that there are 8 distinct reasons why it makes sense for businesses and organizations to make the leap from D7 to D8.

Image: eight reasons for Drupal 8 (photo by Franck V.)

Here are those 8 reasons:

1 - Ease of use

Historically, users have complained about the back-end experience of managing and editing Drupal websites. Thanks to Drupal 8, those complaints are now a thing of the past. First off, D8 comes with a WYSIWYG editor designed specifically for Drupal’s use. Beyond the traditional basics — like buttons for bold, italic, hyperlinks, and so on — there are added extras, such as easily editable image captions that come with the editor’s new Widgets feature.

Site admins can also perform inline editing, meaning they can look at the front-end of their site and make edits in real-time so that they can instantly see what their edits look like. No longer do you need to exclusively go to the back-end to update or edit content. This gives admins far more flexibility than ever before.

Additionally, Drupal 8 has a Material Admin theme available that was designed using the rules Google developed for all of its products. This means that as Drupal admins manage their content, they can have an experience that resembles working with a Google product — which can make editing more familiar and more comfortable.

2 - Mobile first

Drupal 8 is fully responsive out-of-the-box. There is no customization needed to make that happen. After all, it’s 2018 — any web product should be fully responsive. That means that when your site is built on D8, elements like menus, blocks, and even images will automatically reshape to work and look good on any screen size.

Additionally, D8 features a mobile-friendly toolbar with multiple benefits. First off, it makes it possible for admins to manage content from their smartphone or other mobile device. More importantly, the toolbar was designed with accessibility in mind, making it easier for screen readers to navigate to different parts of a site.


3 - Performance

If you have a website, you want it to perform well. That’s obvious. And good performance often refers to fast load times. What Drupal 8 has that Drupal 7 doesn’t is BigPipe technology, which was first developed at Facebook. BigPipe gives your site far better perceived front-end performance because of how it loads and caches content. Here is a video that our CTO Rich Lawson likes to use as an example of how BigPipe impacts a site.

BigPipe is part of Drupal core, so when you upgrade to Drupal 8, you will benefit from it immediately.


4 - Introducing TWIG

Drupal 8 introduces Twig, a widely adopted theme system in the PHP world, to Drupal. Twig’s syntax is simpler, and Twig is more secure than the PHP template-based theme system in Drupal 7 and below that it replaces. It allows designers and themers with HTML/CSS knowledge to modify markup without needing to be a PHP expert and with almost no risk of their actions causing security issues on your site.


5 - Migration plan

With Drupal 8, the process of updating the platform has changed, and it’s a change that benefits users, admins and Drupal itself. Historically, upgrading to a new version of Drupal (like Drupal 6 to Drupal 7) required an entire rebuild. It was time consuming and it was expensive. Starting with Drupal 8, that process changes. Upgrading from D7 to D8 still necessitates that total rebuild, but after that, you won’t need to rebuild when a new version of the platform comes out. So when Drupal 9 is unveiled, you won’t need the same type of overhaul as you’ve done in the past.

Beyond that, Drupal has changed to a new release cycle. Now, Drupal offers monthly bug fixes and security releases (for example, 8.0.1, 8.0.2, etc.) and semi-annual minor releases of Drupal core (so 8.1.0, 8.2.0 and so on). This means that if there is a bug with the platform, or a certain module isn’t working in the latest version, no longer do you have to wait years for the new version to be released. Instead, your wait time is only a couple of months, if that.

Additionally, modules are now compatible between immediate major versions, so you don’t ever have to say, “I cannot upgrade to Drupal 9 because my favorite module is only in Drupal 8.”


6 - Extensibility

Flexible content delivery is another key tenet of D8. That flexibility extends to integrations, one of the platform’s hallmarks and key differentiators. Drupal is a great foundation for web content management and digital experience management because it enables integrations with your best-of-breed technologies. Drupal 8 gives you the ultimate freedom and flexibility to choose what technologies you want to use.


7 - Future proofing

Drupal 8 makes it possible to access your content beyond the confines of a traditional screen. Drupal enables you to create and deliver content to any channel, device or application. That means Drupal makes it easier to share your content with mobile apps, voice recognition devices like Google Home or Amazon’s Alexa, or even Internet of Things devices (like your smart fridge, for example). You may not need that functionality now, but down the road, you may like the option.


8 - Commerce 2.X

The Drupal Commerce 2 module has been curated by commerce experts, and its out-of-the-box functionality can stand up against any proprietary platform. New features will be introduced into core through micro-updates, and major migrations will be a thing of the past, just as they are with Drupal 8 and beyond.

The Drupal community’s attention is on Drupal Commerce 2, just as the community is focused on D8 instead of D7. If you’re operating with Drupal Commerce 1, or a different e-commerce module, you’re not guaranteed the same type of maintenance and security as you would get with Drupal Commerce 2. When you’re selling a product or service through your website, you want to make sure it is up to date on security and functionality.

Those are my eight reasons why I told our client they should upgrade to D8. Did I miss something? Is there something else about D8 that you want to know about? Shoot me a note and let me know.

Let's Talk

Jul 26 2018
Jul 26
July 26th, 2018

Intro

In this post, I’m going to run through how I set up visual regression testing on sites. Visual regression testing is essentially the act of taking a screenshot of a web page (whether the whole page or just a specific element) and comparing that against an existing screenshot of the same page to see if there are any differences.

There’s nothing worse than adding a new component, tweaking styles, or pushing a config update, only to have the client tell you two months later that some other part of the site is now broken, and you discover it’s because of the change that you pushed… now it’s been two months, and reverting that change has significant implications.

That’s the worst. Literally the worst.

All kinds of testing can help improve the stability and integrity of a site. There’s Functional, Unit, Integration, Stress, Performance, Usability, and Regression, just to name a few. What’s most important to you will change depending on the project requirements, but in my experience, Functional and Regression are the most common, and in my opinion are a good baseline if you don’t have the capacity to write all the tests.

If you’re reading this, you probably fall into one of two categories:

  1. You’re already familiar with Visual Regression testing, and just want to know how to do it
  2. You’re just trying to get info on why Visual Regression testing is important, and how it can help your project.

In either case, it makes the most sense to dive right in, so let’s do it.

Tools

I’m going to be using WebdriverIO to do the heavy lifting. According to the website:

WebdriverIO is an open source testing utility for nodejs. It makes it possible to write super easy selenium tests with Javascript in your favorite BDD or TDD test framework.

It basically sends requests to a Selenium server via the WebDriver Protocol and handles its response. These requests are wrapped in useful commands and can be used to test several aspects of your site in an automated way.

I’m also going to run my tests on BrowserStack so that I can test IE/Edge without having to install a VM or anything like that on my Mac.

Process

Let’s get everything setup. I’m going to start with a Drupal 8 site that I have running locally. I’ve already installed that, and a custom theme with Pattern Lab integration based on Emulsify.

We’re going to install the visual regression tools with npm.

If you already have a project running that uses npm, you can skip this step. But, since this is a brand new project, I don’t have anything using npm, so I’ll create an initial package.json file using npm init.

  • npm init -y
    • Update the name, description, etc. and remove anything you don’t need.
    • My updated file looks like this:
{ "name": "visreg", "version": "1.0.0", "description": "Website with visual regression testing", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" } }   "name":"visreg",  "version":"1.0.0",  "description":"Website with visual regression testing",  "scripts":{    "test":"echo \"Error: no test specified\" && exit 1"

Now, we’ll install the npm packages we’ll use for visual regression testing.

  • npm install --save-dev webdriverio chai wdio-mocha-framework wdio-browserstack-service wdio-visual-regression-service node-notifier
    • This will install:
      • WebdriverIO: The main tool we’ll use
      • Chai syntax support: “Chai is an assertion library, similar to Node’s built-in assert. It makes testing much easier by giving you lots of assertions you can run against your code.”
      • Mocha syntax support: “Mocha is a feature-rich JavaScript test framework running on Node.js and in the browser, making asynchronous testing simple and fun.”
      • The BrowserStack wdio package: So that we can run our tests against BrowserStack instead of locally (where browser/OS differences across developers can cause false-negative failures)
      • Visual regression service: This is what provides the screenshot capturing and comparison functionality
      • Node notifier: This is totally optional, but it supports native notifications for Mac, Linux, and Windows. We’ll use these to be notified when a test fails.

Now that all of the tools are in place, we need to configure our visual regression preferences.

You can run the configuration wizard by typing ./node_modules/webdriverio/bin/wdio, but I’ve created a git repository with not only the webdriver config file but an entire set of files that scaffold a complete project. You can get them here.

Follow the instructions in the README of that repo to install them in your project.

These files will get you set up with a fairly sophisticated, but completely manageable visual regression testing configuration. There are some tweaks you’ll need to make to fit your project that are outlined in the README and the individual markdown files, but I’ll run through what each of the files does at a high level to acquaint you with each.

  • .gitignore
    • The lines in this file should be added to your existing .gitignore file. It’ll make sure your diffs and latest images are not committed to the repo, but allow your baselines to be committed so that everyone is comparing against the same baseline images.
  • VISREG-README.md
    • This is an example readme you can include to instruct other/future developers on how to run visual regression tests once you have it set up
  • package.json
    • This just has the example test scripts. One for running the full suite of tests, and one for running a quick test, handy for active development. Add these to your existing package.json
  • wdio.conf.js
    • This is the main configuration file for WebdriverIO and your visual regression tests.
    • You must update this file based on the documentation in wdio.conf.md (a trimmed sketch follows this list)
  • wdio.conf.quick.js
    • This is a file you can use to run a quick test (e.g. against a single browser instead of the full suite defined in the main config file). It’s useful when you’re doing something like refactoring an existing component, and/or want to make sure changes in one place don’t affect other sections of the site.
  • tests/config/globalHides.js
    • This file defines elements that should be hidden in ALL screenshots by default. Individual tests can use this, or define their own set of elements to hide. Update these to fit your actual needs.
  • tests/config/viewports.js
    • This file defines what viewports your tests should run against by default. Individual tests can use these, or define their own set of viewports to test against. Update these to the screen sizes you want to check.

Running the Test Suite

I’ll copy the example homepage test from the example-tests.md file into a new file /web/themes/custom/visual_regression_testing/components/_patterns/05-pages/home/home.test.js. (I’m putting it here because my wdio.conf.js file is looking for test files in the _patterns directory, and I like to keep test files next to the file they’re testing.)

The only thing you’ll need to update in this file is the relative path to the globalHides.js file. It should be relative from the current file. So, mine will be:

const visreg = require('../../../../../../../../tests/config/globalHides.js');
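For context, globalHides.js simply exports arrays of CSS selectors for elements to hide or remove before a screenshot is taken (anything that changes between runs, like toolbars or ads). A minimal sketch, with purely hypothetical selectors:

// tests/config/globalHides.js (sketch; selectors are hypothetical examples)
module.exports = {
  // Elements set to visibility: hidden before the screenshot
  hide: ['.toolbar-bar', '.cookie-consent-banner'],
  // Elements set to display: none before the screenshot
  remove: ['.ad-slot', '.live-chat-widget'],
};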

With that done, I can simply run npm test and the tests will run on BrowserStack against the three OS/Browser configurations I’ve specified. While they’re running, we can head over to https://automate.browserstack.com/ where we can see the tests being run against Chrome, Firefox, and IE 11.

Once tests are complete, we can view the screenshots in the /tests/screenshots directory. Right now, the baseline shots and the latest shots will be identical because we’ve only run the test once, and the first time you run a test, it creates the baseline from whatever it sees. Future tests will compare the most recent “latest” shot to the existing baseline, and will only update/create images in the latest directory.

At this point, I’ll commit the baselines to the git repo so that they can be shared around the team, and used as baselines by everyone running visual regression tests.

If I run npm test again, the tests will all pass because I haven’t changed anything. I’ll make a small change to the button background color which might not be picked up by a human eye but will cause a regression that our tests will pick up with no problem.

In the _buttons.scss file, I’m going to change the default button background color from $black (#000) to $gray-darker (#333). I’ll run the style script to update the compiled css and then clear the site cache to make sure the change is implemented. (When actively developing, I suggest disabling cache and keeping the watch task running. It just makes things easier and more efficient.)

This time all the tests fail, and if we look at the images in the diff folder, we can clearly see that the “search” button is different as indicated by the bright pink/purple coloring.

If I open up one of the “baseline” images, and the associated “latest” image, I can view them side-by-side, or toggle back and forth. The change is so subtle that a human eye might not have noticed the difference, but the computer easily identifies a regression. This shows how useful visual regression testing can be!

Let’s pretend this is actually a desired change. The original component was created before the color was finalized, black was used as a temporary color, and now we want to capture the update as the official baseline. Simply move the “latest” image into the “baselines” folder, replacing the old baseline, and commit that to your repo. Easy peasy.

Running an Individual Test

If you’re creating a new component, or you’ve run the suite and found a regression in a single image, it’s useful to be able to run one test rather than the entire suite. This is especially true once you have a large suite of test files that cover dozens of aspects of your site. Let’s take a look at how this is done.

I’ll create a new test in the organisms folder of my theme at /search/search.test.js. There’s an example of an element test in the example-tests.md file, but I’m going to do a much more basic test, so I’ll actually start out by copying the homepage test and then modify that.

The first thing I’ll change is the describe section. This is used to group and name the screenshots, so I’ll update it to make sense for this test. I’ll just replace “Home Page” with “Search Block”.

Then, the only other thing I’m going to change is what is to be captured. I don’t want the entire page, in this case. I just want the search block. So, I’ll update checkDocument (used for full-page screenshots) to checkElement (used for single element shots). Then, I need to tell it what element to capture. This can be any css selector, like an id or a class. I’ll just inspect the element I want to capture, and I know that this is the only element with the search-block-form class, so I’ll just use that.

I’ll also remove the timeout since we’re just taking a screenshot of a single element, we don’t need to worry about the page taking longer to load than the default of 60 seconds. This really wasn’t necessary on the page either, but whatever.

My final test file looks like this:

const visreg = require('../../../../../../../../tests/config/globalHides.js');

describe('Search Block', function () {
  it('should look good', function () {
    browser
      .url('./')
      .checkElement('.search-block-form', {hide: visreg.hide, remove: visreg.remove})
      .forEach((item) => {
        expect(item.isWithinMisMatchTolerance).to.be.true;
      });
  });
});

With that in place, this test will run when I use npm test because it’s globbing, and running every file that ends in .test.js anywhere in the _patterns directory. The problem is this also runs the homepage test. If I just want to update the baselines of a single test, or I’m actively developing a component and don’t want to run the entire suite every time I make a locally scoped change, I want to be able to just run the relevant test so that I don’t waste time waiting for all of the irrelevant tests to pass.

We can do that by passing the --spec flag.

I’ll commit the new test file and baselines before I continue.

Now I’ll re-run just the search test, without the homepage test.

npm test -- --spec web/themes/custom/visual_regression_testing/components/_patterns/03-organisms/search/search.test.js

We have to add the first set of -- because we’re using custom npm scripts to make this work. Basically, it passes anything that follows directly to the custom script (in our case test is a custom script that calls ./node_modules/webdriverio/bin/wdio). More info on the run-script documentation page.
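In other words, the scripts section of package.json looks something like this (a sketch consistent with the description above; the scaffold repo’s actual scripts may differ slightly):

"scripts": {
  "test": "./node_modules/webdriverio/bin/wdio",
  "test:quick": "./node_modules/webdriverio/bin/wdio wdio.conf.quick.js"
}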

If I scroll up a bit, you’ll see that when I ran npm test there were six passing tests. That is one test per browser for each test file. We have two tests, and we’re checking against three browsers, so that’s a total of six tests that were run.

This time, we have three passing tests because we’re only running one test against three browsers. That cut our test run time by more than half (from 106 seconds to 46 seconds). If you’re actively developing or refactoring something that already has test coverage, even that can seem like an eternity if you’re running it every few minutes. So let’s take this one step further and run a single test against a single browser. That’s where the wdio.conf.quick.js file comes into play.

Running Tests Against a Subset of Browsers

The wdio.conf.quick.js file will, by default, run test(s) against only Chrome. You can, of course, change this to whatever you want (for example if you’re only having an issue in a specific version of IE, you could set that here), but I’m just going to leave it alone and show you how to use it.
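If you’re wondering what that file looks like, one common approach (and only a sketch here, not necessarily what the scaffold repo does) is to reuse the main config and override the capabilities list:

// wdio.conf.quick.js (sketch): reuse the full config, but only run Chrome
const defaults = require('./wdio.conf.js').config;

exports.config = Object.assign({}, defaults, {
  capabilities: [
    { browserName: 'chrome' },
  ],
});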

You can use this to run the entire suite of tests or just a single test. First, I’ll show you how to run the entire suite against only the browser defined here, then I’ll show you how to run a single test against this browser.

In the package.json file, you’ll see the test:quick script. You could pass the config file directly to the first script by typing npm test -- wdio.conf.quick.js, but that’s a lot more typing than npm run test:quick and you (as well as the rest of your team) have to remember the file name. Capturing the file name in a second custom script simplifies things.

When I run npm run test:quick, you’ll see that two tests were run. We have two tests, and they’re run against one browser, so that simplifies things quite a bit. And you can see it ran in only 31 seconds. That’s definitely better than the 100 seconds the full test suite takes.

Let’s go ahead and combine this with the technique for running a single test to cut that time down even further.

npm run test:quick -- --spec web/themes/custom/visual_regression_testing/components/_patterns/03-organisms/search/search.test.js

This time you’ll see that it only ran one test against one browser and took 28 seconds. There’s actually not a huge difference between this and the last run because we can run three tests in parallel. And since we only have two tests, we’re not hitting the queue which would add significantly to the entire test suite run time. If we had two dozen tests, and each ran against three browsers, that’s a lot of queue time, whereas even running the entire suite against one browser would be a significant savings. And obviously, one test against one browser will be faster than the full suite of tests and browsers.

So this is super useful for active development of a specific component or element that has issues in one browser as well as when you’re refactoring code to make it more performant, and want to make sure your changes don’t break anything significant (or if they do, alert you sooner than later). Once you’re done with your work, I’d still recommend running the full suite to make sure your changes didn’t inadvertently affect another random part of the site.

So, those are the basics of how to set up and run visual regression tests. In the next post, I’ll dive into our philosophy of what we test, when we test, and how it fits into our everyday development workflow.

Brian Lewis

Brian Lewis is a frontend engineer at Four Kitchens, and is passionate about sharing knowledge and learning new tools and techniques.

Jul 11 2018
Jul 11
July 11th, 2018

Someone recently asked the following question in Slack. I didn’t want it to get lost in Slack’s history, so I thought I’d post it here:

Question: I’m setting a CSS background image inside my Pattern Lab footer template which displays correctly in Pattern Lab; however, Drupal isn’t locating the image. How is sharing images between PL and Drupal supposed to work?

My Answer: I’ve been using Pattern Lab’s built-in data.json files to handle this lately. e.g. you could do something like:

footer-component.twig:

...
{% set footer_background_image = footer_background_image|default('/path/relative/to/drupal/root/footer-image.png') %}
...

This makes the image load for Drupal, but fails for Pattern Lab.

At first, to fix that, we used the footer-component.yml file to set the path relative to PL. e.g.:

footer-component.yml:

footer_background_image: /path/relative/to/pattern-lab/footer-image.png

The problem with this is that on every Pattern Lab page where we included the footer component, we had to add that line to the yml file for the page. e.g.:

basic-page.twig:

...
{% include '/whatever/footer-component.twig' %}
...

basic-page.yml:

...
footer_background_image: /path/relative/to/pattern-lab/footer-image.png
...

Rinse and repeat for each example page… That’s annoying.

Then we realized we could take advantage of Pattern Lab’s global data files.

So with the same footer-component.twig file as above, we can skip the yml files, and just add the following to a data file.

theme/components/_data/paths.json: (* see P.S. below)

{ "footer_background_image": "/path/relative/to/pattern-lab/footer-image.png" }     "footer_background_image":"/path/relative/to/pattern-lab/footer-image.png"

Now, we can include the footer component in any example Pattern Lab pages we want, and the image is globally replaced in all of them. Also, Drupal doesn’t know about the json files, so it pulls the default value, which of course is relative to the Drupal root. So it works in both places.

We did this with our icons in Emulsify:

_icon.twig

paths.json

End of the answer to your original question… Now for a little more info that might help:

P.S. You can create as many json files as you want here. Just be careful you don’t run into namespacing issues. We accounted for this in the header.json file by namespacing everything under the “header” array. That way the footer nav doesn’t pull our header menu items, or vice versa.
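For illustration, the namespaced file might look something like this (the field names here are made up; the real header.json in Emulsify differs):

{
  "header": {
    "primary_menu_items": [
      { "title": "About", "url": "/about" },
      { "title": "Blog", "url": "/blog" }
    ]
  }
}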

An example homepage home.twig that pulls menu items for the header and the footer from data.json files:

header.json

footer.json

Brian Lewis

Brian Lewis is a frontend engineer at Four Kitchens, and is passionate about sharing knowledge and learning new tools and techniques.

Jun 06 2018
Jun 06
June 6th, 2018

I recently had the privilege of helping PRI.org launch a new React Frontend for their Drupal 7 project. Although I was fairly new to using React, I was able to lean on Four Kitchens’ senior JavaScript engineering team for guidance. I thought I might take the opportunity to share some things I learned along the way in terms of organization, code structuring and packages.

Organization

As a lead maintainer of Emulsify, I’m no stranger to component-driven development and building a user interface from minimal, modular components. However, building a library of React components provided me with some new insights worth mentioning.

Component Variations

If a component’s purpose starts to diverge, it may be a good time to split the variations in your component into separate components. A perfect example of this can be found in a button component. On any project of scale, you will likely have a multitude of buttons ranging from actual <button> elements to links or inputs. While these will likely share a number of qualities (e.g., styling), they may also vary not only in the markup they use but interactions as well. For instance, here is a simple button component with a couple of variations:

const Button = props => {
  const { url, onClick } = props;
  if (url) {
    return (
      <a href={url}>
        ...
      </a>
    );
  }
  return (
    <button type="button" onClick={onClick}>
      ...
    </button>
  );
};

Even with the simplicity of this example, why not separate this into two separate components? You could even change this component to handle that fork:

function Button(props) {
  ...
  return url ? <LinkBtn {...props} /> : <ButtonBtn {...props} />;
}

React makes this separation so easy, it really is worth a few minutes to define components that are distinct in purpose. Also, testing against each one becomes a lot easier.
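For instance, once the link variant lives in its own component, a test can target it directly. A hypothetical sketch (assuming Jest and react-test-renderer, and the LinkBtn component from the fork above):

import React from 'react';
import renderer from 'react-test-renderer';
import LinkBtn from './LinkBtn';

test('renders an anchor when given a url', () => {
  // Render the link variant in isolation and inspect the output tree.
  const tree = renderer.create(<LinkBtn url="/about">About</LinkBtn>).toJSON();
  expect(tree.type).toBe('a');
});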

Reuse Components

While the above might help with encapsulation, one of the main goals of component-driven development is reusability. When you build/test something well once, not only is it a waste of time and resources to build something nearly identical, but you have also opened yourself to new and unnecessary points of failure. A good example from our project is creating a couple of different types of toggles. For accessible, standardized dropdowns, we introduced the well-supported external library Downshift:

In a separate part of the UI, we needed to build an accordion menu:

Initially, this struck me as two different UI elements, and so we built it as such. But in reality, this was an opportunity that I missed to reuse the well-built and tested Downshift library (and in fact, we have a ticket in the backlog to do that very thing). This is a simple example, but as the complexity of the component (or a project) increases, you can see where reuse becomes critical.

Flexibility

And speaking of dropdowns, React components lend themselves to a great deal of flexibility. We knew the “drawer” part of the dropdown would need to contain anything from an individual item to a list of items to a form element. Because of this, it made sense to make the drawer contents as flexible as possible. By using the open-ended children prop, the dropdown container could simply just concern itself with container level styling and the toggling of the drawer. See below for a simplified version of the container code (using Downshift):

export default class Dropdown extends Component {
  static propTypes = {
    children: PropTypes.node
  };

  static defaultProps = {
    children: []
  };

  render() {
    const { children } = this.props;
    return (
      <Downshift>
        {({ isOpen }) => (
          <div className="dropdown">
            <Button className="btn" aria-label="Open Dropdown" />
            {isOpen && <div className="drawer">{children}</div>}
          </div>
        )}
      </Downshift>
    );
  }
}

This means we can put anything we want inside of the container:

<Dropdown>
  <ComponentOne />
  <ComponentTwo />
  <span>Whatever</span>
</Dropdown>

This kind of maximum flexibility with minimal code is definitely a win in situations like this.

Code

The Right Component for the Job

Even though the React documentation spells it out, it is still easy to forget that sometimes you don’t need the whole React toolbox for a component. In fact, there’s more than simplicity at stake: writing stateless components may in some instances be more performant than stateful ones. Here’s an example of a hero component that doesn’t need state, following Airbnb’s React/JSX styleguide:

const Hero = ({ title, imgSrc, imgAlt }) => (
  <div className="hero">
    <img data-src={imgSrc} alt={imgAlt} />
    <h2>{title}</h2>
  </div>
);

export default Hero;

When you actually need to use a class, there are some optimizations you can make to at least write cleaner (and less) code. Take this Header component example:

import React from 'react';

class Header extends React.Component {
  constructor(props) {
    super(props);
    this.state = { isMenuOpen: false };
    this.toggleOpen = this.toggleOpen.bind(this);
  }

  toggleOpen() {
    this.setState(prevState => ({
      isMenuOpen: !prevState.isMenuOpen
    }));
  }

  render() {
    // JSX
  }
}

export default Header;

In this snippet, we can start by simplifying the React.Component extension:

import React, { Component } from 'react';

class Header extends Component {
  constructor(props) {
    super(props);
    this.state = { isMenuOpen: false };
    this.toggleOpen = this.toggleOpen.bind(this);
  }

  toggleOpen() {
    this.setState(prevState => ({
      isMenuOpen: !prevState.isMenuOpen
    }));
  }

  render() {
    // JSX
  }
}

export default Header;

Next, we can export the component in the same line so we don’t have to at the end:

import React, { Component } from 'react';

export default class Header extends Component {
  constructor(props) {
    super(props);
    this.state = { isMenuOpen: false };
    this.toggleOpen = this.toggleOpen.bind(this);
  }

  toggleOpen() {
    this.setState(prevState => ({
      isMenuOpen: !prevState.isMenuOpen
    }));
  }

  render() {
    // JSX
  }
}

Finally, if we make the toggleOpen() function into an arrow function, we don’t need the binding in the constructor. And because our constructor was really only necessary for the binding, we can now get rid of it completely!

export default class Header extends Component {
  state = { isMenuOpen: false };

  toggleOpen = () => {
    this.setState(prevState => ({
      isMenuOpen: !prevState.isMenuOpen
    }));
  };

  render() {
    // JSX
  }
}

Proptypes

React has some quick wins for catching bugs with built-in typechecking abilities using React.propTypes. When using a Class component, you can also move your propTypes inside the component as static propTypes. So, instead of:

export default class DropdownItem extends Component { ... }

DropdownItem.propTypes = {
  // propTypes
};

DropdownItem.defaultProps = {
  // default propTypes
};

You can instead have:

export default class DropdownItem extends Component {
  static propTypes = {
    // propTypes
  };

  static defaultProps = {
    // default propTypes
  };

  render() { ... }
}

Also, if you want to limit the value or objects returned in a prop, you can use PropTypes.oneOf and PropTypes.oneOfType respectively (documentation).
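A quick sketch of both (the size and label props are invented for illustration):

import PropTypes from 'prop-types';

DropdownItem.propTypes = {
  // Limit a prop to a fixed set of allowed values...
  size: PropTypes.oneOf(['small', 'medium', 'large']),
  // ...or allow one of several types.
  label: PropTypes.oneOfType([PropTypes.string, PropTypes.node]),
};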

And finally, another place to simplify code: you can destructure the props object right in the function parameter definition. Here’s a component before this has been done:

const SvgLogo = props => {
  const { title, inline, height, width, version, viewBox } = props;
  return (
    // JSX
  );
};

And here’s the same component after:

const SvgLogo = ({ title, inline, height, width, version, viewBox }) => (
  // JSX
);

Packages

Finally, a word on packages. React’s popularity lends itself to a plethora of packages available. One of our senior JavaScript engineers passed on some sage advice to me that is worth mentioning here: every package you add to your project is another dependency to support. This doesn’t mean that you should never use packages, merely that it should be done judiciously, ideally with awareness of the package’s support, weight and dependencies. That said, here are a couple of packages (besides Downshift) that we found useful enough to include on this project:

Classnames

If you find yourself doing a lot of classname manipulation in your components, the classnames utility is a package that helps with readability. Here’s an example before we applied the classnames utility:

<div className={`element ${this.state.revealed === true ? 'revealed' : ''}`}>

With classnames you can make this much more readable by separating the logic:

import classNames from 'classnames/bind';

const elementClasses = classNames({
  element: true,
  revealed: this.state.revealed === true
});

<div className={elementClasses}>

React Intersection Observer (Lazy Loading)

IntersectionObserver is an API that provides a way for browsers to asynchronously detect changes of an element intersecting with the browser window. Support is gaining traction and a polyfill is available for fallback. This API could serve a number of purposes, not the least of which is the popular technique of lazy loading to defer loading of assets not visible to the user.  While we could have in theory written our own component using this API, we chose to use the React Intersection Observer package because it takes care of the bookkeeping and standardizes a React component that makes it simple to pass in options and detect events.
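As a rough illustration of the lazy loading use case, here is a sketch against the package’s render-prop component (named InView in current versions; the LazyImage component and class names are invented):

import React from 'react';
import { InView } from 'react-intersection-observer';

// Only swap in the real image source once the placeholder scrolls into view.
const LazyImage = ({ src, alt }) => (
  <InView triggerOnce>
    {({ inView, ref }) => (
      <div ref={ref} className="lazy-image">
        {inView ? <img src={src} alt={alt} /> : <div className="lazy-image__placeholder" />}
      </div>
    )}
  </InView>
);

export default LazyImage;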

Conclusions

I hope passing on some of the knowledge I gained along the way is helpful for someone else. If nothing else, I learned that there are some great starting points out there in the community worth studying. The first is the excellent React documentation. Up to date and extensive, this documentation was my lifeline throughout the project. The next is Create React App, which is actually a great starting point for any size application and is also extremely well documented with best practices for a beginner to start writing code.

Evan Willhite

Evan Willhite is a frontend engineer at Four Kitchens who thrives on creating delightful digital experiences for users, clients, and fellow engineers. He enjoys running, hot chicken, playing music, and being a homebody with his family.

Feb 21 2018
Feb 21

Selected sessions for Drupalcon Nashville have just been announced! Mediacurrent will be presenting seven sessions and hosting a training workshop.

From exploring new horizons in decoupled Drupal to fresh perspectives on improving editorial UX and achieving GDPR compliance, check out what the Mediacurrent team has in store for Drupalcon 2018:
 

Speakers: Matt Davis, Director of Emerging Technology at Mediacurrent and Jeremy Dickens, Senior Drupal Developer at The Weather Company / IBM
Session Track: Horizons

During the course of an ongoing decoupling project for weather.com, the team found that the lack of page configurability was a distinct pain point for site administrators and product owners. To meet this challenge, the weather.com team built Project Moonracer, a Drupal 8-based solution that allowed for the direct modification of page configuration on a completely decoupled front-end by developing a unique set of data models to move page configuration back into the hands of the site owners.  

Takeaways:

  • Gain a greater understanding of the decoupled UI problem space as a whole
  • See specific API and UI considerations and lessons learned from our experience
  • Catch a glimpse into some possible futures of editorial interfaces in an increasingly decoupled world

Speaker: Bob Kepford, Lead Drupal Architect at Mediacurrent
Session Track: Back End Development 

Wouldn’t it be nice if you could type one command that booted your vagrant box, started displaying watchdog logs, set up the correct Drush alias, and provided easy access to your remote servers? Or maybe you use tools like Grunt, Gulp, or Sass. What if you could launch all of your tools for a project with one command? In this session, attendees will see how to use the terminal every day to get work done efficiently and effectively.

You’ll learn:

  • How to use free command line applications to get work done.
  • How to better use the command line tools you already know.
  • How to customize your command line to behave the way you want it to.

I guarantee attendees will walk away with at least one new tip, trick, or tool.

Speaker: Jay Callicott, VP of Technical Operations at Mediacurrent 
Session Track: Site Building 

If you have ever googled to find “top Drupal modules” you probably have read Mediacurrent’s popular, long-running blog series on the top modules for Drupal, authored by our own Jay Callicott. In this session, follow him on a leisurely stroll through the best modules that Drupal 8 has to offer as Jay presents an updated list of his top picks. Like a guided tour of the Italian countryside, you can sit back and enjoy as your guide discusses the benefits of each module. By the end of this session, you will have been introduced to at least a few modules that will challenge the boundaries of your next project.

Speakers: Mediacurrent's Dawn Aly, VP of Digital Strategy and Mark Shropshire, Open Source Security Lead
Session Track: Business

Data security legislation like the GDPR (enforcement begins May 25th, 2018) allows users to control how and if their personal data is used by companies. This shift in control fundamentally changes how companies can collect, store, and use information about prospects and customers. While understanding and implementing privacy related regulation in web projects is a necessity, related knowledge and skill sets become a real business differentiator and a key part of a user’s privacy experience (PX).

Key Topics:

  • Practical interpretation of the GDPR 
  • How to determine if you are at risk for compliance 
  • Repeatable process for assessing security risks in Drupal websites 
  • Security by design
  • Impact to data, analytics, and personalization strategies 

Speakers: Kevin Basarab, Director of Development at Mediacurrent and Mike Priscella, Engineering Manager at Thrillist/ Group Nine Media. 
Session Track: Ambitious Digital Experiences 

In this session, we'll dive into how Group Nine Media (parent company of Thrillist.com, TheDodo.com, and others) are evolving the Drupal 8 editorial user experience and contributing that back to the community. We'll not only look into their use case but also explore what modules and options are out there for improving editorial UX without custom development work.

  • How is design/UX reversing to focus on the editorial experience?
  • What contrib modules currently enhance the editorial experience?
  • How can a better editorial experience be beneficial to your client? 

Speakers: Mediacurrent Senior Front End Developer Mario Hernandez; Cristina Chumillas, Designer and Frontend Developer at Ymbra; Lauri Eskola, Drupal Developer at Druid Oy
Session Track: Core Conversations 

The Out-of-the-Box initiative team is working on improving the first-time user experience of Drupal. The team is creating a new installation profile with the main goal of demonstrating how powerful Drupal is for creating beautiful websites for real life use cases.

The alpha version for The Out of the Box initiative has been committed to Drupal 8.6.x. But, what is it and what will it bring to core?
 

Speakers: A panel of community organizers, including Mediacurrent Senior Developer April Sides 
Session Track: Building Community

This conversation is a space for camp organizers (and attendees) to discuss all things event planning, from venue selection and budgeting to session programming and swag. 

Training Presenters: Mediacurrent Senior Front End Developers Mario Hernandez and Eric Huffman

With the component-based approach becoming the standard for Drupal 8 theming, we’re beginning to see some slick front end environments show up in Drupal themes. The promise that talented front enders with little Drupal knowledge can jump right in is much closer to reality.  However, before diving into this new front end bliss there are still some gotchas, plus lots of baked in goodies Drupal provides that one will need to have a handle on before getting started.

This training will focus on the UI_Patterns module, which, although still in a release candidate state, already solves many problems arising from the Drupal integration process.

Additional Resources
Drupalcon Baltimore 2017 - SEO, I18N, and I18N SEO | Blog
Drupalcon: Not Just for Developers | Blog
The Real Value of Drupalcon | Blog

Feb 01 2018
Feb 01
February 1st, 2018

Paragraphs is a powerful Drupal module that gives editors more flexibility in how they design and lay out the content of their pages. However, Paragraphs are special in that they make no sense without a host entity. If we talk about Paragraphs, it goes without saying that they are to be attached to other entities.
In Drupal 8, individual migrations are built around an entity type. That means we implement a single migration for each entity type. Sometimes we draw relationships between the element being imported and an already imported one of a different type, but we never handle the migration of both simultaneously.
Migrating Paragraphs needs to be done in at least two steps: 1) migrating entities of type Paragraph, and 2) migrating entities referencing imported Paragraph entities.

Migration of Paragraph entities

You can migrate Paragraph entities in a way very similar to the way of migrating every other entity type into Drupal 8. However, a very important caveat is making sure to use the right destination plugin, provided by the Entity Reference Revisions module:

destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: paragraph_type

This is critical because you might be tempted to use something more common like entity:paragraph, which would make sense given that Paragraphs are entities. However, you didn’t configure your Paragraph reference field as a conventional Entity Reference one, but as an Entity reference revisions field, so you need to use the appropriate plugin.

An example of the core of a migration of Paragraph entities:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls: 'feed.url/endpoint'
  ids:
    id:
      type: integer
  item_selector: '/elements'
  fields:
    -
      name: id
      label: Id
      selector: /element_id
    -
      name: content
      label: Content
      selector: /element_content
process:
  field_paragraph_type_content/value: content
destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: paragraph_type
migration_dependencies: { }

To give some context, this assumes the feed being consumed has a root level with an elements array filled with content arrays with properties like element_id and element_content, and we want to convert those content arrays into Paragraphs of type paragraph_type in Drupal, with the field_paragraph_type_content field storing the text that came from the element_content property.

Migration of the host entity type

Having imported the Paragraph entities already, we then need to import the host entities, attaching the appropriate Paragraphs to each one’s field_paragraph_type_content field. Typically this is accomplished by using the migration_lookup process plugin (formerly migration).

Every time an entity is imported, a row is created in the mapping table for that migration, with both the ID the entity has in the external source and the internal one it got after being imported. This way the migration keeps a correlation between both states of the data, for updating and other purposes.

The migration_lookup plugin takes an ID from an external source and tries to find an internal entity whose ID is linked to the external one in the mapping table, returning its ID in that case. After that, the entity reference field will be populated with that ID, effectively establishing a link between the entities in the Drupal side.

In the example below, the migration_lookup returns entity IDs and creates references to other Drupal entities through the field_event_schools field:

field_event_schools:
  plugin: iterator
  source: event_school
  process:
    target_id:
      plugin: migration_lookup
      migration: schools
      source: school_id

However, while references to nodes or terms basically consist of the ID of the referenced entity, when using the entity_reference_revisions destination plugin (as we did to import the Paragraph entities), two IDs are stored per entity. One is the entity ID and the other is the entity revision ID. That means the return of the migration_lookup processor is not an integer, but an array of them.

process:
  field_paragraph_type_content:
    plugin: iterator
    source: elements
    process:
      temporary_ids:
        plugin: migration_lookup
        migration: paragraphs_migration
        source: element_id
      target_id:
        plugin: extract
        source: '@temporary_ids'
        index:
          - 0
      target_revision_id:
        plugin: extract
        source: '@temporary_ids'
        index:
          - 1

So, instead of assigning that array directly to the field (which obviously wouldn’t work), we use the extract process plugin to pull out each of the two integer IDs needed to create an effective reference.

Summary

In summary, it’s important to remember that migrating Paragraphs is a two-step process at minimum. First, you must migrate entities of type Paragraph. Then you must migrate entities referencing those imported Paragraph entities.

More on Drupal 8

Top 5 Reasons to Migrate Your Site to Drupal 8

Creating your Emulsify 2.0 Starter Kit with Drush

Joel Travieso

Joel focuses on the backend and architecture of web projects, constantly seeking to improve by keeping up with the latest developments in the field.

Jan 18 2018
Jan 18
January 18th, 2018

What are Spectre and Meltdown?

Have you noticed your servers or desktops are running slower than usual? Spectre and Meltdown can affect most devices we use daily: cloud servers, desktops, laptops, and mobile devices. For more details go to: https://meltdownattack.com/

How does this affect performance?

We finally have some answers as to how this is going to affect us. After Pantheon patched their servers, they released an article showing the 10-30% negative performance impact that servers are going to see. For the whole article visit: https://status.pantheon.io/incidents/x9dmhz368xfz

I can say that I personally have noticed my laptop’s CPU is running at much higher percentages than before the update for similar tasks.

Security patches are still being released for many operating systems, but traditional desktop OSs appear to have been covered now. If you haven’t already, make sure your OS is up to date. Don’t forget to update the OS on your phone.

Next Steps?

So what can we do in the Drupal world? First, you should follow up with your hosting provider and verify they have patched your servers. Then you need to find ways to counteract the performance loss. If you are interested in performance recommendations, Four Kitchens offers both frontend and backend performance audits.

As a quick win, if you haven’t already, upgrade to PHP 7, which should give you a performance boost of around 30-50% on PHP processes. Now that you are more informed about what Spectre and Meltdown are, help with the performance effort by volunteering or sponsoring a developer on January 27, 2018 and January 28, 2018 for the Drupal Global Sprint Weekend 2018, specifically on performance related issues: https://groups.drupal.org/node/517797

Chris Martin

Chris Martin is a support engineer at Four Kitchens. When not maintaining websites he can be found building drones, computers, robots, and occasionally traveling to China.

Dec 20 2017
Dec 20
December 20th, 2017

One of the most common requests we get in regards to Emulsify is to show concrete examples of components. There is a lot of conceptual material out there on the benefits of component-driven development in Drupal 8—storing markup, CSS, and JavaScript together using some organizational pattern (à la Atomic Design), automating the creation of style guides (e.g., using Pattern Lab) and using Twig’s include, extends and embed functions to work those patterns into Drupal seamlessly. If you’re reading this article you’re likely already sold on the concept. It’s time for a concrete example!

In this tutorial, we’ll build a full site header containing a logo, a search form, and a menu – here’s the code if you’d like to follow along. We will use Emulsify, so pieces of this may be specific to Emulsify and we will try and note those where necessary. Otherwise, this example could, in theory, be extended to any Drupal 8 project using component-driven development.

Planning Your Component

The first step in component-driven development is planning. In fact, this may be the definitive phase in component-driven development. In order to build reusable systems, you have to break down the design into logical, reusable building blocks. In our case, we have 3 distinct components—what we would call in Atomic Design “molecules”—a logo, a search form, and a menu. In most component-driven development systems you would have a more granular level as well (“atoms” in Atomic Design). Emulsify ships with pre-built and highly flexible atoms for links, images, forms, and lists (and much more). This allows us to jump directly into project-specific molecules.

So, what is our plan? We are going to first create a molecule for each component, making use of the atoms listed above wherever possible. Then, we will build an organism for the larger site header component. On the Drupal side, we will map our logo component to the Site Branding block, the search form to the default Drupal search form block, the menu to the Main Navigation block and the site header to the header region template. Now that we have a plan, let’s get started on our first component—the logo.

The Logo Molecule

Emulsify automatically provides us with everything we need to print a logo – see components/_patterns/01-atoms/04-images/00-image/image.twig. Although it is an image atom, it has an optional img_url variable that will wrap the image in a link if present. So, in this case, we don’t even have to create the logo component. We merely need a variant of the image component, which is easy to do in Pattern Lab by duplicating components/_patterns/01-atoms/04-images/00-image/image.yml and renaming it as components/_patterns/01-atoms/04-images/00-image/image~logo.yml (see Pattern Lab documentation).

Next, we change the variables in the image~logo.yml as needed and add a new image_link_base_class variable, naming it whatever we like for styling purposes. For those who are working in a new installation of Emulsify alongside this tutorial, you will notice this file already exists! Emulsify ships with a ready-made logo component. This means we can immediately jump into mapping our new logo component in Drupal.
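
For illustration, the variant data file might look something like this (the variable names come from the image atom above; the values are placeholders rather than the actual file contents):

img_url: "/"
img_src: "/themes/emulsify/logo.svg"
img_alt: "Home"
image_blockname: "logo"
image_link_base_class: "logo"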

Connecting the Logo Component to Drupal

Although you could just write static markup for the logo, let’s use the branding block in Drupal (the block that supplies the theme logo or one uploaded via the Appearance Settings page). These instructions assume you have a local Drupal development environment complete with Twig debugging enabled. Add the Site Branding block to your header region in the Drupal administrative UI to see your branding block on your page. Inspect the element to find the template file in play.

In our case there are two templates—the outer site branding block file and the inner image file. It is best to use the file that contains the most relevant information for your component. Seeing as we need variables like the image alt and image src to map to our component, the most relevant file is the image file itself. Since Emulsify uses Stable as a base theme, let’s check there first for a template file to use. Stable uses core/themes/stable/templates/field/image.html.twig to print images, so we copy that file down to its matching directory in Emulsify, creating templates/fields/image.html.twig (this is the template for all image fields, so you may have to be more specific with this filename). Any time you add a new template file, clear the cache registry to make sure that Drupal recognizes the new file. Now, the goal in component-driven development is to have markup live in components that Drupal templates simply map to, so let’s replace the default contents of the image.html.twig file above (<img{{ attributes }}>) with the following:

{% include "@atoms/04-images/00-image/image.twig" with {
  img_url: "/",
  img_src: attributes.src,
  img_alt: attributes.alt,
  image_blockname: "logo",
  image_link_base_class: "logo",
} %}

We’re using the Twig include statement to reuse the markup from our original component, passing a mixture of static (url, BEM classes) and dynamic (img alt and src) content to it. To figure out which Drupal variables to use for dynamic content, first see the “Available variables” section at the top of the Drupal Twig file you’re using, and then use the Devel module and the kint function to debug the variables themselves. Also, if you’re new to seeing the BEM class variables (Emulsify-specific), see our recent post on why/how we use these variables (and the BEM function) to pass BEM classes to Pattern Lab and the Drupal Attributes object. Basically, the include statement above will print out:

<a class="logo" href="/">
  <img class="logo__img" src="/themes/emulsify/logo.svg" alt="Home">
</a>

We should now see our branding block using our custom component markup! Let’s move on to the next molecule—the search form.

The Search Form Molecule

Component-driven development, particularly the division of components into controlled, separate atomic units, is not always perfect. But the beauty of Pattern Lab (and Emulsify) is that there is a lot of flexibility in how you mark up a component. If the ideal approach of using a Twig function to include other smaller elements isn’t possible (or is too time-consuming), simply write custom HTML for the component as needed for the situation! One area where we lean into this flexibility is in dealing with Drupal’s form markup. Let’s take a look at how you could handle the search block. First, let’s create a form molecule in Pattern Lab.

Form Wrapper

Create a directory in components/_patterns/02-molecules named “search-form”, containing a search-form.twig file with the following contents (markup tweaked from core/themes/stable/templates/form/form.html.twig):

<form {{ bem('search') }}>
  {% if children %}
    {{ children }}
  {% else %}
    <div class="search__item">
      <input title="Enter the terms you wish to search for." size="15" maxlength="128" class="form-search">
    </div>
    <div class="form-actions">
      <input type="submit" value="Search" class="form-item__textfield button js-form-submit form-submit">
    </div>
  {% endif %}
</form>

In this file (code here) we’re checking for the Drupal-specific variable “children” in order to pass one thing to Drupal and another to Pattern Lab. We want to make the markup as similar as possible between the two, so I’ve copied the relevant parts of the markup by inspecting the default Drupal search form in the browser. As you can see, there are two classes we need on the Drupal side. The first is on the outer <form> wrapper, so we will need a matching Drupal template to inherit that. Many templates in Drupal will have suggestions by default, but the form template is a great example of one that doesn’t. However, adding a new template suggestion is a minor task, so let’s add the following code to emulsify.theme:

/**
 * Implements hook_theme_suggestions_HOOK_alter() for form templates.
 */
function emulsify_theme_suggestions_form_alter(array &$suggestions, array $variables) {
  if ($variables['element']['#form_id'] == 'search_block_form') {
    $suggestions[] = 'form__search_block_form';
  }
}

After clearing the cache registry, you should see the new suggestion, so we can now add the file templates/form/form--search-block-form.html.twig. In that file, let’s write:
{% include "@molecules/search-form/search-form.twig" %}

The Form Element

We have only the “search__item” class left, for which we follow a similar process. Let’s create the file components/_patterns/02-molecules/search-form/_search-form-element.twig, copying the contents from core/themes/stable/templates/form/form-element.html.twig and making small tweaks like so:

{% set classes = [
  'js-form-item',
  'search__item',
  'js-form-type-' ~ type|clean_class,
  'search__item--' ~ name|clean_class,
  'js-form-item-' ~ name|clean_class,
  title_display not in ['after', 'before'] ? 'form-no-label',
  disabled == 'disabled' ? 'form-disabled',
  errors ? 'form-item--error',
] %}
{% set description_classes = [
  'description',
  description_display == 'invisible' ? 'visually-hidden',
] %}
<div{{ attributes.addClass(classes) }}>
  {% if label_display in ['before', 'invisible'] %}
    {{ label }}
  {% endif %}
  {% if prefix is not empty %}
    <span class="field-prefix">{{ prefix }}</span>
  {% endif %}
  {% if description_display == 'before' and description.content %}
    <div{{ description.attributes }}>
      {{ description.content }}
    </div>
  {% endif %}
  {{ children }}
  {% if suffix is not empty %}
    <span class="field-suffix">{{ suffix }}</span>
  {% endif %}
  {% if label_display == 'after' %}
    {{ label }}
  {% endif %}
  {% if errors %}
    <div class="form-item--error-message">
      {{ errors }}
    </div>
  {% endif %}
  {% if description_display in ['after', 'invisible'] and description.content %}
    <div{{ description.attributes.addClass(description_classes) }}>
      {{ description.content }}
    </div>
  {% endif %}
</div>

This file will not be needed in Pattern Lab, which is why we’ve used the underscore at the beginning of the name. This tells Pattern Lab to not display the file in the style guide. Now we need this markup in Drupal, so let’s add a new template suggestion in emulsify.theme like so:

/**
 * Implements hook_theme_suggestions_HOOK_alter() for form element templates.
 */
function emulsify_theme_suggestions_form_element_alter(array &$suggestions, array $variables) {
  if ($variables['element']['#type'] == 'search') {
    $suggestions[] = 'form_element__search_block_form';
  }
}

And now let’s add the file templates/form/form-element--search-block-form.html.twig with the following code:

{% include "@molecules/search-form/_search-form-element.twig" %}

We now have the basic pieces for styling our search form in Pattern Lab and Drupal. This was not the fastest element to theme in a component-driven way, but it is a good example of the more complex situations you may run into. We hope to make creating form components a little easier in future releases of Emulsify, similar to what we’ve done in v2 with menus. And speaking of menus…

The Main Menu

In Emulsify 2, we have made it a bit easier to work with another complex piece of Twig in Drupal 8: the menu system. The files that do the heavy lifting here are components/_patterns/02-molecules/menus/_menu.twig and components/_patterns/02-molecules/menus/_menu-item.twig (included by the first file). We also already have an example of a main menu component in the directory

themes/emulsify/components/_patterns/02-molecules/menus/main-menu

which is already connected in the Drupal template

templates/navigation/menu--main.html.twig

Obviously, you can use this as-is or tweak the code to fit your situation, but let’s break down the key pieces which could help you define your own menu.

Menu Markup

Ignoring the code for the menu toggle inside the file, the key piece from themes/emulsify/components/_patterns/02-molecules/menus/main-menu/main-menu.twig is the include statement:

<nav id="main-nav" class="main-nav">
  {% include "@molecules/menus/_menu.twig" with {
    menu_class: 'main-menu'
  } %}
</nav>

This will use all the code from the original heavy-lifting files while passing in the class we need for styling. For an example of how to stub out component data for Pattern Lab, see components/_patterns/02-molecules/menus/main-menu/main-menu.yml. This component also shows how your styling and JavaScript can live alongside your component markup in the same directory. Finally, you can see a simpler example of using a menu like this in the components/_patterns/02-molecules/menus/inline-menu component. For now, let’s move on to placing our components into a header organism.

The Header Organism

Now that we have our three molecule components built, let’s create a wrapper component for our site header. Emulsify ships with an empty component for this at components/_patterns/03-organisms/site/site-header. In our usage we want to change the markup in components/_patterns/03-organisms/site/site-header/site-header.twig to:

<header class="header">
  <div class="header__logo">
    {% block logo %}
      {% include "@atoms/04-images/00-image/image.twig" %}
    {% endblock %}
  </div>
  <div class="header__search">
    {% block search %}
      {% include "@molecules/search-form/search-form.twig" %}
    {% endblock %}
  </div>
  <div class="header__menu">
    {% block menu %}
      {% include "@molecules/menus/main-menu/main-menu.twig" %}
    {% endblock %}
  </div>
</header>

Notice the use of Twig blocks. These will help us provide default data for Pattern Lab while giving us the flexibility to replace those with our component templates on the Drupal side. To populate the default data for Pattern Lab, simply create components/_patterns/03-organisms/site/site-header/site-header.yml and copy over the data from components/_patterns/01-atoms/04-images/00-image/image~logo.yml and components/_patterns/02-molecules/menus/main-menu/main-menu.yml. You should now see your component printed in Pattern Lab.
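
As a minimal sketch (illustrative values only; copy the real ones from the two files named above), site-header.yml might start like this:

# Logo data copied from image~logo.yml:
img_url: "/"
img_src: "/themes/emulsify/logo.svg"
img_alt: "Home"
image_link_base_class: "logo"
# ...followed by the menu data copied from main-menu.yml.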

Header in Drupal

To print the header organism in Drupal, let’s work with the templates/layout/region--header.html.twig file, replacing the default contents with:

{% extends "@organisms/site/site-header/site-header.twig" %}
{% block logo %}
  {{ elements.emulsify_branding }}
{% endblock %}
{% block search %}
  {{ elements.emulsify_search }}
{% endblock %}
{% block menu %}
  {{ elements.emulsify_main_menu }}
{% endblock %}

Here, we’re using the Twig extends statement to be able to use the Twig blocks we created in the component. You can also use the more robust embed statement when you need to pass variables like so:

{% embed "@organisms/site/site-header/site-header.twig" with {
  variable: "something",
} %}
  {% block logo %}
    {{ elements.emulsify_branding }}
  {% endblock %}
  {% block search %}
    {{ elements.emulsify_search }}
  {% endblock %}
  {% block menu %}
    {{ elements.emulsify_main_menu }}
  {% endblock %}
{% endembed %}

For our purposes, we can simply use the extends statement. You’ll notice that we are using the elements variable. This variable is currently not listed among the available variables at the top of the Stable region template, but it is extremely useful for printing the blocks currently placed in that region. Finally, if you’ve just added the file, be sure to clear the cache registry. You should now see your full header in Drupal.

Final Thoughts

Component-driven development is not without trials, but I hope we have touched on some of the more difficult ones in this article to speed you on your journey. If you would like to view the branch of Emulsify where we built this site header component, you can see that here. Feel free to sift through and reverse-engineer the code to figure out how to build your own component-driven Drupal project!

This fifth episode concludes our five-part video-blog series for Emulsify 2.x. Thanks for following our Emulsify 2.x tutorials. Miss a post? Read the full series here.

Pt 1: Installing Emulsify | Pt 2: Creating your Emulsify 2.0 Starter Kit with Drush | Pt 3: BEM Twig Function | Pt 4: DRY Twig Approach

Just need the videos? Watch them all on our channel.

Web Chef Evan Willhite

Evan Willhite is a frontend engineer at Four Kitchens who thrives on creating delightful digital experiences for users, clients, and fellow engineers. He enjoys running, hot chicken, playing music, and being a homebody with his family.

December 14th, 2017

When working with Pantheon, you’re presented with the three typical environments: Dev, Test and Live. This scheme is very common among major hosting providers, and not without reason: it allows you to plan and execute an effective and efficient development process that takes every client need into consideration. We can use CircleCI to manage that process.

CircleCI works based on the circle.yml file located at the root of the project. It is a script of all the stuff the virtual machine will do for you in the cloud, including testing and delivery. The script is triggered by a commit into the repository, unless you have it configured to react only to commits on branches with open pull requests. It is divided into sections, each representing a phase of the Build-Test-Deploy process.

In the deployment section, you can put instructions in order to deploy your code to your web servers. A common deployment would look like the following:

deployment:
  dev:
    branch: master
    commands:
      - ./merge_to_master.sh

This literally means: perform the operations listed under commands every time a commit is merged into the master branch. You may think a deployment block like this doesn’t have to be used for an actual deployment, and it’s true: you can do whatever you want there. It’s ideal for performing deployments, but in essence the deployment section allows you to implement conditional post-build subscripts that react differently depending on the nature of the action that triggered the whole build.

The Drops 8 Pantheon upstream for Drupal 8 comes with a very handy circle.yml that can help you set up a basic CircleCI workflow in a matter of minutes. It heavily relies on the use of Pantheon’s Terminus CLI and a couple of plugins like the excellent Terminus Build Tools Plugin, that provides probably the most important call of the whole script:

terminus build:env:create -n "$TERMINUS_SITE.dev" "$TERMINUS_ENV" --yes --clone-content --db-only --notify="$NOTIFY"

The line above creates a multidev environment in Pantheon, merging the code and generated assets in Circle’s VM into the code coming from the dev environment, and also clones the database from there. You can then use drush to update that database with the changes in configuration that you just merged in.
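
For example, that follow-up step might look something like this (a sketch assuming Drush 8 command names and Terminus’s drush passthrough):

# Import the configuration just merged into the multidev, then rebuild caches.
terminus drush "$TERMINUS_SITE.$TERMINUS_ENV" -- config-import -y
terminus drush "$TERMINUS_SITE.$TERMINUS_ENV" -- cache-rebuild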

Once you get to the deployment section, you already have a functional multidev environment. The deployment happens by merging the artifact back into dev:

deployment:
  build-assets:
    branch: master
    commands:
      - terminus build:env:merge -n "$TERMINUS_SITE.$TERMINUS_ENV" --yes

This workflow assumes a simple git workflow where you just create feature branches and merge them into master when they’re ready. It also takes the deployment process only to the point where code reaches dev. This is sometimes not enough.

Integrating release branches

When in a gitflow involving a central release branch, the perfect environment to host a site that completely reflects the state of the release branch is the dev environment. After all, development is only being actively done in that branch. Assuming your release branch is statically called develop:

deployment:
  dev:
    branch: develop
    commands:
      # Deploy to DEV environment.
      - terminus build-env:merge -n "$TERMINUS_SITE.$TERMINUS_ENV" --yes --delete
      - ./rebuild_dev.sh
  test:
    branch: master
    commands:
      # Deploy to DEV environment.
      - terminus build-env:merge -n "$TERMINUS_SITE.$TERMINUS_ENV" --yes --delete
      - ./rebuild_dev.sh
      # Deploy to TEST environment.
      - terminus env:deploy $TERMINUS_SITE.test --sync-content
      - ./rebuild_test.sh
      - terminus env:clone-content "$TERMINUS_SITE.test" "dev" --yes

This way, when you merge into the release branch, the multidev environment associated with it will get merged into dev and deleted, and dev will be rebuilt.

The feature is available in Dev right after merging into the release branch.

The same happens on release day when the release branch is merged into master, but after dev is rebuilt, it is also deployed to the test environment:

The release branch goes all the way to the test environment.

A few things to notice about this process:

  • The --sync-content option brings the database and files from the live environment into test at the same time code arrives there from dev. By rebuilding test, we’re now able to test the latest changes in code against the latest changes in content, assuming live is your primary content entry point.
  • The last Terminus command takes the database from test and sends it back to dev. So, to recap: the database originally came from live, was rebuilt in test using dev’s fresh code, and now goes to dev. At this moment, test and dev are identical, at least until the next commit lands in the release branch.
  • This process facilitates testing. While the next release is already in progress and transforming dev, the client can take all the time they need to give final approval for what’s in test. Once that happens, the deployment to live should be at most a semi-automatic step. Nothing really prevents you from using this same approach to automate the deployment to live as well. Well, nothing but good judgment.
  • By using circle.yml to handle the deployment process, you help keep workflow configuration centralized and accessible. With the appropriate system in place, you can trigger a complete and fully automated deployment just by pushing a commit to GitHub, and all you’ll ever need to know about the process is in that single file.
Web Chef Joel Travieso

Joel focuses on the backend and architecture of web projects seeking to constantly improve by considering the latest developments of the art.

November 13th, 2017

Welcome to the fourth episode in our video series for Emulsify 2.x. Emulsify 2.x is a new release that embodies our commitment to component-driven design within Drupal. We’ve added Composer and Drush support, as well as open-source Twig functions and many other changes to increase ease-of-use.

In this video, we’re going to teach you how to best use a DRY Twig approach when working in Emulsify. This blog post accompanies a tutorial video, embedded at the end of this post.

DRYing Out Your Twigs

Although we’ve been using a DRY Twig approach in Emulsify since before the 2.x release, it’s a topic worth addressing because it is unique to Emulsify and provides great benefit to your workflow. After all, what drew you to component-driven development in the first place? Making things DRY, of course!

In component-driven development, we build components once and reuse them together in different combinations—like playing with Lego. In Emulsify, we use Sass mixins and BEM-style CSS to make our CSS as reusable and isolated as possible. DRY Twig simply extends these same benefits to the HTML itself. Let’s look at an example:

Non-DRY Twig:

<h2 class="title">
  <a class="title__link" href="/">Link Text</a>
</h2>

DRY Twig:

<h2 class="title">
  {% include "@atoms/01-links/link/link.twig" with {
    "link_content": "Link Text",
    "link_url": "/",
    "link_class": "title__link",
  } %}
</h2>

The code with DRY Twig is more verbose, but by switching to this method, we’ve now removed a point of failure in our HTML. We’re not repeating the same HTML everywhere! We write that HTML once and reuse it everywhere it is needed.

The concept is simple, and it is found everywhere in the components directory that ships with Emulsify. HTML gets written mostly as atoms and is simply reused in larger components using the default include, extends or embed functions built into Twig. We challenge you to try this in a project and see what you think.

[embedded content]

Thanks for following our Emulsify 2.x tutorials. Miss a post? Read the full series here.

Pt 1: Installing Emulsify | Pt 2: Creating your Emulsify 2.0 Starter Kit with Drush | Pt 3: BEM Twig Function | Pt 4: DRY Twig Approach | Pt 5: Building a Full Site Header in Drupal

Just need the videos? Watch them all on our channel.

Web Chef Evan Willhite

Evan Willhite is a frontend engineer at Four Kitchens who thrives on creating delightful digital experiences for users, clients, and fellow engineers. He enjoys running, hot chicken, playing music, and being a homebody with his family.

October 26th, 2017

Welcome to the third episode in our video series for Emulsify 2.x. Emulsify 2.x is a new release that embodies our commitment to component-driven design within Drupal. We’ve added Composer and Drush support, as well as open-source Twig functions and many other changes to increase ease-of-use.

In this video, we’re going to teach you how Emulsify works with the BEM Twig extension. This blog post accompanies a tutorial video, embedded at the end of this post.

Background

In Emulsify 2.x, we have enhanced our support for BEM in Drupal by creating the BEM Twig extension. The BEM Twig extension makes it easy to deliver classes to both Pattern Lab and Drupal while using Drupal’s Attributes object. It also has the benefit of simplifying our syntax greatly. See the code below.

Emulsify 1.x:

{% set paragraph_base_class_var = paragraph_base_class|default('paragraph') %}
{% set paragraph_modifiers = ['large', 'red'] %}
<p class="{{ paragraph_base_class_var }}{% for modifier in paragraph_modifiers %} {{ paragraph_base_class_var }}--{{ modifier }}{% endfor %}{% if paragraph_blockname %} {{ paragraph_blockname }}__{{ paragraph_base_class_var }}{% endif %}">
  {% block paragraph_content %}
    {{ paragraph_content }}
  {% endblock %}
</p>

Emulsify 2.x:

<p {{ bem('paragraph', ['large', 'red']) }}>
  {% block paragraph_content %}
    {{ paragraph_content }}
  {% endblock %}
</p>

In both Pattern Lab and Drupal, the function above will create <p class="paragraph paragraph--large paragraph--red">, but in Drupal it will use the equivalent of <p{{ attributes.addClass('paragraph paragraph--large paragraph--red') }}>, appending these classes to whatever classes core or other plugins provide as well. Simpler syntax + Drupal Attributes support!

We have released the BEM Twig function open source under the Drupal Pattern Lab initiative. It is in Emulsify 2.x by default, but we wanted other projects to be able to benefit from it as well.

Usage

The BEM Twig function accepts four arguments, only one of which is required.

Simple block name:
<h1 {{ bem('title') }}>

In Drupal and Pattern Lab, this will print:

<h1 class="title">

Block with modifiers (optional array allowing multiple modifiers):

<h1 {{ bem('title', ['small', 'red']) }}>

This creates:

<h1 class="title title--small title--red">

Element with modifiers and block name (optional):

<h1 {{ bem('title', ['small', 'red'], 'card') }}>

This creates:

<h1 class="card__title card__title--small card__title--red">

Element with block name, but no modifiers (optional):

<h1 {{ bem('title', '', 'card') }}>

This creates:

<h1 class="card__title">

Element with modifiers, block name and extra classes (optional, in case you need non-BEM classes):

<h1 {{ bem('title', ['small', 'red'], 'card', ['js-click', 'something-else']) }}>

This creates:

<h1 class="card__title card__title--small card__title--red js-click something-else">

Element with extra classes only (optional):

<h1 {{ bem('title', '', '', ['js-click']) }}>

This creates:

<h1 class="title js-click">

Ba da BEM, Ba da boom

With the new BEM Twig extension that we’ve added to Emulsify 2.x, you can easily deliver classes to Pattern Lab and Drupal, while keeping a nice, simple syntax. Thanks for following along! Make sure you check out the other posts in this series and their video tutorials as well!

[embedded content]

Thanks for following our Emulsify 2.x tutorials. Miss a post? Read the full series here.

Pt 1: Installing Emulsify | Pt 2: Creating your Emulsify 2.0 Starter Kit with Drush | Pt 3: BEM Twig Function | Pt 4: DRY Twig Approach | Pt 5: Building a Full Site Header in Drupal

Just need the videos? Watch them all on our channel.

Web Chef Evan Willhite

Evan Willhite is a frontend engineer at Four Kitchens who thrives on creating delightful digital experiences for users, clients, and fellow engineers. He enjoys running, hot chicken, playing music, and being a homebody with his family.

October 13th, 2017

Welcome to the second episode in our new video series for Emulsify. Emulsify 2.x is a new release that embodies our commitment to component-driven design within Drupal. We’ve added Composer and Drush support, as well as open-source Twig functions and many other changes to increase ease-of-use.

In this video, we’re going to teach you how to create an Emulsify 2.0 starter kit with Drush. This blog post follows the video closely, so you can skip ahead or repeat sections in the video by referring to the timestamps for each section.

PURPOSE [00:15]

This screencast will specifically cover the Emulsify Drush command. The command’s purpose is to set up a new copy of the Emulsify theme.

Note: I used the word “copy” here and not “subtheme” intentionally. This is because your new copy is a subtheme of Drupal Core’s Stable theme, NOT of Emulsify.

This new copy of Emulsify will use the human-readable name that you provide, and will build the necessary structure to get you on your way to developing a custom theme.

REQUIREMENTS [00:45]

Before we dig in too deep I recommend that you have the following installed first:

  • a Drupal 8 Core installation
  • the Drush CLI, at least major version 8
  • Node.js, preferably the latest stable version
  • a working copy of the Emulsify demo theme, 2.x or greater

If you haven’t already watched the Emulsify 2.0 composer install presentation, please stop this video and go watch that one.

Note: If you aren’t already using Drush 9, you should consider upgrading as soon as possible, because the next minor version release of Drupal Core, 8.4.0, is only going to work with Drush 9 or greater.

RECOMMENDATIONS [01:33]

We recommend that you use PHP 7 or greater, as you get some massive performance improvements for very little work.

We also recommend that you use composer to install Drupal and Emulsify. In fact, if you didn’t use Composer to install Emulsify—or at least run Composer install inside of Emulsify—you will get errors. You will also notice errors if npm install failed on the Emulsify demo theme installation.

AGENDA [02:06]

Now that we have everything set up and ready to go, this presentation will first discuss the theory behind the Drush script. Then we will show what you should expect if the installation was successful. After that, I will give you some links to additional resources.

BACKGROUND [02:25]

The general idea of the command is that it creates a new theme from Emulsify’s files but is actually based on Drupal Core’s Stable theme. Once you have run the command, the demo Emulsify theme is no longer required and you can uninstall it from your Drupal codebase.

WHEN, WHERE, and WHY? [02:44]

WHEN: You should run this command before writing any custom code but after your Drupal 8 site is working and Emulsify has been installed (via Composer).

WHERE: You should run the command from the Drupal root or use a Drush alias.

WHY: You should NOT edit the Emulsify theme’s files directly. If you installed Emulsify the recommended way (via Composer), the next time you run composer update, ALL of your custom code changes will be wiped out. If this happens, I really hope you are using version control.

HOW TO USE THE COMMAND? [03:24]

Arguments:

First, it requires a single argument: the human-readable name. This name can contain spaces and capital letters.

Options:

The command has defaults set for options that you can override.

The first is the theme description, which will appear within Drupal and in your .info file.

The second is the machine name; this is the option that allows you to pick the directory name and the machine name as it appears within Drupal.

The third option is the path; this is the path your theme will be installed to. It defaults to “themes/custom”, but if you don’t like that you can change it to any directory relative to your web root.

The fourth and final option is the slim option. This is for advanced users who don’t need demo content and don’t want anything but the bare minimum required to create a new theme.

Note:

Only the human_readable_name is required. The options don’t have to appear in any order, don’t have to appear at all, and you can pass just one if you only want to change a single default.
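
Putting the pieces together, a hypothetical invocation might look like the following (the option names reflect the description above; run drush help emulsify to confirm the exact spelling in your version):

# "My Awesome Theme" is the only required argument; every option overrides a default.
drush emulsify "My Awesome Theme" \
  --description="A custom Emulsify-based theme" \
  --machine-name=my_awesome_theme \
  --path=themes/custom \
  --slim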

SUCCESS [04:52]

If your new theme was successfully created, you should see the success output message. In the example below, I used the slim option because it is a bit faster to run, but again, this is an option and is NOT required.

The success message contains information you may find helpful, including the name of the theme that was created, the path where it was installed, and the next required step for setup.

THEME SETUP [05:25]

Now let’s set up your custom theme. Navigate to your custom theme on the command line and type yarn, then watch as Pattern Lab is downloaded and installed. If the installation was successful, you should see a Pattern Lab success message, and your theme should now be visible within Drupal.

COMPILING YOUR STYLE GUIDE [05:51]

Now that you have Pattern Lab successfully installed and committed to your version control system, you are probably eager to use it. Emulsify uses npm scripts to set up a local Pattern Lab instance for displaying your style guide.

The script you are interested in is yarn start. Run this command for all of your local development. You do NOT have to have a working Drupal installation at this point to do development on your components.

If you need a designer who isn’t familiar with Drupal to make some tweaks, you will only have to give them your code base, have them run yarn to install, and yarn start to see your style guide.

It is, however, recommended that the initial setup of your components be done by someone with background knowledge of Drupal templates and themes, as the variables passed to each component will be different for each Drupal template.

For more information on components and templates, keep an eye out for our soon-to-come demo components and screencasts on building components.

VIEWING YOUR STYLE GUIDE [07:05]

Now that you have run yarn start, you can open your browser and navigate to the localhost URL that appears in your console. If you get an error here, you might already have something running on port 3000. If you need to cancel this script, hit Control+C.

ADDITIONAL RESOURCES [07:24]

Thank you for watching today’s screencast, we hope you found this presentation informative and enjoy working with Emulsify 2.0. If you would like to search for some additional resources you can go to emulsify.info or github.com/fourkitchens/emulsify.

[embedded content]

Thanks for following our Emulsify 2.x tutorials. Miss a post? Read the full series here.

Pt 1: Installing Emulsify | Pt 2: Creating your Emulsify 2.0 Starter Kit with Drush | Pt 3: BEM Twig Function | Pt 4: DRY Twig Approach | Pt 5: Building a Full Site Header in Drupal

Just need the videos? Watch them all on our channel.

Web Chef Chris Martin

Chris Martin is a support engineer at Four Kitchens. When not maintaining websites he can be found building drones, computers, robots, and occasionally traveling to China.

October 5th, 2017

Welcome to the first episode in our new video series for Emulsify. Emulsify 2.x is a new release that embodies our commitment to component-driven design within Drupal. We’ve added Composer and Drush support, as well as open-source Twig functions and many other changes to increase ease-of-use.

In this video, we’re going to get you up and running with Emulsify. This blog post accompanies a tutorial video, which you can find embedded at the end.

Emulsify is, at its core, a prototyping tool. At Four Kitchens we also use it as a Drupal 8 theme starter kit. Depending on how you want to use it, the installation steps will vary. I’ll quickly go over how to install and use Emulsify as a standalone prototyping tool, then I’ll show you how we use it to theme Drupal 8 sites.

Emulsify Standalone

Installing Emulsify core as a standalone tool is a simple process with Composer and NPM (or Yarn).

  1. composer create-project fourkitchens/emulsify --stability dev --no-interaction emulsify
  2. cd emulsify
  3. yarn install (or npm install, if you don’t have yarn installed)

Once the installation process is complete, you can start it with either npm start or yarn start:

  1. yarn start

Once it’s up, you can use either the Local or External links to view the Pattern Lab instance in the browser. (The External link is useful for physical device testing, like on your phone or tablet, but can vary per machine. So, if you’re using hosted fonts, you might have to add a bunch of IPs to your account to accommodate all of your developers.)

The start process runs all of the build and watch commands. So once it’s up, all of your changes are instantly reflected in the browser.

I can add additional colors to the _color-vars.scss file, edit the card.yml example data, or even update the 01-card.twig file to modify the structure of the card component.

That’s really all there is to using Emulsify as a prototyping tool. You can quickly build out your components using component-driven design without having to have a full web server and site up and running.

Emulsify in a Composer-Based Drupal 8 Installation

It’s general best practice to install Drupal 8 via Composer, and that’s what we do at Four Kitchens. So, we’ve built Emulsify 2 to work great in that environment. I won’t cover the details of installing Drupal via Composer since that’s out of scope for this video, and there are videos that cover that already. Instead, I’ll quickly run through that process, and then come back and walk through the details of how to install Emulsify in a Composer-based Drupal 8 site.

Okay, I’ve got a fresh Drupal 8 site installed. Let’s install Emulsify alongside it.

From the project root, we’ll run the composer require command:

  • composer require fourkitchens/emulsify

Next, we’ll enable Emulsify and its dependencies:

  • cd web
  • drush en emulsify components unified_twig_ext -y

At this point, we highly recommend you use the Drush script that comes with Emulsify to create a custom clone of Emulsify for your actual production site. The reason is that any change you make to Emulsify core will be overwritten when you update Emulsify, and there’s currently no good way to create a child theme of a component-based, Pattern Lab-powered Drupal theme. So the Drush script simply creates a clone of Emulsify and turns the file-renaming process into a simple script.

We have another video covering the Drush script, so definitely watch that for all of the details. For this video, though, I’ll just use Emulsify core, since I’m not going to make any customizations.

  • cd web/themes/contrib/emulsify/ (If you do create a clone with the drush script, you’ll cd web/themes/custom/THEME_NAME/)
  • yarn install

  • yarn start

Now we have our Pattern Lab instance up and running, accessible at the links provided.

We can also head over to the “Appearance” page on our site, and set our theme as the default. When we do that, and go back to the homepage, it looks all boring and gray, but that’s just because we haven’t started doing any actual theming yet.

At this point, the theme is installed, and you’re ready to create your components and make your site look beautiful!

[embedded content]

Thanks for following our Emulsify 2.x tutorials. Miss a post? Read the full series here.

Pt 1: Installing Emulsify | Pt 2: Creating your Emulsify 2.0 Starter Kit with Drush | Pt 3: BEM Twig Function | Pt 4: DRY Twig Approach | Pt 5: Building a Full Site Header in Drupal

Just need the videos? Watch them all on our channel.

Web Chef Brian Lewis

Brian Lewis is a frontend engineer at Four Kitchens, and is passionate about sharing knowledge and learning new tools and techniques.

July 13th, 2017

When creating the Global Academy for continuing Medical Education (GAME) site for Frontline, we had to tackle several complex problems around content migrations. The previous site had a lot of legacy content we had to bring over into the new system. By tackling each unique problem, we were able to migrate most of the content into the new Drupal 7 site.

Setting Up the New Site

The system Frontline used before the redesign was called Typo3, along with a suite of individual, internally-hosted ASP sites for conferences. Frontline had several kinds of content that displayed differently throughout the site. The complexity with handling the migration was that a lot of the content was in WYSIWYG fields that contained large amounts of custom HTML.

We decided to go with Drupal 7 for this project so we could more easily use code that was created from the MDEdge.com site.

The GAME website redesign greatly improved the flow of the content and how it was displayed on the frontend, and part of that improvement was displaying specific pieces of content in different sections of the page. The burning question that plagued us when tackling this problem was “How are we going to extract the specific pieces of data and get them inserted into the correct fields in Drupal?”

Before we could get deep into the code, we had to do some planning and setup to make sure we were clear in how to best handle the different types of content. This also included hammering out the content model. Once we got to a spot where we could start migrating content, we decided to use the Migrate module. We grabbed the current site files, images and database and put them into a central location outside of the current site that we could easily access. This would allow us to re-run these migrations even after the site launched (if we needed to)!

Migrating Articles

This content on the new site is connected to MDEdge.com via a REST API. One complication is that the content on GAME was added manually to Typo3, and wasn’t tagged for use with specific fields. The content type on the new Drupal site had a few fields for the data we were displaying, and a field that stores the article ID from MDEdge.com. To get that ID for this migration, we mapped the title of each news article in Typo3 to the title of the article on MDEdge.com. It wasn’t a perfect solution, but it allowed us to do an initial migration of the data.

Conferences Migration

For GAME’s conferences, since there were not too many on the site, we decided to import the main conference data via a Google spreadsheet. The Google doc was a fairly simple spreadsheet that contained a column we used to identify each row in the migration, plus a column for each field that is in that conference’s content type. This worked out well because most of the content in the redesign was new for this content type. This approach allowed the client to start adding content before the content types or migrations were fully built.

Our spreadsheet handled the top level conference data, but it did not handle the pages attached to each conference. Page content was either stored in the Typo3 data or we needed to extract the HTML from the ASP sites.

Typo3 Categories to Drupal Taxonomies

To make sure we mapped the content in the migrations properly, we created another Google doc mapping file that connected the Typo3 categories to Drupal taxonomies. We set it up to support multiple taxonomy terms that could be mapped to one Typo3 category.
[NB: Here is some code that we used to help with the conversion: https://pastebin.com/aeUV81UX.]

Our mapping system worked out fantastically well. The only problem we encountered was that, since we were allowing three taxonomy terms to be mapped to one Typo3 category, the client noticed some cases where too many taxonomy terms were assigned to content that had more than one Typo3 category. But this was a content-related issue and required them to re-examine this document and tweak it as necessary.

Slaying the Beast: Extracting, Importing, and Redirecting

One of the larger problems we tackled was how to get the HTML from the Typo3 system and the ASP conference sites into the new Drupal 7 setup.

The ASP conference sites were handled by grabbing the HTML for each of those pages and extracting the page title, body, and photos. The migration of the conference sites was challenging because we were dealing with different HTML for different sites and trying to get all those differences matched up in Drupal.

Grabbing the data from the Typo3 sites presented another challenge because we had to figure out where the different data was stored in the database. This was a uniquely interesting process because we had to determine which tables were connected to which other tables in order to figure out the content relationships in the database.

A few things we learned in this process:

  • We found all of the content on the current site was in these tables (which are connected to each other): pages, tt_content, tt_news, tt_news_cat_mm and link_cache.
  • After talking with the client, we were able to grab content based on certain Typo3 categories or the pages hierarchy relationship. This helped fill in some of the gaps where a direct relationship could not be made by looking at the database.
  • It was clear that getting 100% of the legacy content wasn’t going to be realistic, mainly because of the loose content relationships in Typo3. After talking to the client we agreed to not migrate content older than a certain date.
  • It was also clear that—given how much HTML was in the content—some manual cleanup was going to be required.

Once we were able to get to the main HTML for the content, we had to figure out how to extract the specific pieces we needed from that HTML.

Once we had access to the data we needed, it was a matter of getting it into Drupal. The Migrate module made a lot of this fairly easy with how much functionality it provides out of the box. We ended up using the prepareRow() method a lot to grab specific pieces of content and assign them to Drupal fields.
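
As a rough sketch of what that looked like (the class, field, and property names here are invented for illustration; the prepareRow() pattern is the Drupal 7 Migrate API’s):

<?php

class GameConferencePageMigration extends Migration {

  public function prepareRow($row) {
    // Let the parent class decide first whether the row should be skipped.
    if (parent::prepareRow($row) === FALSE) {
      return FALSE;
    }

    // Pull one specific piece out of the legacy HTML blob and expose it
    // as a new source property that field mappings can use.
    $doc = new DOMDocument();
    @$doc->loadHTML($row->legacy_html);
    $headings = $doc->getElementsByTagName('h1');
    if ($headings->length) {
      $row->extracted_title = trim($headings->item(0)->textContent);
    }

    return TRUE;
  }

}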

Handling Redirects

We wanted to handle as many of the redirects as we could automatically, so the client wouldn’t have to add thousands of redirects and to ensure existing links would continue to work after the new site launched. To do this we mapped the unique row in the Typo3 database to the unique ID we were storing in the custom migration.

As long as you are handling the unique IDs properly in your use of the Migration API, this is a great way to handle mapping what was migrated to the data in Drupal. You use the unique identifier stored for each migration row and grab the corresponding node ID to get the correct URL that should be loaded. Below are some sample queries we used to get access to the migrated nodes in the system. We used UNION queries because the content that was imported from the legacy system could be in any of these tables.

SELECT destid1 FROM migrate_map_cmeactivitynode WHERE sourceid1 IN(:sourceid)
UNION
SELECT destid1 FROM migrate_map_cmeactivitycontentnode WHERE sourceid1 IN(:sourceid)
UNION
SELECT destid1 FROM migrate_map_conferencepagetypo3node WHERE sourceid1 IN(:sourceid)
…
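
From there, a hypothetical Drupal 7 lookup for a single legacy ID might look like this (the map table names come from the queries above; the redirect wiring around it is omitted):

<?php

// Resolve a legacy Typo3 row ID to the migrated node and redirect to it.
$nid = db_query("
  SELECT destid1 FROM migrate_map_cmeactivitynode WHERE sourceid1 = :id1
  UNION
  SELECT destid1 FROM migrate_map_cmeactivitycontentnode WHERE sourceid1 = :id2
", array(':id1' => $legacy_id, ':id2' => $legacy_id))->fetchField();

if ($nid) {
  drupal_goto('node/' . $nid, array(), 301);
}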

Wrap Up

Migrating complex websites is rarely simple. One thing we learned on this project is that it is best to jump deep into migrations early in the project lifecycle, so the big roadblocks can be identified as early as possible. It also is best to give the client as much time as possible to work through any content cleanup issues that may be required.

We used a lot of Google spreadsheets to get needed information from the client. This made things much simpler on all fronts and allowed the client to start gathering needed content much sooner in the development process.

In a perfect world, all content would be easily migrated over without any problems, but this usually doesn’t happen. It can be difficult to know when you have taken a migration “far enough” and are better off jumping onto other things. This is where early communication with the full team is vital to keeping migration issues from taking over a project.

Web Chef Chris Roane

When not breaking down and solving complex problems as quickly as possible, Chris volunteers for a local theater called Arthouse Cinema & Pub.

June 29th, 2017

Recently I was working on a Drupal 8 project where we were using the improved Features module to create configuration container modules with some special purposes. Due to client architectural needs, we had to move the /features folder into a separate repository. We basically needed to make it available to many sites in a way that let us keep doing active development on it, and we did so by making the new repo a Composer dependency of all our projects.

One of the downsides of this new direction was the effect on CircleCI builds for individual projects, since installing and reverting features was an important part of them. For example, to make a new feature module available, we’d push it to this ‘shared’ repo, but to actually enable it we’d need to push the corresponding change in the core.extension.yml config file to our project repo. Yes, we were using a mixed approach: both features and conventional configuration management.

So a new pull request would be created in both repositories. The problem for Circle builds—given the approach previously outlined—is that builds generated for the pull request in the project repository would require the master branch of the ‘shared’ one. So, for the pull request in the project repo, we’d try to build a site by importing configuration that says a particular feature module should be enabled, and that module wouldn’t exist (likely not present in shared master at that time, still a pull request), so it would totally crash.

There is probably no straightforward way to solve this problem, but we came up with a solution that is half code, half strategy. Beyond technical details, there is no practical way to determine which branch of the shared repo should be required for a pull request in the project repo, unless we assume conventions. In our case, we assumed that the correct branch to pair with a project branch was one named the same way. So if a build was the result of a pull request from branch X, we could try to find a PR from branch X in the shared repo, and if it existed, that’d be our guy. Otherwise we’d keep pulling master.

So we created a script to do that:

<?php

$branch = $argv[1];
$github_token = $argv[2];
$github_user = $argv[3];
$project_user = $argv[4];

$shared_repos = array(
  'organization/shared',
);

foreach ($shared_repos as $repo) {
  print_r("Checking repo $repo for a pull request in a '$branch' branch...\n");
  $pr = getPRObjectFromBranch($branch, $github_token, $github_user, $project_user, $repo);
  if (!empty($pr)) {
    print_r("Found. Requiring...\n");
    exec("composer require $repo:dev-$branch");
    print_r("$repo:dev-$branch pulled.\n");
  }
  else {
    print_r("Nothing found.\n");
  }
}

function getPRObjectFromBranch($branch_name, $github_token, $github_user, $project_user, $repo) {
  $ch = curl_init();
  curl_setopt($ch, CURLOPT_URL, "https://api.github.com/repos/$repo/pulls?head=$project_user:$branch_name");
  curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
  curl_setopt($ch, CURLOPT_USERPWD, "$github_user:$github_token");
  curl_setopt($ch, CURLOPT_USERAGENT, "$github_user");
  $output = json_decode(curl_exec($ch), TRUE);
  curl_close($ch);
  return $output;
}

As you probably know, Circle builds are connected to the internet, so you can make remote requests. What we're doing here is using the GitHub API in the middle of a build in the project repo to connect to our shared repo with cURL and try to find a pull request whose branch name matches the one we're building over. If the request returns something, we can safely say there is a branch with the same name as the current one, with an open pull request in the shared repo, and we can require it.

What’s left for this to work is actually calling the script:

- php scripts/require_feature_branch.php "$CIRCLE_BRANCH" "$GITHUB_TOKEN" "$CIRCLE_USERNAME" "$CIRCLE_PROJECT_USERNAME"

We can do this at any point in circle.yml, since composer require will actually update the composer.json file, so any composer interaction after the script runs will take your requirement into consideration. Notice that the shared repo will be required twice if you also have the requirement in your composer.json file. You could safely remove it from there if you make the script require the master branch whenever no matching branch is found, although this could have unintended effects in other types of environments, such as local development.
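As a sketch, assuming the CircleCI 1.0 circle.yml format in use at the time, the call could live in the dependencies phase, before any composer install runs:

dependencies:
  pre:
    - php scripts/require_feature_branch.php "$CIRCLE_BRANCH" "$GITHUB_TOKEN" "$CIRCLE_USERNAME" "$CIRCLE_PROJECT_USERNAME"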

Note: A quick reference about the parameters passed to the script:

$GITHUB_TOKEN: # Generate from https://github.com/settings/tokens
$CIRCLE_*:     # CircleCI variables, automatically available

[Editor’s Note: The post “Running CircleCI Builds Based on Many Repositories” was originally published on Joel Travieso’s Medium blog.]

Joel Travieso

Joel focuses on the backend and architecture of web projects, seeking to constantly improve by keeping up with the latest developments in the field.

Development

Blog posts about backend engineering, frontend code work, programming tricks and tips, systems architecture, apps, APIs, microservices, and the technical side of Four Kitchens.

Read more Development
May 31, 2017

In the last post, we created a nested accordion component within Pattern Lab. In this post, we will walk through the basics of integrating this component into Drupal.

Requirements

Even though Emulsify is a ready-made Drupal 8 theme, there are some requirements and background to be aware of when using it.

Emulsify is currently meant to be used as a starterkit. In contrast to a base theme, a starterkit is simply enabled as-is, and tweaked to meet your needs. This is purposeful—your components should match your design requirements, so you should edit/delete example components as needed.

There is currently a dependency for Drupal theming, which is the Components module. This module allows one to define custom namespaces outside of the expected theme /templates directory. Emulsify comes with predefined namespaces for the atomic design directories in Pattern Lab (atoms, molecules, organisms, etc.). Even if you’re not 100% clear currently on what this module does, just know all you have to do is enable the Emulsify theme and the Components module and you’re off to the races.
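For a rough idea of what those predefined namespaces look like, here is a sketch of the sort of mapping the Components module reads from the theme's emulsify.info.yml file (paths are illustrative; check the theme itself for the exact definitions):

component-libraries:
  atoms:
    paths:
      - components/_patterns/01-atoms
  molecules:
    paths:
      - components/_patterns/02-molecules
  organisms:
    paths:
      - components/_patterns/03-organisms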

Components in Drupal

In our last post we built an accordion component. Let's now integrate this component into our Drupal site. It's important to understand what individual components you will be working with. For our purposes, we have two: an accordion item (<dt>, <dd>) and an accordion list (<dl>). Note that these will also correspond to two separate Drupal template files. Although this could be built in Drupal in a variety of ways, in the example below each accordion item will be a node and the accordion list will be a view.

Accordion Item

You will first want to create an Accordion content type (machine name: accordion), and we will use the title as the <dt> and the body as the <dd>. Once you’ve done this (and added some Accordion content items), let’s add our node template Twig file for the accordion item by duplicating templates/content/node.html.twig into templates/content/node--accordion.html.twig. In place of the default include function in that file, place the following:

{% include "@molecules/accordion-item/accordion-item.twig"
   with {
      "accordion_term": label,
      "accordion_def": content.body,
   }
%}

As you can see, this is a direct copy of the include statement in our accordion component file, except the variables have been replaced. Makes sense, right? We want Drupal to replace those static variables with its dynamic ones, in this case label (the node title) and content.body. If you visit your accordion node in the browser (note: you will need to rebuild the cache when adding new template files), you will now see your styled accordion item!

But something’s missing, right? When you click on the title, the body field should collapse, which comes from our JavaScript functionality. While JavaScript in the Pattern Lab component will automatically work because Emulsify compiles it to a single file loaded for all components, we want to use Drupal’s built-in aggregation mechanisms for adding JavaScript responsibly. To do so, we need to add a library to the theme. This means adding the following code into emulsify.libraries.yml:

accordion:
  js:
    components/_patterns/02-molecules/accordion-item/accordion-item.js: {}

Once you’ve done that and rebuilt the cache, you can now use the following snippet in any Drupal Twig file to load that library [NB: read more about attach_library]:

{{ attach_library('emulsify/accordion') }}

So, once you've added that function to your node--accordion.html.twig file, you should have a working accordion item. Not only does this function load your accordion JavaScript, but it does so in a way that only loads it when that Twig file is used, and it also takes advantage of Drupal's JavaScript aggregation system. Win-win!
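Putting the two snippets together, a minimal sketch of the resulting node--accordion.html.twig might look like this (simplified; the file you duplicated from node.html.twig will have extra wrapper markup around the include):

{{ attach_library('emulsify/accordion') }}

{% include "@molecules/accordion-item/accordion-item.twig"
   with {
      "accordion_term": label,
      "accordion_def": content.body,
   }
%}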

Accordion List

So, now that our individual accordion item works as it should, let’s build our accordion list. For this, I’ve created a view called Accordion (machine name: accordion) that shows “Content of Type: Accordion” and a page display that shows an unformatted list of full posts.

Now that the view has been created, let’s copy views-view-unformatted.html.twig from our parent Stable theme (/core/themes/stable/templates/views) and rename it views-view-unformatted--accordion.html.twig. Inside of that file, we will write our include statement for the accordion <dl> component. But before we do that, we need to make a key change to that component file. If you go back to the contents of that file, you’ll notice that it has a for loop built to pass in Pattern Lab data and nest the accordion items themselves:

<dl class="accordion-item">
  {% for listItem in listItems.four %}
    {% include "@molecules/accordion-item/accordion-item.twig"
      with {
        "accordion_item": listItem.headline.short,
        "accordion_def": listItem.excerpt.long
      }
    %}
  {% endfor %}
</dl>

In Drupal, we don't want to iterate over this static list; all we need to do is provide a single variable for the Views rows to be passed into. Let's tweak our code a bit to allow for that:

<dl class="accordion-item">
  {% if drupal == true %}
    {{ accordion_items }}
  {% else %}
    {% for listItem in listItems.four %}
      {% include "@molecules/accordion-item/accordion-item.twig"
        with {
          "accordion_term": listItem.headline.short,
          "accordion_def": listItem.excerpt.long
        }
      %}
    {% endfor %}
  {% endif %}
</dl>

You’ll notice that we’ve added an if statement to check whether “drupal” is true—this variable can actually be anything Pattern Lab doesn’t recognize (see the next code snippet). Finally, in views-view-unformatted--accordion.html.twig let’s put the following:

{% set drupal = true %}
{% include "@organisms/accordion/accordion.twig"
  with {
    "accordion_items": rows,
  }
%}

At the view level, all we need is this outer <dl> wrapper and to just pass in our Views rows (which will contain our already component-ized nodes). Rebuild the cache, visit your view page and voila! You now have a fully working accordion!

Conclusion

We have now not only created a more complex nested component that uses JavaScript… we have done it in Drupal! Your HTML, CSS and JavaScript are where they belong (in the components themselves), and you are merely passing Drupal’s dynamic data into those files.

There’s definitely a lot more to learn; below is a list of posts and webinars to continue your education and get involved in the future of component-driven development and our tool, Emulsify.

Evan Willhite

Evan Willhite is a frontend engineer at Four Kitchens who thrives on creating delightful digital experiences for users, clients, and fellow engineers. He enjoys running, hot chicken, playing music, and being a homebody with his family.

May 24, 2017

In the last post, we introduced Emulsify and spoke a little about the history that went into its creation. In this post, we will walk through the basics of Emulsify to get you building lovely, organized components automatically added to Pattern Lab.

Prototyping

Emulsify is at its most basic level a prototyping tool. Assuming you've met the requirements and have installed Emulsify, running the tool is as simple as navigating to the project directory and running `npm start`. This task takes care of building your Pattern Lab website, compiling Sass to minified CSS, and linting and minifying JavaScript.

Also, this single command will start a watch task and open your Pattern Lab instance automatically in a browser. So now when you save a file, it will run the appropriate task and refresh the browser to show your latest changes. In other words, it is an end-to-end prototyping tool meant to allow a developer to start creating components quickly with a solid backbone of automation.

Component-Based Theming

Emulsify, like Pattern Lab, expects the developer to use a component-based building approach. This approach is elegantly simple: write your DRY components, including their Sass and JavaScript, in a single directory. Automation takes care of compiling the Sass into a single CSS file and the JavaScript into a single JavaScript file so the functionality can be viewed in Pattern Lab.
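For example, the accordion-item component we'll look at below lives in a single directory along these lines (the file listing is illustrative):

components/_patterns/02-molecules/accordion-item/
  accordion-item.twig
  accordion-item.scss
  accordion-item.js
  accordion-item.yml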

Because Emulsify leverages the Twig templating engine, you can build each component's HTML (Twig) file and then use the Twig functions include, embed, and extends to combine components into full-scale layouts. Sound confusing? No need to worry—there are multiple examples pre-built in Emulsify. Let's take a look at one below.

Simple Accordion

Below is a simple but common user experience—the accordion. Let’s look at the markup for a single FAQ accordion item component:

<dt class="accordion-item__term">What is Emulsify?</dt>
<dd class="accordion-item__def">A Pattern Lab prototyping tool and Drupal 8 base theme.</dd>

If you look in the components/_patterns/02-molecules/accordion-item directory, you’ll find this Twig file as well as the CSS and JavaScript files that provide the default styling and open/close functionality respectively. (You’ll also see a YAML file, which is used to provide data for the component in Pattern Lab.)
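For instance, that data file might supply values along these lines in a hypothetical accordion-item.yml (the values shipped with Emulsify may differ):

accordion_term: "What is Emulsify?"
accordion_def: "A Pattern Lab prototyping tool and Drupal 8 base theme."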

But an accordion typically has multiple items, and an HTML definition list should have a dl wrapper, right? Let's take a look at the emulsify/components/_patterns/03-organisms/accordion/accordion.twig markup:

<dl class="accordion-item">
  {% for listItem in listItems.four %}
    {% include "@molecules/accordion-item/accordion-item.twig"
      with {
        "accordion_item": listItem.headline.short,
        "accordion_def": listItem.excerpt.long
      }
    %}
  {% endfor %}
</dl>

Here you can see that the only HTML added is the dl wrapper. Inside of that, we have a Twig for loop that will loop through our list items and for each one include our single accordion item component above. The rest of the component syntax is Pattern Lab specific (e.g., listItems, headline.short, excerpt.long).

Conclusion

If you are following along in your own local Emulsify installation, you can view this accordion in action inside your Pattern Lab installation. With this example, we’ve introduced not only the basics of component-based theming, but we’ve also seen an example of inheriting templates using the Twig include function. Using this example as well as the other pre-built components in Emulsify, we have what we need to start prototyping!

In the next article, we’ll dive into how to implement Emulsify as a Drupal 8 theme and start building a component-based Drupal 8 project. You can also view a recording of a webinar we made in March. Until then, see you next week!

Evan Willhite

May 17, 2017

Shared Principles

There is no question that the frontend space has exploded in the past decade, having gone from the seemingly novice aspect of web development to a first-class specialization. At the smaller agency level, being a frontend engineer typically involves a balancing act between a general knowledge of web development and keeping up with frontend best practices. This makes it all the more important for agency frontend teams to take a step back and determine some shared principles. We at Four Kitchens did this through late last summer and into fall, and here’s what we came up with. A system working from shared principles must be:

1. Backend Agnostic

Even within Four Kitchens, we build websites and applications using a variety of backend languages and database structures, and this is only a microcosm of the massive diversity in modern web development. Our frontend team strives to choose and build tools that are portable between backend systems. Not only is this a smart goal internally, it's also an important deliverable for our clients.

2. Modular

It seems to me the frontend community has spent the past few years trying to find ways to incorporate best practices that have a rich history in backend programming languages. We’ve realized we, too, need to be able to build code structures that can scale without brittleness or bloat. For this reason, the Four Kitchens frontend team has rallied around component-based theming and approaches like BEM syntax. Put simply, we want the UI pieces we build to be as portable as the structure itself: flexible, removable, DRY.

3. Easy to Learn

Because we are aiming to build tools that aren't married to backend systems and are modular, they should in turn be much more approachable. We want to build tools that help a frontend engineer working in any language to quickly build logically organized, component-based prototypes with little ramp-up.

4. Open Source

Four Kitchens has been devoted to the culture of open-source software from the beginning, and we as a frontend team want to continue that commitment by leveraging and building tools that do the same.

Introducing Emulsify

Knowing all this, we are proud to introduce Emulsify—a Pattern Lab prototyping tool and Drupal 8 starterkit theme. Wait… Drupal 8 starterkit you say? What happened to backend agnostic? Well, we still build a lot in Drupal, and the overhead of it being a starterkit theme is tiny and unintrusive to the prototyping process. More on this in the next post.
[NB: Check back next week for our next Emulsify post!]

With these shared values, we knew we had enough of a foundation to build a tool that would both hold us accountable to these values and help instill them as we grow and onboard new developers. We are also excited about the flexibility this opens up in our process: a prototyping tool that allows any frontend engineer, with knowledge of any backend system (or none), to focus on building a great UI for a project.

Next in the series, we’ll go through the basics of Emulsify and explain its out-of-the-box strengths that will get you prototyping in Pattern Lab and/or creating a Drupal 8 theme quickly.

Evan Willhite

March 7, 2017

This weekend's DrupalCamp London wasn't my first Drupal event at all: I've been to three DrupalCon Europe events, four DrupalCamp Dublins, and a few other DrupalCamps in Ireland, as well as lots of meetups. But this time I experienced a lot of 'first times' that I want to share.

This was the first time I’d attended a Drupal event representing a sponsor organisation, and as a result the way I experienced it was completely different.

Firstly, you focus more on your company's goals rather than your personal aims. In this case I was helping Capgemini UK to engage and recruit people for our open positions. This pushed me to socialise more and try to connect with people. We also had T-shirts, and it's easier to attract people when you have something free for them. I was also able to have conversations with other sponsors to see why they sponsored the event: some were also recruiting, but most of them were selling their solutions to prospective clients, Drupal developers, and agencies.

The best part of this experience was the people I met at other companies, and the attendees approaching us for a T-shirt or a job opportunity.

New member of Capgemini UK perspective

As a new joiner in the Capgemini UK Drupal team, I attended this event when I wasn't even a month into the company, and I am glad I could attend at such short notice in my new position. I think this says a lot about Capgemini's focus on training and career development, and how much they care about Drupal.

As a new employee, I was able to meet colleagues from different departments and teams in a non-working environment. Again, the best part of this experience was the people I met and the relationships I made.

I joined Capgemini from Ireland, so I was also new to the London Drupal community, and the DrupalCamp gave me the opportunity to connect and build relationships with its members. Of course they were busy organising this great event, but I was able to talk to some of them, and I have to say they were very friendly whenever I approached the crew or other local members attending the event. I am very happy to have met such friendly people, and I am committed to helping and volunteering my time at future events, so this was a very good starting point. And again, the best part was the people I met.

Non-session perspective

As I had other duties I couldn't attend all the sessions, but I did make it to a few, plus the keynotes. Special mention goes to the Saturday keynote from Matt Glaman: it was very motivational, and made me think that anyone can evolve as a developer if they try and seek out the resources to gain the knowledge. The closing keynote from Danese Cooper was very inspirational as well, about what open source is and what it should be, and about how we, the developers, have the power to make it happen. We could also enjoy Malcom Young's presentation about Code Reviews.

Conclusion

Closing this article, I would like to come back to the best part of DrupalCamp for me this year: the people. They are always the best part of social events. I was able to catch up with old friends from Ireland, engage with people considering a position at Capgemini, and introduce myself to the London Drupal community, so overall I am very happy with this DrupalCamp London and will be happy to return next year. In the meantime I will be attending some Drupal meetups and trying to get involved in the community, so don't hesitate to contact me if you have any questions or need my help.

March 2, 2017

You might have heard about high availability before but didn’t think your site was large enough to handle the extra architecture or overhead. I would like to encourage you to think again and be creative.

Background

Digital Ocean has a concept they call Floating IPs. A Floating IP is an IP address that can be instantly moved from one Droplet to another Droplet in the same data center. This idea is great: it allows you to keep your site running in the event of a failure.

Credit

I have to give credit to BlackMesh for handling this process quite well. The only thing I had to do was create the tickets describing the architecture change, and BlackMesh implemented it.

Exact Problem

One of our support clients needed a complete site relaunch due to a major overhaul in the underlying architecture of their code. Specifically, they had the following changes:

  1. A change in the site docroot
  2. A migration from a single-site architecture to a multisite architecture based on Domain Access
  3. A PHP version upgrade that required a server replacement and an upgrade of the Linux distribution

Any of these changes individually could have benefited from this approach; we just bundled them all together to deliver minimal downtime to the site's users.

Solution

So, what is the right solution for a data migration that takes well over 3 hours to run? Site downtime for hours during peak traffic is unacceptable. The answer we came up with was to use a floating IP that could easily point to a different backend server when we were ready to flip the switch. This allowed us to migrate our data on a new, separate server using its own database (essentially having two live servers at the same time).

Benefits

Notice that we didn't need to change any DNS records, which meant we didn't have to wait for them to propagate all over the internet. The new site was live instantly.

Additional Details

Some other notes during the transition that may lead to separate blog posts:

  1. We created a shell script to handle the actual deployment and tested it before the “go live” date to minimize surprises.
  2. A private network was created to allow the servers to communicate with each other directly, behind the scenes.
  3. To complicate the process, the user base grew so much during development (prelaunch) that we had to offload Solr onto a separate machine to reduce CPU usage. This meant additional backend servers were also involved in the transition.

Go-Live (Migration Complete)

After you have completed your deployment process, you are ready to switch the floating IP to the new server. In our case we were using keepalived, which responds to a health check on the server. Our health check was a simple PHP file that responded with the text "true" or "false". So, when we were ready to switch, we just changed the health check's response to false, and we got an instant switch from the old server to the new server with minimal interruption.
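As a minimal sketch, assuming a hypothetical health.php polled by keepalived, the health check might look like this:

<?php

// Hypothetical health check endpoint polled by keepalived.
// Returning "false" marks this server unhealthy, which triggers
// the floating IP to move to the other server.
$healthy = TRUE; // Flip to FALSE when ready to cut over.

header('Content-Type: text/plain');
print $healthy ? 'true' : 'false';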

Acceptable Losses

There were a few things we couldn’t get around:

  1. The need for a content freeze
  2. The need for a user registration freeze

The reason for this was that the database updates required the site to be in maintenance mode while they were performed.

A problem worth mentioning:

  1. The database had a few tables where we had to accept losses. The users' sessions table and the cache_form table were both out of sync when we switched over, so any new sessions and saved forms were unfortunately lost during this process. As a result, users had to log in again and fill out forms that hadn't been submitted. In the rare event that a user had changed their name or other fields on their preferences page, those changes were lost.

Additional Considerations

  1. Our mail preferences are handled by third parties
  2. Comments aren’t allowed on this site

Chris Martin

Chris Martin is a junior engineer at Four Kitchens. When not maintaining websites he can be found building drones, computers, robots, and occasionally traveling to China.

