Sep 21 2021

Whoever said "comparison is the death of joy" was onto something. Comparing ourselves to others can create all kinds of problems, whether we think we are worse, better, or equal. Most of us probably know to avoid comparisons, and yet we can't seem to help ourselves. We do it in our personal lives and in professional settings.

This article focuses on three attempts to compare people (and organizations) in the Drupal community: Certified to Rock, DrupalCores, and the Drupal Marketplace page. It explores how these methods were useful, where they might have been lacking, and what we can learn from related academic research into leaderboards.

Rockstars Are Cool Because They Have Lots of Fans

CertifiedToRock.com, which launched on April 19, 2010, rated Drupal.org users on a 0-11 scale. This site used a secret algorithm to assign value to individuals in the Drupal community. People could speculate what activities would affect a person's score, but for the most part, the algorithm was secret. Finger-picking skills on the guitar almost certainly did not factor in, but contributing a Drupal module probably did.

The name might suggest this was a joke, but it was not. Quite the contrary, this site claimed to be "serious about certification."

Indeed, many people in the Drupal community took these scores seriously. For instance, a decade ago, Lullabot's application process asked developers to provide their Certified to Rock score. Seth Brown, who is now the CEO of Lullabot, explained why in a 2011 blog post:

"Our application form asks for a developer's Certified to Rock score; it's a web application that attempts to track each Drupal.org user's activity in issue queues, their activity with local user groups, how long they've been participating on Drupal.org, and so on: the very things we look at when we're examining how engaged with Drupal an applicant is. Although it obviously can't tell us everything about an applicant's involvement with Drupal, a reasonable CTR score immediately tells us they are involved. In my experience, anything above a 2 or 3 is a great sign."

The site claimed to offer an "answer to the question of certification for the Drupal community," perhaps a bit like how Acquia certification appeals to people who desire to "validate and promote" their Drupal skills. Certified to Rock served a role for some people involved in hiring decisions as well as developers who liked to think of themselves as rock stars.

Nevertheless, the site did not last. By 2013 the data had become stale, and the owners were looking to sell.

People over Profit

If Certified to Rock served a purpose, it likely helped businesses more than it helped people. People working in Drupal agencies might have occasionally uttered phrases like, "well, I'm not sure about this applicant, but they have a formidable CTR score." Less often said were phrases like, "this gal I know is really mean to other people, but I still consider her a friend because she has a Certified to Rock score of ELEVEN!" In other words, someone's score should not factor into their human relationships.

Proponents of secret evaluative systems believe these tools make more information public (anyone can look up another person's score), that they motivate people to get more involved, and that they provide important unbiased feedback. However, academic researchers who have looked into such systems have found they more likely become "engines of anxiety," eventually hurting both individuals and organizations.

The influential meditation teacher, Luang Por Sumedho, often tells his students that the kindest thing to do for the people they care about is to not create them. Meaning, do not create versions of people in your mind with which you compare the actual, real-life people. By letting go of preconceived ideas about people, we can interact with them in a more kind and human way. The same advice holds true for free software communities. Treating people — both yourself and others — according to a ranking is not especially friendly. Perhaps Certified to Rock did not persist because the Drupal community agreed.

DrupalCores, Leaderboards, the Leftovers

The DrupalCores project offered a slightly different method to feed people's desire to compare themselves to others by providing a ranked list of contributors. Hosted on GitHub and available under an open-source license, DrupalCores.com offered a seemingly unbiased list — "a very basic table of all contributors to Drupal 9 Core." The list could be filtered by contributor, company, or country. DrupalCores was essentially a "leaderboard." DrupalCores.com, like CertifiedToRock.com, no longer exists on the web.

Extensive research into leaderboards has revealed their pros and cons. Kraut and Resnick, for instance, demonstrated how leaderboards discourage contribution when "leaderboards elevate the top ten or twenty-five participants in populations of tens of thousands" and the "vast majority of members ... perceive that they have no chance of making the list" (50). DrupalCores seemingly avoids this pitfall by listing everyone, except that it omits just about everyone who doesn't contribute code.

It may seem like lists such as DrupalCores are harmless, but many academic researchers have been compiling evidence disputing this view. For his Princeton University Press book, The Tyranny of Metrics, Jerry Muller examined various ranking systems and found "that while they are a potentially valuable tool, the virtues of accountability metrics have been oversold, and their costs are often underappreciated."

Communities tend to measure what is easiest to measure, which often leads to problems. For instance, some people begin to fixate on the so-called unbiased systems and question their experience-based personal judgments. Others spend the majority of their time trying to game the system rather than contributing something of value.

While leaderboards can benefit some people, they should be used with caution.

The Storied History of the Drupal Marketplace Page

The final category of comparison for discussion is leaderboards that rank organizations rather than individuals, such as Drupal's Marketplace page (sometimes called the "Drupal Services" page and available at drupal.org/services), which lists and ranks organizations that provide Drupal services. This page has been well-used for over 15 years, and the Drupal community has generally accepted it as a useful tool.

The "Drupal Services" page was not always a leaderboard. In 2005 it listed fewer than a dozen "individuals and companies that have contributed to Drupal," presumably with the people who contributed the most at the top of the list.

By the 2010s, the "Drupal Services" section of the "Marketplace" page on Drupal.org listed Drupal Services providers alphabetically. It stated so near the top of the page: "Please note that the directory is sorted alphabetically, and not in an order that would imply preference or relative importance of one listing over others." Yet another claim of non-bias.

By the latter half of 2012, the "Drupal Services" page had added a separate category for "Featured providers," though it still offered an alphabetical listing.

Drupal's issue credit system launched in July 2015, and by the fall of 2015, the Drupal Services page officially became a leaderboard. From 2006 to 2015, the Canadian Drupal consulting company 2bits.com had enjoyed the top spot on the Drupal Services page because it always came first alphabetically. In August 2015, Zyxware Technologies was on page 12, but by 2016 they were on page 1.

The credit system made the Drupal Services page seem more equitable because counting commit credits allowed for the sorting of organizations by contributions rather than alphabetically. As Dries Buytaert and I wrote in 2017:

"Credits are a powerful motivator for both individuals and organizations. Accumulating credits provides individuals with a way to showcase their expertise. Organizations can utilize credits to help recruit developers or to increase their visibility in the Drupal.org marketplace."

We used data from the credit system to produce a wide variety of lists and graphs, including lists such as "the top 30 contributors" (and Dries has updated the list annually). We felt that such lists provided a way to highlight the work of these prolific individuals as well as offer a lens through which to investigate the role of sponsorship in the Drupal community. However, the Drupal Association has intentionally avoided creating lists of the top individual contributors to the Drupal project, and there is wisdom in the choice to keep the focus on organizations.

The credit system marked a significant improvement, but it was not perfect. According to Campbell's Law, "The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor." Lots of people and companies found ways to game the system, and the contribution system has needed a few "tune-ups."

And then there are the sad stories about people in the Drupal community negatively affected by changes to the algorithm that determines the order on the Drupal Services page. Nobody in the Drupal community wants contributors to feel worse about their contributions to the project as a result of a leaderboard, but that was exactly the experience for some people. For instance, one change resulted in a company dropping from page 2 to page 6 in the list, and a long-time Drupal contributor who had worked hard to get his company to page 2 found that drop to be "incredibly demoralizing." Another well-known community member owned a company that dropped on the leaderboard, which led him to feel that his "contributions are now devalued." Unfortunately, these are the guaranteed results of any comparison system.

In reviewing the (unpublished) findings of Drupal's Contribution Recognition Committee — which interviewed the leaders from various companies and offered a survey about how the Drupal community recognizes contributions to the Drupal project — we found similar stories. People who filled out the survey and supplied their Drupal.org username received contribution credit (seems kind of meta, doesn't it?), so there were a lot of responses. Some respondents specifically called out "leaderboards," saying they "are not the way to go." Others clearly appreciated the attempts in the Drupal community to give credit appropriately. In fact, many people in other free software communities are envious of Drupal's contribution recognition system, and we are currently trying to port it to GitLab so that others can benefit from it.

We know that such systems can be valuable. Individuals and organizations that evaluate free software and open-source projects might like to know which organizations are involved in a project and the extent of their contributions. Drupal's Marketplace page can provide detailed information about the companies most involved in the project. In fact, the lessons we've learned in the Drupal community have directly influenced a new "Contribution Attribution" candidate metric in the Community Health Analytics Open Source Software (CHAOSS) project, an organization focused on creating analytics and metrics to help define community health. In other words, leaderboards can be used wisely to help understand and grow a project in a healthy way.

When we look beyond the world of open-source and free software, ample evidence suggests that something like Drupal's credit system must be used with caution. For instance, Wendy Nelson Espeland and Michael Sauder's research into the effects of the U.S. News and World Report rankings on law schools revealed some problems with leaderboards. Espeland and Sauder found law school administrators who consistently felt pressure to choose between what is good for the law school and what is good for rankings. They encountered administrators who lost their job after a small decline in rankings. Schools offered scholarships to certain populations, altered their definitions of what it means to be "employed," and engaged in other activities to "game" the system. While some of these activities may sound foreign to the Drupal community, there are a wide variety of people who have noticed attempts to game Drupal's credit system, and the Drupal Association has had to monitor these attempts to game the algorithm.

To Err Is Human, to Blame Someone Else Shows Management Potential

We should not conclude that something was inherently wrong about these attempts to compare people in the Drupal community. They all appear to have been rooted in a genuine desire to help. Let's assume positive motivation. We also must be mindful that comparison, though it may be a natural human tendency, must be undertaken with care.

Thich Nhat Hanh, the peace activist Martin Luther King Jr. nominated for the Nobel Peace Prize, believes, "Where there is perception, there is deception." All attempts to single out people for their hard work in the Drupal community necessarily divide the community and risk alienating others. This is not, however, a radical proposal that the Drupal community abandon all tools that single out individuals who contribute to the project, such as MAINTAINERS.txt or the Aaron Winborn Award.

Rather, we should continue to proceed carefully in our stewardship of the Drupal Marketplace page while also exercising a healthy level of skepticism the next time someone creates a leaderboard or related system that claims to provide an unbiased measure of individuals in the Drupal community.

If you are interested in leaderboards, Matthew Tift will be speaking about them on September 29 at an Open Source Summit panel entitled "Contributor Leaderboards to Incentivize Good Community Citizenship."

Sep 16 2021

It might feel like it's brand new, but Drupal 8 will reach its end-of-life on November 2, 2021. Matt and Mike get together with fellow Lullabots Cathy Theys, David Burns, and Matthew Tift who have each been involved with upgrading to Drupal 9 on various projects. They discuss what a site administrator should be doing about it now.

Sep 14 2021

Too many design handoffs are treated like a relay race. The designers hand off the baton to the developers and then go home because they consider their part of the race done. This can create uncertainty, which means developers have to make assumptions about how things should work and look. 

Many of these assumptions will be inaccurate, leading to frustration, mismatched expectations, and wasted time. The bigger the project, the more these problems manifest.

But design handoffs shouldn’t be like relay races. In fact, they shouldn’t really be “handoffs.” The very name suggests that the designers should now be “hands-off.”

DesignOps depends on a healthy workflow between designers and developers. How they work together cannot be an afterthought. It is one of the foundational requirements for a successful web project.

How do we ensure the process is painless? Let’s go over some pain points developers experience, and then we’ll discuss solutions along with the two things that support these solutions: documentation and communication.

Front-end pain points

These pain points all have one big thing in common: they introduce uncertainty. When developers have to guess anything about intention or desired outcome, it causes delays. First, they need to decide what to do. Second, what they decide might be inadequate or wrong.

Lack of consistency

Lack of consistency is the biggest pain point. This is where a component in the design mocks has multiple versions in various mocks or doesn’t match the documentation. Take font size, for example. Lots of things go through a developer’s mind when they come across this mismatch:

  • Is it a mistake? Everyone is human.
  • Is it a pattern? Should it be extracted to a mixin or utility class that has this additional option?
  • Should a one-off be created? Just override it this one time.
  • Should the design be modified to fit the documentation or vice versa?

Common places where inconsistencies crop up: 

  • Typography — if the heading in the newsletter component is very similar to the heading in the card component — should they be the same or different?
  • Spacing — the horizontal gutter in the carousel component is 21px. Should we normalize this to 20px? What about the spacing below the menu, which is 18px. Should this be normalized? Should it be abstracted to a variable?
  • Inconsistencies between mobile and desktop mockups — the button is offset from the form element by 25px at desktop and by 20px at mobile. Is this a mistake? If so, which value should be chosen?

These seem like small problems, but like a pebble in your shoe, they can cause inordinate discomfort. As they multiply, they can act as a ball and chain and slow down the entire development process.

Design mocks have optimal content

Sometimes, design mocks create this utopian vision where everyone’s name is a similar length. They make assumptions about the content and might not account for the variety of life. In the real world, lots of stuff can get thrown at a website.

For example, what should happen if the text is longer than that shown in the mock?

What might go through a developer's mind when the content breaks out of its prescribed bounds?

  • Should this conform to the grid? Can it wrap?
  • Should the CMS enforce a character limit?
  • Should it be clamped with JavaScript? Add an ellipsis?

There might not be a right or wrong answer. But what if there is a similar component that a different developer is working on, and they choose a different answer? It might cause discrepancies in the design.

Or worse, what if this is ignored and it gets all the way to production?

Multiple, and sometimes unknown, font providers 

Fonts can come from a lot of different places. A design mock might dictate several different fonts, and for each one, a responsible developer will ask these questions:

  • What license do we have for this font? 
  • Is there a cost to use this font, and do we pass that cost on to the client?
  • How can this font be hosted? Is there a CSS file, or is there a way we can host the files ourselves?

All of these affect font optimizations and performance. Having to hunt down the answers to these questions over and over again can waste a lot of time.

Lack of documentation

When someone is presented with just “mockups,” that’s a lot of information at once that might not have context. It can feel like you are drowning. Where do you start?

This might not be a big deal if you are creating a single landing page. But what if you are implementing an entire design system for a CMS with hundreds of components? Things get complex very quickly.

When presented with a complex design that has no documentation, responsible developers will need to create the documentation themselves.

 This means:

  • An inventory of all the components.
  • An inventory of all typography styles.
  • An inventory of spacing. Is there a grid? If so, what should it look like?

Which means spreadsheets. Lots of spreadsheets.

This takes time. And if you really hate spreadsheets, it takes blood, sweat, and tears. Factor in 2-3 days for small sites and 1-2 weeks for larger sites. Inevitably, it won’t be right the first time, which means later refactoring.

Documentation is not optional, especially if you are part of a larger front-end developer team. But even if you are solo, you need to create it to validate assumptions, communicate properly with designers, and have something to pass off if/when you roll off the project.

Mocks that do not meet WCAG AA accessibility guidelines

Lullabot creates accessible websites, but there are situations where we work with designers who give us mockups that do not meet WCAG AA guidelines.

  • The color palette doesn’t have sufficient contrast between text and background color. 
    • WCAG AA guidelines state that text must have a 4.5:1 contrast ratio, or a 3:1 contrast ratio if it is large text (at least 24px, or at least 19px and bold). A sketch of the ratio calculation appears after this list. 
    • Icons, button borders, etc., must have a 3:1 contrast ratio.
  • Links that are only indicated by color. Best accessibility practices dictate that links should also have a non-color indicator for people who are color blind or have other visual impairments.
  • Focus states that only change color. Once again, we need to think of how color-blind people will use the website.
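
To make those numbers concrete, here is a minimal sketch of how a contrast checker computes a WCAG ratio from two sRGB colors. The function names are ours, but the math is the relative-luminance formula defined by WCAG 2.x.

// Relative luminance per WCAG 2.x, for sRGB channel values 0-255.
function linearize(c) {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function luminance([r, g, b]) {
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio is (lighter + 0.05) / (darker + 0.05).
function contrastRatio(foreground, background) {
  const l1 = luminance(foreground);
  const l2 = luminance(background);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Example: #767676 text on white is roughly 4.54:1, which passes AA for normal text.
console.log(contrastRatio([118, 118, 118], [255, 255, 255]).toFixed(2));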

Solutions - easing the pain

Creating consistency

This starts with team size. The larger the team, the more opportunity for inconsistency to creep in. Every little change has to be communicated to every other member of the design team. We have found that 1-2 dedicated designers work well for medium-sized projects when creating a design system. This allows for faster workflows, communication, and iterations.

If a team cannot be fed by two pizzas, then that team is too large.

Jeff Bezos

What are ways a design team can work in order to create consistency?

Naming systems

Designers have to name many things: typography, colors, components that appear in the design itself, and more. This is the first step toward clear communication. 

The naming system should be clear for everyone on the team, from the designers to the developers to the project managers to the stakeholders. Ideally, this naming system is part of, or helps define, a ubiquitous language. When you call something a “hero,” does everyone understand what that is?

This means designers cannot be the final arbiter of what things are named. A naming system is about communicating properly, and creating a naming system should involve communication with the larger team. Be sure to involve a front-end developer because they might want to extend the system into the CSS.

Creating a naming system should start before the visual design itself. You don’t want to be renaming things at the last minute.

Naming things is hard. It can help to have some guidelines to start the conversation. You might try BEM (Block, Element, Modifier) naming, which names things based on where they are and what they do. For example, btn-primary-blue.

Grid systems - consistency in spacing

These should also be started before the visual design. These define your horizontal spacing (columns) and vertical rhythm (rows). Overall, you are defining the rhythm and proportion for your design, which helps implementation and answers many questions before they need to be asked.

As part of the grid system, you’ll also want to define the width and max-width. How far will the grid extend before it stops extending? How will it change at certain breakpoints?

Type systems

Set up your basic styles before starting the visual design. This means styles that you know you are going to use, like body copy and headers. These are easy to update as the project continues, so don’t assume they are set in stone. They will evolve as the design evolves.

But you need someplace to start.

For help with setting up a type system, use Modular Scale. It allows you to start with a base font and create a ratio for how it increases and decreases. This scale also helps determine line heights.
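
As a rough sketch of the idea (the base size, ratio, and function name here are just examples, not the tool's output), a modular scale multiplies a base font size by a fixed ratio at each step:

// Generate a modular type scale: base size multiplied by ratio^step.
function modularScale(base = 16, ratio = 1.25, steps = 6) {
  return Array.from({ length: steps }, (_, i) =>
    Number((base * Math.pow(ratio, i)).toFixed(2)),
  );
}

// A 16px base with a 1.25 ("major third") ratio.
console.log(modularScale()); // [16, 20, 25, 31.25, 39.06, 48.83]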

Creating components/symbols, styles, and shared libraries

Now that you have these systems set up, you need a way to share them. All UX programs (Figma, Sketch, Adobe, etc.) have a similar way to share these things, though they might use different terminology. Figma, for example, uses the term components. Sketch and Adobe use symbols.

Components and symbols are reusable elements. You can create an element and add it to a symbol/component library, which other members of your team can access. These can be dragged and dropped onto a page.

These become the central source of truth. If you need navigation on your mockup, grab it from the component library and place it on the page. They can be changed all at once, no matter how many places they appear. You can also create overrides if there is a one-off need. For example, maybe the hero for the About page doesn’t need a CTA.

Here is a video showing the creation of a one-off using Figma, pulling in different components.

Styles are combinations of colors and typography that can be reused. Front-end developers can access a style library and see all the typography being used.

Shared libraries can be shared across documents. Bringing in a library brings in all of its components, styles, colors, and typography. These can be accessed by other designers and developers.

Figma allows revision notifications after a library is updated. After updates are published, anyone using the library can pull in the update. This makes it easy to keep things consistent from page to page, document to document. The following video shows the sharing of a library and then pushing a revision.

Documentation

Style guide

The main deliverable for documentation is a style guide. It should be seen as the holy design bible for the design system. Style guides can range in complexity, from intricate to simple. A style guide can be hosted as an interactive website, or it could be a Google doc.

What should be in a style guide?

  • Grid system and spacing
  • Colors
  • Type system
  • Design principles, resources, and personas that we gathered during the discovery phase

The last item is important if you are handing the style guide off to a client who may be extending the design system in the future. You can browse an example style guide here.

Patterns

Patterns are different ways components can look on a page—variations based on context. For example, a Hero component could have four different color options. Variations might include:

  • Colors
  • Existence (or number) of CTAs
  • Typography

This video shows the different patterns of a hero component.

Functionality

Show how the navigation should work or what hover effect should take place for a button. The best way to do this is to create a prototype. If you are not handy with code, find examples on Codepen that match what you want to do.

You can also create basic prototypes in InVision or Figma, and for functionality that cannot be modeled, leave detailed comments.

The more traditional way to document functionality is with a functionality spec document. The design is marked with numbers, and each number has a corresponding row in a spreadsheet. These still work, and you can see an example below.

Accessibility

Learn and know accessibility, including WCAG AA criteria and best practices.

For designers, accessibility shouldn’t be an afterthought.  It should be worked into the overall design process from the very beginning. This will help save time and rework later down the line and make developers and stakeholders happy. 

All user interfaces, marketing websites, components, etc., should be inclusive and able to be used by everyone. 

Not sure where to start? First, review the WCAG Guidelines.

There’s a lot to learn, and bookmarking them can be a handy way to quickly bring them up if you need to check the accessibility guidelines of something you’re designing. There are also browser plugins and design tools that can help guide you throughout the design process. Below are just a few that we’ve used in the past.

  • Contrast - macOS app that checks WCAG color ratios
  • Stark - Figma and Sketch plugin for WCAG ratios and color blindness simulation
  • Axe - browser plugin that tests for accessibility

If you’re working on a complex component and are not sure of the best way to make it more accessible, collaborate with a front-end or back-end developer on the team to brainstorm ideas. They can often help prototype an idea to be tested for accessibility and give feedback throughout the process.

Communication

Traditional design handoffs, especially when working with an external design agency, can be like throwing a package over the fence and walking off. Communication is vital. It is the lifeblood of any successful design handoff.

Documentation is necessary but usually not sufficient. Even with great documentation, questions arise, and clarifications are needed. Documentation makes future communication easier and more efficient. Don’t expect it to eliminate the need for communication.

Communication needs to happen during the design process and during the development process. 

Have regular check-ins to make sure things make sense and are doable. Bringing developers in during design reviews - even during the wireframe process - will make the eventual handoff much smoother. Be open to more questions as they pop up.

When to push back

It’s important to know when developers should push back against design decisions, or at least get more clarity on the “why.” 

Some examples that should trigger some additional communication:

  • Accessibility (especially in regards to contrast ratio)
  • Similar components - can we have just one component instead?
  • Similar functionality can be provided in Drupal with almost no effort, and it gets us 80% there. Is doing something custom worth the cost or compromise?
  • A minor design element that may impact the timeline. It doesn’t impact core functionality but could involve a lot of development hours.

All of these are opportunities to improve the design and increase the chances of a successful project.

Tools

Accomplishing successful design handoffs is not a technology problem but a process and people problem. But there are tools that can help.

We’ve already talked about taking advantage of all the capabilities of UX tools like Figma, InVision, and Adobe. These allow front-end developers to access design values: color, type, spacing, etc.

And developers don’t have to use the program something was designed in.

There are also services that allow design/developer collaboration across any design platform, like Zeplin, Zeroheight, and Avocode. Zeroheight works great for smaller projects where a pattern library isn't being created.

For documentation, we use two tools based on how the documentation will be used.

  • Dropbox Paper - Used for more informal documentation. Copying and pasting CSS values, for example.
  • Google Docs - Used for formal documentation like official style guides that clients will see.

Conclusion

Design handoffs don’t have to be painful. They don’t have to be where communication breaks down. Instead, they can be opportunities to foster better relationships between design and development, which leads to an improved final product.

No DesignOps process is complete without a healthy workflow between designers and developers. Ensure consistency, maintain good documentation, and keep communication lines open and flexible.

If you have any questions about design systems and how our designers and developers work together, please reach out.

Sep 03 2021

Mike and Matt are joined by Lullabots Cathy Theys, Greg Dunlap, Cristina Chumillas, and Albert Hughes to talk about hiring people into different Drupal roles. 

Aug 18 2021

One of the great joys of front-end development is being able to wrestle a bunch of rectangular elements into different shapes and arrangements to create beautiful, intuitive layouts.

One of the great frustrations of front-end development is the unexpected interaction and overlapping of those same elements. Struggling to arrange elements along the z-axis, which extends perpendicularly through the computer screen towards and away from the viewer, is such a shared front-end experience that an element’s z-index can sometimes be used as a frustrate-o-meter gauging the developer’s mood.

The key to maintainable z-index values is understanding that z-index values can’t always be directly compared. They’re not an absolute measurement along an imaginary ruler extending out of the viewport; rather, they are a relative order between elements within the same stacking context.

This means that to truly understand how two elements will overlap, we need to understand the stacking contexts in which those elements are contained.

What is a stacking context?

A stacking context is a three-dimensional conceptualization of space on a two-dimensional screen and a boundary outside of which an element’s z-index does not matter. You can think of stacking contexts as folders full of paper on a desk. Each folder can have multiple pages inside, and the orders of those pages can change, but no matter how hard you try, no pages in the bottom folder can be on the top of the entire stack. Each folder is its own stacking context, and each page’s order is its z-index. This analogy can be carried further by putting the folders into different boxes and organizing them on a shelf. Similarly, stacking contexts can be nested and reordered themselves.

With no additional CSS, each webpage would have a single stacking context created by the document object in your browser’s CSS Object Model. Elements within that single stacking context will overlap, with each subsequent element stacked above the first, like a deck of cards with the first element child at the bottom and the last element child at the top. (See default stacking context on codepen)

So how can we make a new stacking context? The easiest way is to position an element with either absolute or relative and to give that element a valid z-index value other than auto. We create a new stacking context by telling an element precisely how it should be positioned along the z-axis. We can then expect all of that element’s children to arrange themselves within a subset of that imaginary three-dimensional space.

Besides z-index, CSS properties that force the browser to consider how elements should be laid out and painted will create new stacking contexts. Elements that have declared any transformation create a new stacking context, as all transforms are calculated and applied as a single operation. Since elements can transform along the z-axis, they create a new stacking context. Elements that have a clip-path or mask value, or an opacity of less than 1, require the browser to compute their order in the stacking context to know what to paint below them. Elements that have their contain or will-change properties set create a new stacking context to minimize the costly operations of re-calculating the layout and re-painting the canvas.

How do I debug stacking context?

As you may have anticipated, with so many different properties potentially creating stacking contexts, it can be difficult to work out why elements stack the way they do. Luckily, there’s the CSS Stacking Context Inspector browser extension available for Chrome and Firefox.

This browser extension adds two new features to your browser’s devtools. First, a new “Stacking Contexts” panel allows you to expand each stacking context to see what is inside it. Second, an elements sidebar pane lets you view an element’s parent context, its sibling sub-contexts, and, if the element creates a new stacking context, why.
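
If you want a quick check without the extension, a rough console heuristic can help narrow things down. This sketch only covers a few of the common triggers (and the selector is hypothetical), so treat it as a starting point rather than a complete implementation:

// Walk up from an element and log ancestors that likely create a stacking context.
function createsStackingContext(el) {
  const s = getComputedStyle(el);
  return (
    (s.position !== 'static' && s.zIndex !== 'auto') ||
    s.position === 'fixed' ||
    s.position === 'sticky' ||
    parseFloat(s.opacity) < 1 ||
    s.transform !== 'none' ||
    s.filter !== 'none' ||
    s.clipPath !== 'none'
  );
}

let node = document.querySelector('.video-controls'); // hypothetical selector
while (node && node !== document.documentElement) {
  if (createsStackingContext(node)) {
    console.log('Creates a stacking context:', node);
  }
  node = node.parentElement;
}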

If you’d like to play around with stacking contexts or try out the browser extension, I’d invite you to check out my Stacking Context Playground on CodePen.

Can I see a real-life example?

Sure! Below is an example of a front-end component with lots of overlapping elements. When viewing a page authored in a left-to-right language like English, here’s what the page band looks like:

We have the band’s content on the left-hand side, and the right side is a big block link that launches a modal to play a video (That link has a semi-transparent background to help show the layering of the elements). This band also has a background video, and we give users the option to pause the background video with the small round button in the bottom right corner.

This works as intended right now, but as soon as we set the page’s direction to right-to-left for languages like Arabic, the button gets buried and is no longer clickable!

Your first instinct here might be to look for a z-index disparity between the LTR and RTL style rules, but there aren’t any. Something else is causing the layer order to change - so let’s open the Stacking Context Inspector in our devtools.

When looking at the Stacking Contexts panel, the left side shows a nested list of all elements that create a new stacking context, in the order the elements appear in the DOM. At the top of this list is the document, which always creates the initial stacking context, and then nested below it are the three elements on this page that create new stacking contexts. If we were to expand each of those elements, we’d see any further nested stacking contexts. When we select a stacking context on the left-hand side, in this case #document, we can see child stacking contexts on the right-hand side listed in z-index order. 

Here, I can see that the background video container creates a new stacking context and is layered beneath both the written content and blocklink containers. Because this element creates a new context and is layered beneath its sibling contexts, there is no way to show the play/pause button above the content and blocklink container elements. We saw in the LTR version of the band that the button did appear above the containers.

So why is the background wrapper suddenly creating a new stacking context?

By inspecting the element that’s created the stacking context we are interested in, we can then use the Stacking Contexts sidebar to get more information. In this case, “The element has one of the following properties set: transform, filter, perspective, clip-path, mask, maskImage, [or] maskBorder.”

Suddenly, it makes sense; this background video container is given a transform rule to mirror the video for RTL pages! Moving this transformation from the container to the actual video iframe prevents forming a new context here, and my button’s z-index rules will be scoped to the document root again!

With only a few other minor styling adjustments, I can now display a fully mirrored version of this band while maintaining the proper stacking order of the background video, the content, the blocklink, and my background video controls.

Conclusion

Debugging how your page’s elements stack doesn’t have to be difficult. New stacking contexts are formed according to a fairly simple set of rules, and the CSS Stacking Context Inspector tool makes it even simpler to find and sort through those contexts. Now that we can decipher exactly how and why our elements are layered, we can bring an end to the practice of setting a sky-high z-index and praying that it works. To review, stacking contexts are most commonly formed by:

  • elements that have position values fixed or sticky
  • elements that have position values absolute or relative with a z-index other than auto
  • elements that have a non-default value in a CSS property that requires the browser to do additional layout or paint operations such as opacity, transform, clip-path, mask, or filter

For a complete description of the properties that cause new stacking contexts to form, see the MDN page on Stacking Contexts.

Aug 17 2021

The Type Tray and the Page Templates contributed modules can make your editors happier when creating content in Drupal. The goal is to make the Content → Add page more friendly to editors on sites with a large number of content types.

Type Tray

This module improves the list of content types that editors see when they visit the Content → Add page. It does that by providing:

  • The ability to indicate icons and thumbnails that visually represent the content types
  • A more user-friendly layout, which can be switched between Grid and List
  • An in-place full-text search
  • The ability to display quick links to existing content of a particular type
  • The ability to add a content type to a “Favorites” group per Drupal user

Start by installing and enabling the module:

$ composer require 'drupal/type_tray:^1.1'
$ drush en type_tray

While not mandatory, defining a few Categories to group your content types is highly useful. 

Navigate to Configuration → Content Authoring → Type Tray Settings ( /admin/config/content/type-tray/settings ) to create them: 

Now edit each of your content types, and select the “Type Tray” vertical tab, to define additional information on each one:

Optionally, you can specify an icon and a thumbnail, which help users quickly visualize the kind of content this type is meant for.

After configuring this information on your content types, you are ready to navigate to the “Content → Add” page ( /node/add ) and see the difference:

Demonstration of the Add Content page using Type Tray

Page Templates

This module allows you to convert any existing node into a “Template,” which editors can use to kickstart their real content.

Similar to Type Tray, download and install using your preferred method.

$ composer require 'drupal/pate:^1.0'
$ drush en pate

We start by enabling the functionality in the content types we are interested in. Edit the content type again, and in the “Page Templates” vertical tab, mark the corresponding checkbox. 

Now let’s create some templates.

To avoid unintended content disclosure, all templates are required to be unpublished at all times. This means we need to create a node of this type, make sure it’s unpublished, and then click the “Page Template” tab:

Again, to prevent accidental edits, further modifications (for example, publishing the node) are not allowed while it is a template. You can always “de-templatize” it at any point if you need to adjust how your template looks.

Once a content type has at least one template, we now have a new page available at /node/{type}/templates, where we can:

  • See a list of all templates for that content type
  • Click on a preview to check how the template looks when rendered
  • Use the CTA button to start creating a node from the template

 Clicking on the “Preview” button will open a modal where the template is rendered:

This renders the entire page in an iframe, where clicks are disabled to prevent accidentally navigating editors away. If you feel having all elements on the page is distracting, look at hook_pate_template_elements_remove_alter(), which you can use to easily remove elements from the previewed content.

When editors click the “Use this template” button (inside the modal or from the main templates list), a new node is generated with field values copied from the template node: 

Note that editors are always offered the alternate operation depending on the context.

When visiting /node/add/article a button at the top offers to create from a template instead:

When visiting /node/{type}/templates a button at the top offers to create from a blank node form instead:

Bonus: Integration with the Admin Toolbar module

If your site uses the drop-down administration menu provided by the admin_toolbar_tools module (part of the Admin Toolbar project), some adjustments are made automatically to make it easier to navigate the editorial pages where templates exist:

Using Type Tray and Page Templates together

These are two completely independent modules that can be used individually. It’s also possible to add a link on the Type Tray cards to the templates list if any templates exist. All you need to do is override the type-tray-teaser.html.twig template in your admin theme and print a link to the templates page. The result will be something like this:

If you don’t want to display the "Create from template" link when there are no templates for that content type, you can preprocess this in PHP and only send the link to the template when it needs to be visible. Here is the code: 

// Add these use statements near the top of your theme's .theme file so the
// class names below resolve.
use Drupal\Core\Link;
use Drupal\Core\Url;
use Drupal\node\NodeTypeInterface;

/**
 * Implements hook_preprocess_HOOK() for type_tray_teaser.
 */
function foobar_preprocess_type_tray_teaser(&$variables) {
  if (!empty($variables['content_type_entity']) && ($variables['content_type_entity'] instanceof NodeTypeInterface)) {
    // If this is a "templatable" content type, add the "Create from template"
    // link to type tray teasers.
    if ($variables['content_type_entity']->getThirdPartySetting('pate', 'is_templatable')
      && \Drupal::currentUser()->hasPermission('use page templates')) {
      $variables['create_from_template_link'] = Link::fromTextAndUrl(t('Create from template'), Url::fromRoute('pate.templates_for_type', [
        'node_type' => $variables['content_type_entity']->id(),
      ]));
    }
  }
}

Join us in improving the Drupal editorial experience!

Have you tried these modules and would like to provide feedback? Please let us know in the Type Tray and Page Templates issue queues on Drupal.org.

Thanks for contributing!

Aug 11 2021

Decoupling separates the system that stores the content from how that content is displayed on other independent systems. This can come with many benefits but also some downsides and tradeoffs. Go in with both eyes open as you decide whether to decouple or not.

With progressive decoupling, you can get some of the benefits of decoupling while avoiding some of the downsides. 

There are several ways to decouple a website progressively, but this article makes the case that widgets provide the most flexibility.

What are widgets?

Widgets are stand-alone JavaScript (JS) applications that are framework-agnostic and are designed to be embedded and configured by CMS editors.

Widgets can be vanilla JS or use frameworks like Vue or React.

Why JS over server-generated HTML?

Better reactivity and interactivity

The pages can be static or served from cache (very fast), and JS can be sprinkled on top.  The server can provide the unchanging parts, while the JS application adds interactivity. 

This reduces the load on your servers while increasing website performance. You keep the benefits of built-in CMS performance tooling.

Distributed delivery

Different development teams can write software independently. They can publish software on the same platform without coordinating complex deployment efforts.

  • Teams write the JS code in isolation
  • The browser executes the JS
  • Different deployment pipelines and servers can be used.

One team works on the navigation, one team works on the main feature set, and one team works on a price calculator.

Biggest talent pool

JS and TypeScript (a superset of JS) are consistently among the most commonly used programming languages, according to Stack Overflow’s yearly developer survey.

By building pages and experiences in JS, you can pull talent from a bigger pool. You have more options.

Better developer experience

Since JS is so popular, your developers can leverage many tools, services, and frameworks: Jest, Storybook, Husky, and Gulp, for things like unit testing, component management, and setting up Git hooks. Many services integrate with the technology.

Many platforms will give you better support, which leads to better workflows, which hopefully leads to better code—things like visual diffs, code quality analysis,  and code deployment. Popularity leads to a flourishing ecosystem.

In addition, frameworks like Vue can take care of some of the rough edges.

Should we just build JS applications then?

Yes and no. We still care about the content. Content is the heart of the web. You can have a great user experience, but without content, your project is doomed to fail.

To manage content, you need a CMS. Especially if content is your product or is central to your business. A CMS provides many features that are hard to build from scratch.

  • Managing pages and setting up URLs
  • Users and access restrictions
  • SEO metadata
  • Media library
  • Security patches
  • Editorially controlled layouts
  • Moderation and previews

Why widgets?

We have a CMS. We know we want to use some JS. Why not put JS in our CMS templates?

This works. You can certainly go that route. But widgets have some advantages over JS in the template.

They require no CMS deployments

A developer creates a new widget in the registry, and it appears in the CMS editorial interface for embedding. No additional effort. Bug fixes and enhancements are also instantaneous.

Here is what a traditional deployment might look like:

  1. Develop JS app
  2. Integrate it with a CMS template (and with the content model if you want the app to receive editorial input)
  3. Deploy both in conjunction since they are coupled together
  4. Editors can expose the JS app to end-users

Widgets allow you to skip the two middle steps. When you use the existing CMS integrations, development is only done in JS, and it can be deployed on its own. No need to call in a CMS developer to add new widgets or update existing widgets.

A widget deployment looks like this: develop the JS app, and editors can immediately expose it to end-users.

Embedded and controlled by editors

JS developers can create flexible applications that allow for tweaked experiences and configuration. A single widget can act as multiple similar widgets.

JS developers define the input data they expect from editors, and the CMS creates a form for the editors to input that data. This allows many instances of the same type of widget to be embedded with different configurations: different content, color palettes, external integrations, etc.

The following example defines a customizable button that the editor can configure.

settingsSchema: {
  type: 'object',
  additionalProperties: false,
  properties: {
    fields: {
      type: 'object',
      properties: {
        'button-text': {
          type: 'string',
          title: 'Button text',
          description:
            'Some random string to be displayed.',
          examples: ['I am a button', 'Please, click me'],
        },
      },
    },
  },
},
title: 'Example Widget',
status: 'stable',

The CMS integration, which can be defined up-front, reads the definition and presents the proper form elements to the editor.
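
As a sketch of what that integration might do (the variable names and form selector are ours, not part of the project), it could walk the schema's properties and build an input for each field:

// widgetDefinition stands in for one entry pulled from the widget registry.
const fields = widgetDefinition.settingsSchema.properties.fields.properties;

Object.entries(fields).forEach(([name, schema]) => {
  const label = document.createElement('label');
  label.textContent = schema.title || name;
  const input = document.createElement('input');
  input.name = name;
  input.placeholder = (schema.examples || [''])[0];
  label.appendChild(input);
  // #widget-settings-form is a hypothetical container in the editorial UI.
  document.querySelector('#widget-settings-form').appendChild(label);
});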

Embedded anywhere

Since widgets are not embedded at build time, but editorially, they can be placed anywhere. If the JS is in the template, you can’t choose, for example, to insert the JS app between two paragraphs of the body field. And changing the position would require a CMS deployment.

With widgets, editors can insert them anywhere.

  • Using layout building tools
  • Using WYSIWYG integrations
  • Using content modeling tools (entity reference field that points to a widget instance)
  • Using 3rd party JavaScript

And the same widget can work for any CMS. As long as the CMS subscribes to the registry and can read the schema, it can embed the JS application. When you change or fix something in the JS app, it is distributed to all CMSs. Widgets can also work in static HTML pages and Optimizely pages. Anywhere.

When are widgets a good fit?

Structured content is still the way to go. You don’t have to use widgets everywhere, but they are useful in several contexts.

  • Interacting with 3rd party APIs - reviews sites (g2crowd), commenting
  • Interactive tools - pricing calculators, checklists saving progress
  • Data visualizations - maps, charts of COVID data
  • Adding some pop to a page - you can do some things with JS that may be difficult to achieve when limited to HTML and CSS

How to get started

Create a widget

From a technical perspective, a widget is a function that takes a DOM id and renders JS in it. A widget can also receive arguments as HTML data attributes.

Here is an example of rendering a React component:

window.renderExampleWidget = function(instanceId) {
  const element = document.getElementById(instanceId);
  const title = element.getAttribute('data-button-text');
  // The JSX element was stripped when this post was rendered as HTML;
  // <ExampleWidget /> is a placeholder name for the widget's root component.
  ReactDOM.render(
    <ExampleWidget buttonText={title} />,
    element,
  );
};

It is very easy to port existing components and re-use them.
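
To tie it together, this is roughly what the host page (or the markup a CMS prints for an embedded instance) would do to invoke that function. The element id is arbitrary; the data attribute matches the one read above:

// The CMS would normally render this container with the editor's settings as
// data attributes; it is created by hand here purely for illustration.
const container = document.createElement('div');
container.id = 'example-widget-1';
container.setAttribute('data-button-text', 'Please, click me');
document.body.appendChild(container);

// Hand the container's id to the widget's render function.
window.renderExampleWidget('example-widget-1');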

Upload the app code

The code needs to live somewhere accessible on the internet (GitHub Pages, an Amazon S3 bucket, etc.). The CMS can use this location to either download the files or serve them from there. We don’t want to bundle the files within the CMS because that introduces coupling again.

Publish the metadata

This is the tricky part. Without the metadata, this is just another JS application in some repo. 

We need a registry, which is just a JSON document containing the metadata about all the available apps that can be downloaded from the internet. An array of objects. This includes the “directoryUrl,” which defines exactly where the files live. You can also see the “settingsSchema” property, which defines the shape of the input data this widget will accept.

[
  {
    "repositoryUrl": "https://github.com/js-widgets/example-widget",
    "shortcode": "example-widget",
    "version": "v1.0.4",
    "title": "Example Widget",
    "description": "This is a widget example that showcases some of the features of the JS Widgets project.",
    "directoryUrl": "https://static.mateuaguilo.com/widgets/sandbox/example-widget/v1",
    "files": [
      "css/main.css",
      "js/main.js",
      "media/logo.png",
      "thumbnail.svg"
    ],
    "availableTranslations": [
      "en",
      "de",
      "es",
      "pl",
      "pt",
      "zh-tw"
    ],
    "settingsSchema": {
      "type": "object",
      "additionalProperties": false,
      "properties": {
        "fields": {"type": "object"...}
      }
    }
  }
]

This file will need to be uploaded somewhere that is accessible via HTTP.

The CMS pulls that JSON object and knows about all the widgets and where to grab their assets. You’ll start seeing widgets appear in your editorial tools.
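
For illustration, consuming the registry can be as simple as fetching that JSON document and reading each entry (the URL below is a placeholder, not a real registry):

// Fetch the registry and list the widgets it describes.
const REGISTRY_URL = 'https://example.com/widget-registry.json'; // placeholder

async function listWidgets() {
  const response = await fetch(REGISTRY_URL);
  const registry = await response.json();
  registry.forEach(({ shortcode, title, version, directoryUrl }) => {
    console.log(`${shortcode} (${version}): ${title} -> ${directoryUrl}`);
  });
}

listWidgets();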

Ok…but where do I actually start?

There is a lot of existing tooling, along with examples, at https://github.com/js-widgets, including a registry boilerplate and catalog, widget examples, and CI/CD integration.

If you fork it, you’ll get a lot of nice things out of the box.

Stakeholder-ready catalog

The same registry that provides information to the CMS can provide information to a single-page application that is browsable and searchable. This requires zero effort. Everyone involved can see what is available: editors, developers, stakeholders, etc.

The catalog can also render a widget as a live sample, even if the widget requires editorial inputs. Examples utilize the “examples” key as shown in the widget definition above.

Governance like you need it

All of this might seem like a governance nightmare. Do you really want JavaScript updated in a remote location and immediately deployed to your live site?

Of course not. 

You decide what registries to accept into your CMS. You decide what widgets and updates go into your registry. You decide who has access to change those widgets and registries.

Production-ready dependencies

We want these widgets as light as possible. What if there was a way not to bundle big dependencies in every single JS app? We don’t want to download React for every widget, for example.

Shared dependencies are possible with this paradigm. Widgets can be configured to pull certain dependencies from the parent container. This requires some Webpack configuration and telling the CMS where to find the excluded libraries. Read the documentation for external dependencies here.
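
As a minimal sketch of the Webpack side (this is the generic externals mechanism, not the project's exact configuration), the widget's bundle can declare React as an external that the host page provides as a global:

// webpack.config.js (sketch): don't bundle React; expect the host page to
// expose it as the globals React and ReactDOM.
module.exports = {
  // ...the rest of the widget's build configuration...
  externals: {
    react: 'React',
    'react-dom': 'ReactDOM',
  },
};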

Conclusion

We hope this makes you excited to start taking advantage of widgets and progressive decoupling. For more on the specifics of setting this up, take a look at these additional videos.

Lullabot has been helping companies take advantage of progressive decoupling for years and on websites that get a lot of traffic. This paradigm is battle-tested. If you want help getting started, please contact us.

Jul 28 2021

Drupal 8’s EOL (end of life) is just three months away on November 2, 2021. What does this mean?

No security patches will be provided for Drupal 8 after this date. There is also no extended support for Drupal 8 like there will be for Drupal 7.

You will truly be on your own. And with such a mission-critical piece of software, you do not want to be on your own. 

The good news is that you can avoid being left in the lurch. Audit your site for Drupal 9 readiness and start planning your roadmap now. Right now is the easiest it will ever be. It is better to upgrade on your own terms rather than scrambling at the last minute. 

Why November 2, 2021?

Drupal 8 has many third-party dependencies, and if one of these dependencies updates to break backward compatibility with previous versions of Drupal 8, then Drupal 8 cannot update that dependency. Eventually, the older version of the dependency will stop being supported by the authors. 

This leaves Drupal 8 in a precarious position. The longer it exists, the more fragile it becomes. The longer it exists, the more the foundations crack. Eventually, the whole thing would need to be demolished.

Drupal 8’s largest dependency is Symfony. Symfony 3, the version Drupal 8 uses, reaches EOL in November of 2021, and moving to Symfony 4 would break backward compatibility with previous versions of Drupal 8.

Supported versions of Drupal must use supported versions of Symfony, which means Drupal must move to a new major version. This is the main reason for the cut-off date.

But wait! Why can’t the Drupal community fork Symfony 3 and maintain that alongside Drupal? Couldn’t we stay on Drupal 8 forever?

Well…no.

Symfony is used precisely because the Drupal community did not want to maintain certain features common to all websites and content management systems. Instead, the community wanted more freedom to focus on what makes Drupal unique.

A lot of work goes into building and maintaining Symfony. Duplicating that effort is not only impractical but also undesirable.

The Drupal community has supported Drupal 8.9 as long as possible, but now it is time to move on.

Read more about the rationale behind the release schedule and chosen dates.

What happens if I don’t upgrade by that time?

Everything will still work…for a while. Everything might still be secure…for a while. Neither the Drupal nor the Symfony communities will be updating the code your website relies upon.

And eventually, something will break.

Or worse. A vulnerability will be discovered, and you will have no way to fix it.

Or even worse than that. A vulnerability is discovered by nefarious actors and never announced publicly, and you go on using your Drupal 8 website as if everything is fine. But your website has been compromised, or it can be compromised at any moment. You might never know.

The longer you wait to upgrade to Drupal 9, the more effort it will be to accomplish the upgrade. Right now, Drupal 9 and Drupal 8.9 are still very close to being the same piece of software. But that distance will grow with every new Drupal 9 release.

Is there any wiggle room or grace period that is “safe”?

No.

You can go skydiving without a parachute, and you’re technically safe the entire way down. You can even close your eyes and pretend you’re flying. That might even be fun.

But eventually, you have to deal with the sudden stop at the end.

How long will Drupal 9 last?

Until November 2023.

Read the official page on Drupal 9 support.

This is not something to be wary of. Thanks to the new minor release schedule, upgrades to new major releases of Drupal no longer require a complicated migration or upgrade process. Upgrading from Drupal 9 to Drupal 10 should be no more complex than upgrading from Drupal 8 to Drupal 9.

And you’ve already done that, right?

Or at least you’ve already started planning to do that, right?

How do I get started?

Start by making a plan and then start implementing that plan. The best time to start planning is yesterday. The next best time to start planning is today.

If you are busy or are feeling a little overwhelmed, we can help. We are planning and implementing the transition to Drupal 9 for many other organizations. The best place to start is to contact us for an audit of your Drupal 8 site, which will give you insight into the level of effort required to upgrade to Drupal 9.

Jun 23 2021
Jun 23


Quicklink is an open-source JavaScript library created by Google that can dramatically speed up your site’s perceived page speed. It does this by detecting when hyperlinks enter the viewport and then instructs the browser to prefetch each link and store it in its cache. Then, when the user navigates to the linked page, the resulting page load will be nearly instantaneous. 

The Drupal Quicklink module integrates this library by exposing several configuration options while also setting some sensible Drupal-specific defaults (such as not prefetching links to admin pages, AJAX links, etc.). 

The Quicklink module’s first stable version was released in April 2019. Since then, the Google ChromeLabs team has released a new version of the library with several new features and bug fixes.

New features

Various hosting companies sort you into pricing tiers based on how many requests are made for the HTML document. In its default configuration, the Quicklink library can cause these requests to increase dramatically. Quicklink 2.0’s new features let you better control how links are prefetched (see the sketch after the list below).

  • Request limit - this ensures that the page will only prefetch up to the configured number of links. Added onto this is a delay, which sets the minimum amount of time that a link needs to be within the viewport before it can be prefetched. Both of these options will decrease the number of requests to the host.
  • Concurrency throttle - this limits the number of concurrent prefetches. This is useful if Quicklink is configured to prefetch pages for authenticated users because it can help ensure that the origin server isn’t overwhelmed with requests.
  • Idle timeout value - this sets the minimum amount of time that the browser needs to be idle before initiating prefetches. Increasing this from its default value can also decrease the number of requests that Quicklink initiates.
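For reference, here is a minimal sketch of how these options map onto the underlying Quicklink library when you call it directly; the values are illustrative, and the Drupal module exposes the same settings through its admin UI.

import { listen } from 'quicklink';

listen({
  limit: 10,     // request limit: prefetch at most 10 links per page load
  delay: 1500,   // a link must stay in the viewport 1.5s before prefetching
  throttle: 2,   // concurrency throttle: at most 2 prefetches in flight
  timeout: 4000, // wait up to 4s for the browser to go idle before starting
});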

Additional features

The new version of the Quicklink module also includes a handy feature to ignore links matching configured selectors. This is useful if you want to prevent Quicklink from prefetching links within a certain block, view, or page.

Polyfill now disabled by default

The Quicklink module contains an option to load a third-party JavaScript polyfill so older browsers can support the Intersection Observer API. Without Intersection Observer, the 1.0 version of the Quicklink library would fail and potentially break other functionality within the bundled script. That issue is now resolved, and because all modern browsers support Intersection Observer, the polyfill option is now disabled by default on new installations.

Testing coverage

All of the new features (and most of the old features) are now covered by Nightwatch functional JavaScript tests. Warning: writing tests can be slightly addictive because of the warm, fuzzy feeling that comes from knowing that regressions will now be much more difficult to introduce.

You can view Quicklink’s tests here. In the example below, you can see that I’m ensuring that specific links are not ignored. Then I’m verifying that the configuration passed into the Quicklink library matches what comes from the module’s administrative UI.

Please note: the Quicklink module’s test coverage doesn’t actually verify that the links are prefetched but rather tests that the data passed into the API matches what was entered into the user interface. 

      // Verify that the following custom selectors are currently prefetched.
      .assert.not.elementIgnored('drupal')
      .assert.not.elementIgnored('drupal')
      .assert.not.elementIgnored('drupal')
      .assert.not.elementIgnored('drupal')
      // Verify "Override Parent Selector" is default.
      .execute(
        function () {
          return drupalSettings.quicklink.quicklinkConfig.el === document;
        },
        [],
        (result) => {
          browser.assert.ok(result.value, 'Verify "Override Parent Selector" is default.');
        },
      )
      // Verify "Override allowed domains" is empty.
      .execute(
        function () {
          return drupalSettings.quicklink.quicklinkConfig.origins === false;
        },
        [],
        (result) => {
          browser.assert.ok(result.value, 'Verify "Override allowed domains" is empty.');
        },
      )

Beta release and roadmap

Quicklink 2.0.0-beta1 is available right now and is being used here on Lullabot.com.

The next step in the roadmap is to update the Quicklink documentation and watch for any regression reports from users. Once that is complete, a stable version will be released for Drupal 8 and 9. Also coming soon is a 7.x-2.0 release for the Drupal 7 version of the module.

Try Quicklink now!

The Quicklink module has very logical defaults, and for most websites, it can be installed and used without any additional configuration. For more information on Quicklink, see our Introducing the Quicklink Module for Drupal article, as well as Quicklink’s documentation on Drupal.org.

Jun 16 2021
Jun 16


Migrations from Drupal 7 can be as varied and diverse as humanity itself. Goals, audiences, servers, content models, and more all come together to form a unique fingerprint. No two migrations are ever really the same.

But despite the uniqueness of each, there are some commonalities. You can take steps to ensure your migration will be a success, no matter how complex or simple.

You just need to start planning it out.

Map out your Source

You need to know where you are coming from. This is how you begin to determine the length of the journey and how many supplies you’ll need along the way. 

Start mapping out your current Drupal 7 website.

Fields and content types

Make a list of all of your current fields and the content types they are attached to. Include the field type, machine name, and the complexity of the data stored in that field. You can come up with your own complexity scale that makes sense to your team. Some examples:

  • Basic - A single-value textfield
  • Intermediate - A formatted textfield. These fields can range in complexity depending upon what is allowed. Some allow only links and basic formatting, which are not complex, but others might allow embedded snippets and images. Move the scale accordingly.
  • High - Multi-value fields with multiple columns of data almost always need extra work to migrate. Field Collections and Paragraphs might fall in this category.
  • File - Sometimes, categorizing complexity by the type of data can be helpful. It depends on your eventual workflow or how many fields of a certain type you may have.

Spend some extra time thinking about your entity reference fields. More on this later.

For a quick start on getting all of your content types and fields listed in a spreadsheet, check out the Migration Planner module. View a sample spreadsheet created by the module.

Taxonomy

Taxonomy vocabularies can also have fields attached, so you’ll want to make a list similar to what you have done with content types and their fields. But taxonomies have their own challenges.

Often, they help organize a Drupal site. By design, they are referenced by other entities and can have all the migration issues that come with that.

Taxonomy terms can also have their own hierarchy. If you have terms that are parents of other terms, find a way to record this relationship. Make a note of the depth.

Files

Drupal 7 has many ways to manage files, so you need to document exactly what you have. You are likely using the Media and File Entity modules. Document which file types you are using, how many of each file type you have, and get an estimate of the storage space being used. As you did with content types and taxonomies, list the fields for each file type and categorize them.

If you are using native file handling, you will still want to map out what you have. Your developers will thank you.

Views

There is no way to migrate your Views reliably, so each will need to be recreated on your new site. Each one represents some work. Some might rely on extra business logic in custom code or on plugins provided by contrib modules. Do an audit and determine any dependencies.

Architecture

You will start to map out the contours of this as you take stock of your content types, taxonomies, and fields, but other things make up your overall architecture.

  • Menus - Not just the main menu and footer menus. Make sure you pull in contextual menus that might be embedded conditionally or placed via a block.
  • Other entities - Do you have other content entities besides nodes and taxonomies? Don’t forget about them. These could be custom components or something like Paragraphs.
  • Hosting - Detail out your current hosting platform. Resources, apps, integrations, etc. This might be something else you need to upgrade to ensure the smooth running of Drupal 8/9. Or you might be migrating onto a different hosting provider, and if that is the case, you want to at least keep parity with your current solution.

Pay special attention to your entity reference fields that you identified in previous steps. These can mask hidden domain knowledge and act as pillars for your entire site architecture. Dig into them. Make sure you know their purpose.

Contrib modules

Make a list of all Drupal 7 contrib modules you have installed. Contrib modules can represent a lot of effort in a migration, so you’ll want to explore each one in-depth. Is there a Drupal 8/9 equivalent? If so, is the new module stable? How much work is left? 

Alternatively, a module could have been rolled into Drupal core. If so, you’ll want to see if there are any differences so you can take those into account during the migration.

If no current module exists, what is the estimated level of effort to recreate this functionality?

Custom functionality

Make a list of all of your custom modules. Be sure also to make a note of business logic that might be in your custom themes. A lot can be embedded in template files.

You should note the complexity of the functionality and what it is used for. You’ll want to check the Drupal 8/9 contrib ecosystem to see if any modules can replace this custom functionality. This might entail conversations with your stakeholders and team about priorities and goals. A contrib module might replace 80% of your custom module, and you need to know if that is acceptable or not.

If you have limited resources, you might also want to mark custom functionality on a spectrum of “mission-critical” to “nice-to-have.”

These conversations will continue as you map out your destination.

What do you not want to migrate?

Now is a good time to do a site audit. As you are compiling all of this information and start having conversations around your goals and content model, come up with criteria for what shouldn’t be migrated. The less you have to migrate, the less work you’ll need to undertake.

Is there a cutoff date for old articles? A certain taxonomy that is no longer used? Are some fields for cosmetic purposes, and they won’t be relevant for your new design? Do all users need to be migrated over? What about unpublished content? Or revisions? Cutting out revisions will reduce the size of your database drastically, but then editors will not be able to view past changes. 

This is where a little bit of work can pay big dividends later.

Some additional questions to ask

  • How will you handle files? Files can be transferred during the migration itself, but sometimes it’s better to do one big transfer, so your migrations run more quickly. This usually means some cleanup of unused files beforehand. If you have a lot of files not managed by Drupal, moving the filesystem in bulk might be necessary.
  • Will you keep the same paths for your content or change them? If your paths are changing, be sure you have a good 301 redirect solution in place.

Map out your Destination

You need to know where you are going. This can be done in parallel to mapping out your source because what you determine here will help answer some of the questions above.

Start by figuring out the literal destination for your new Drupal 8/9 site: the servers and hosting. Whatever your setup, you’ll need to plan some way for it to access your Drupal 7 site. This can be as simple as copying the database and files over to the new platform.
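For example, a simple (assumed) setup with Drush and rsync might look like the following; the site aliases, credentials, and paths are placeholders for your own environment.

# Dump the Drupal 7 database...
drush @d7 sql-dump --gzip > d7.sql.gz

# ...and load it into a separate database the new site can reach
# (referenced in settings.php as the migrate connection).
gunzip -c d7.sql.gz | mysql -u dbuser -p drupal7_source

# Copy the public files directory over.
rsync -az user@d7-server:/var/www/html/sites/default/files/ web/sites/default/files/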

But if your site is huge, or you want to do a continuous migration until a final cutoff date, you’ll want access to the live production data. Or at least a replica that stays up to date.

You’ll also want to make plans for a development or testing environment that will mimic your final destination server as much as possible.

Evaluate your content model

You can keep everything exactly the way it is. Same content types. Same fields. Same everything. Drupal will do a good job of re-creating these for you on the new site. If you are limited in time and budget or your site is not complex, this might be the best way to go.

Be warned. If you have relied on the body field for creating complex content, you will still have a lot of work to do, even if you don’t plan on reworking your content model. And regardless, you still need to understand the gaps created by your custom functionality and reliance on certain contrib modules.

For these reasons and many others, a migration is a good opportunity to rework your content model to better align with your strategy and the needs of your audiences. Especially editors. Don’t forget them! It’s also a good opportunity to structure your content for the needs of a “publish everywhere” ecosystem.

Run some workshops that clarify your priorities. Document a new content model that takes those priorities into account. Add additional columns to the spreadsheet you created for your source content types and fields. These columns should contain the machine names of your new content types and fields.

Sometimes you’ll find a need to consolidate content types and/or fields. Two content types funnel into one new content type. Or three different fields are cleaned up into one single field. That should be expected. And Drupal’s migration tools make it simple to do this.

Clarifying your new content model will also help answer questions about your Drupal 7 site, like what content you need to leave out of the migration.

WYSIWYG cleanup

Your body fields (and other formatted text fields) might have accumulated a lot of cruft in the years your Drupal 7 site has existed. Different ways of displaying images, iframes, custom embed codes, etc. You’ll have to deal with each of these.

Cleaning up a formatted body field can balloon into an entire project by itself. Be sure you inventory everything that occurs in a body field, document what needs to be migrated as-is, document what needs to be transformed, and document what can be ignored or deleted. If you are changing your text filters and formatting rules, you’ll want to make sure your content meets those new requirements.

The SQueaLer module helps you find these issues, among other things. This is still in development and might need some tweaking to work with your particular Drupal site.

Again, having conversations around your new content model will pay heavy dividends here. You can come out of this migration with not just a new website but a better website, one that more effectively accomplishes your goals.

And a website that your editors actually like to use. 

WYSIWYG cleanup goes a long way. Don’t limit cleanup to code, either. Sometimes, manual cleanup makes more sense.

Development Workflow

Once you have mapped your source and destination, you’ll be able to start estimating the level of effort involved. 

Team planning

A lot of migration work can happen in parallel, but the schedule will depend upon your budget and the team you have in place.

We have seen migrations succeed with one backend developer, and we have seen them succeed with four. There is a point of diminishing returns. You don’t want too many cooks in the kitchen. If you have experienced developers on your team, they should be able to help you determine when you might reach that point.

Splitting up migration work based on field types has worked well. We have found it helpful to start with more basic fields to get momentum. You get the overall processes hashed out without worrying about complex data sets and transformations.

An example way of breaking up responsibilities between developers might look like the following:

  • Basic fields and simple formatted fields
  • File/image fields
  • Paragraphs, field collections, and other entity references
  • WYSIWYG cleanup
  • Contrib module updates and replacements
  • Custom functionality

There will be some overlap and bleeding across these boundaries, but these are a good starting point in terms of spheres of initial responsibility. Keep in mind that the complexity of your content model can have big ramifications for your team planning.

Solution preferences

When configuring a particular migration, you want a clear order of preference for the solutions you use. This will help save you from extra work and unnecessary technical debt.

The simplest migration entails mapping one field to another in a configuration file. Even basic transformations can be accomplished with this. These will usually invoke plugins that are included in Drupal core or contrib modules. Start here first.
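As an illustration (the machine names here are hypothetical), a migration that maps fields directly and applies a basic transformation with a core process plugin might look something like this:

id: d7_book_nodes
label: 'Drupal 7 book nodes'
source:
  plugin: d7_node
  node_type: book
process:
  # Straight field-to-field mapping.
  title: title
  # A basic transformation using the core "callback" process plugin.
  field_subtitle:
    plugin: callback
    callable: trim
    source: field_old_subtitle
destination:
  plugin: 'entity:node'
  default_bundle: book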

If core and contrib fail you, move to writing your own source, process, or destination plugins. This should cover most of your use cases.

For certain edge cases, you can invoke hooks and react to events at different stages of the process.

QA and testing

Migrations require QA and testing, so be sure to budget time for that. Having a good development server or environment builder like Tugboat will allow migrations to be run and issues surfaced.

Stakeholders can check migrated content to make sure everything shows up as expected. Other developers can validate results and look at the underlying data structure. 

This is also where you’ll want to grab any logs generated by the process.

Logging and exceptions

In our experience, migrations create a lot of noise. Warnings, errors, notices, etc. Some of these can be safely ignored, but to figure that out, you’ll want to pay attention to the logs. This is more important if you have several developers working in parallel or you are migrating several sites, each of which might have different edge cases.

But even if you have just one backend developer on the task, you’ll want to make a habit of going over the logs regularly.

  • Some can be ignored. Ignore them.
  • Some need greater clarification from a developer. Make those clarifications and see if a new ticket needs to be created.
  • Some need the attention of a stakeholder. Circulate these and discuss them in your status updates. If you aren’t sure how to handle that rogue iframe, ask.
  • Some need fixing. Though some might also be a low priority. If so, make a ticket, put it in the backlog, and keep moving.

When logging issues, be sure to record the current row id and the relevant migration id. This practice can help you find edge cases. Core and contrib migration plugins will provide logs, but if you end up writing custom plugins, be sure to add logging with clear messaging wherever issues might happen. Write custom plugins defensively.
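For instance, a custom process plugin (a sketch only; the plugin ID and behavior are hypothetical) can save a message against the current row so the row IDs end up in the migrate message table:

use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;

/**
 * Hypothetical process plugin that trims values and logs surprises.
 *
 * @MigrateProcessPlugin(id = "example_trim_and_log")
 */
final class ExampleTrimAndLog extends ProcessPluginBase {

  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    if (!is_string($value)) {
      // Record the source row IDs so the edge case is easy to find later.
      $migrate_executable->saveMessage(sprintf(
        'Unexpected value for %s on row %s; defaulting to an empty string.',
        $destination_property,
        var_export($row->getSourceIdValues(), TRUE)
      ));
      return '';
    }
    return trim($value);
  }

}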

Nested fields (Paragraphs and Field Collections)

If you are dealing with nested fields of structured data, pay special attention to how they are structured and how you will deal with them on the new site.

Paragraphs and Field Collections are the most common, but you might also have a custom solution built with entity references and content types. There are a lot of different ways you can go, and each way has its challenges. It also depends on how your editors best like to work.

Paragraphs to Paragraphs? If so, are you changing the structure?

Field Collections to Paragraphs?

Paragraphs or Field Collections to embedded entities in the body field?

Paragraphs or Field Collections to a custom structure implemented via entity references?

Or maybe you have embedded entities and want to migrate them to Paragraphs?

This is why evaluating your content model is so important. Each path has implications. You don’t want to simply choose the default path. You want to choose a path with intention, with your eyes open, understanding the challenges you need to overcome to get to the other side.

Summary

Planning properly can help you budget properly for both team size and timing. You can get a bigger picture and map out the potential minefields before you even start on your journey.

  • Map your source
  • Map your destination
  • Pay attention to WYSIWYG cleanup
  • Think about your development workflow to maximize resources

If you want an experienced partner that can help you through every stage of the migration process, reach out. We would love to help.

Jun 09 2021
Jun 09

Do you ever stop to think about how many "things" make up the internet? 

Not necessarily websites and social media networks, but instead the individual pieces of information. Every button. Every callout. Every image. Every teeny-tiny item description in your shopping cart.

A long time ago, if you wanted to write about something on the internet, you had to create it and publish it. And if you wanted to write about it again somewhere else, you had to create it again and publish it again. 

But with "create once, publish everywhere" the manual weight of creating multiple things, publishing multiple places, and tweaking little bits of HTML across a landscape of pages is lifted. You get the most out of your content creation efforts.

What is create once, publish everywhere (COPE)?

Simply put, COPE is structured content. Rather than creating content multiple times across multiple pages, you instead create it and manage it in one place, whether you’re publishing it for the first time or the thousandth.

COPE was pioneered by National Public Radio (NPR) in a bold redesign to make it easier to share their multimedia content across devices, social media, email marketing, and more. For the purposes of this discussion, we’re focusing on publishing across your website.

Let’s start with a small example. Pretend you’re opening a new shoe store in your town. You really want people to visit, so you decide you’ll publish your business address and hours on every page of your website. 

But your website has 20 pages. And what if you change your business hours to be open later during the holiday season? That means you have to change that information 20 times. 

Twenty. 

But with COPE, you create a block or panel that holds your address and hours, and you tell your content management system (CMS) that you want that block to appear on every page of your website. Now when your business hours change, you only have to change that one block, and it updates across all the pages of your website with a single click.

And maybe today your website is 20 pages, but it’ll grow with more products, to 100 or 150 pages. That block means it will publish across every new page without having to be written from scratch or manually added, time and time again.

Where you’ve seen COPE

You may not realize it, but you run into this type of content across many websites you engage with, such as:

  • Entertainment websites: Upcoming episodes, shows, or movies that might interest you
  • Health care websites: Related doctors, clinic locations, or blog articles
  • News websites: Related headlines or news 
  • Recipe websites: Related recipes or blog posts
  • Retail websites: Recently viewed products or products that may interest you

COPE allows website administrators to set certain content types and attributes, or fields, which appear to your web visitors at different points in their journey. The same content or product can be seen in search listings, on cart pages, as a featured product on the front page, or in the "customers also bought" section. Different contexts, but the same content.

For example, if you’re looking online for a pair of red shoes, you may want to find something in your size (size 10) and the color you prefer (red). A website that offers these filters on its search page has categorized its content with taxonomy.

So, with these filters selected, you find a pair of retro-style Converse sneakers. You open the page to learn more about the shoes and find a description, price, and reviews. All of those things are part of the structure of that content. The website can display the same shoe on your “size 10, red” search, as well as someone’s “size 7, casual” search. 

That red pair of Converse lives on a product page that’s categorized and built in a way so that it only needs to exist once. Wouldn’t it be terrible if you had to publish and maintain an entirely new “Converse shoe” page for each size? 

COPE separates content and design

COPE doesn’t just mean you have far fewer things to keep track of when a piece of content needs to be updated or maintained; it also separates content from design. 

Wait, what?

Yes. The process of structuring your content means you’re not reliant on how it looks on the website. It can, in fact, make internal content governance easier because you’re taking the subjective visual “feelings” away from the substance of the words. 

Let’s go back to the Converse shoes example. As a website administrator for the shoe store, you’re tasked to create a page for every shoe you offer. In setting up the page, you have to include: 

  • Shoe brand
  • Shoe name or model
  • Picture of the shoe
  • Bulleted description of the shoe’s features
  • Price of the shoe
  • Rating of the shoe, with reviews from customers

No matter if your shoe is a red Converse or a blue pair of high-heels, each product should be able to fill out all of these attributes. 

Now enter a stakeholder who says there needs to be a field for the shoe material because some of the nicest shoes are leather. But can a materials field be used across all the shoes you offer? It sure can!

And while historically, things like this always focused on “what it would look like” before “what it would say,” using structured content means the words hold just as much power as the design.

Structured content and a COPE model make it possible for you to grow and expand the content you offer across your site as the needs of your organization or audience change. A consistent experience makes it easy for people to find what they need quickly and the information to help them make their decision (or, in this case, purchase).

COPE makes cross-platform possible

You’ve built your website, and all your pages are tagged with taxonomy and are ready to share with the world. Time to launch.

Great! But your website being a destination for your brand means you want to build roads for people to reach you. Marketing has entered the chat.

With COPE, you don’t only structure content in a way that makes it easy to govern and manage that content in your CMS, but you also create a structured way to share that content across other channels.

Maybe your social posts use the product's short description and a thumbnail image of the shoes. Or maybe your email newsletters tease the top-rated shoes of the week, with a thumbnail and their star rating shown. 

COPE makes it easy to keep all of this information stored in one place. It also makes it easier to share your content across different channels in any way you need.

This marketing approach isn’t only less of a headache for your website administrators; it’s less of one for your whole organization. It makes your business and brand more agile, and in the long run, more efficient and successful.

Before you start COPE-ing

If you’re jazzed to get started, we’re jazzed for you, too. But before you go rushing off to your web team with this new idea in hand, let’s talk about what you should have in place before you get started. 

  1. Know what you have and what you need. A content inventory is a perfect way to start. You must understand your current digital structure and presence for both what you have and what you need. 
  2. Identify internal (and external) resources. COPE content becomes a piece of cake over time, but it might take a lot of decision-making and creation to give it legs upfront. Know if you have the team ready to take the project on or if you can outsource to an agency or partner with a vendor to get started.
  3. Get stakeholders on board early. You’ll need their input when the project kicks off and gets moving, so have conversations early and show how COPE can eliminate waste, drive sales, and encourage cost-efficiency. Stakeholders and internal experts can also help you start defining your ubiquitous language, which is integral to successful COPE.
  4. Set some high-level goals. Mostly, identify where your analytics are and where you want to go, but don’t make them your only focus. Set a SMART (specific, measurable, achievable, relevant, and timely) goal around your web traffic, conversions, and sales as a starting point.
  5. Document your plan forward. You’re not done with COPE when you hit ‘Publish’ on your website. The web is never done, and neither is your work. If you have a team, document how often you’ll meet to plan social posting, blogs, or other marketing efforts. Identify when and how content will be governed across the site, including how often and with what stakeholders or approvers. Answer these questions and write them down.

Most importantly, these discovery steps will (and should) lead your team or partner toward a domain model and, eventually, content models. Both of these help map the entire ecosystem of your website and plan for what type of pages, layouts, and templates are needed to present the information to the end-user. 

Get more content-first tips for preparing for COPE with the Content Marketing Institute.

Use COPE to master your destiny

It’s a long, winding road. Sometimes you’ll hit dead ends. Sometimes it’ll be a downhill breeze. But COPE is a worthwhile adventure for you and your team to explore as a way to more efficiently and effectively manage your website and user experience. 

If you’re ready to get started with a COPE model and need a digital partner who can lend a hand, reach out to our Lullabot team to get started.

May 26 2021
May 26


Enabling WebP images on your website can save millions of bytes per page load! That might sound like a bit of an exaggeration, or maybe a little tacky, but it’s true. On slower connections, that can be the difference between a visitor viewing your page or pressing the back button in frustration.

What is WebP?

WebP is a new(ish) image format that renders higher-quality images with drastically smaller file sizes. It also supports several cool features such as transparency (generally handled with PNG images) and animations (generally handled with animated GIFs or videos). 

Browser support

All modern browsers support WebP, including Chrome, Edge, Safari, and Firefox. However, Safari support is limited to machines running macOS 11 Big Sur (released November 2020) or newer. 

You can use WebP images just like a standard JPG, PNG, or GIF. 


Fallback support for older browsers

Do you still have to support older versions of Safari and/or Internet Explorer? You can automatically create JPG fallbacks using the HTML <picture> tag. If you’re not familiar with <picture>, it’s used for responsive images (serving different images based on your screen width), but you can also use it to serve different images based on which MIME types the user’s browser supports. It’ll look something like this.
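A minimal sketch, with placeholder file paths and alt text:

<picture>
  <source srcset="/images/hero.webp" type="image/webp">
  <img src="/images/hero.jpg" alt="Hero image" width="1200" height="600">
</picture>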


Older browsers will look at the type attribute on the <source> element and fall back to the <img> element.

Drupal core support

Drupal core 9.2 (due June 2021) supports WebP! You can use Drupal core’s built-in image styles to convert your image easily.

Unfortunately, the integration between core’s Responsive Image module and WebP is lacking. If you’re interested, follow along in the issue queue.

Start using WebP today

However, you can use WebP today by using either the WebP or ImageAPI Optimize WebP modules. Both modules support Drupal 8 and 9, integrate with Drupal core’s Responsive Image module, and generate WebP derivative images for each image style. 

For the ImageAPI Optimize WebP module, note that only the dev version supports Drupal 9 integration with the Responsive Image module.

Support for the Stage File Proxy module

If you use the Stage File Proxy module to pull production images to your local environments automatically, you’ll need to download a patch to support WebP images. This issue was brought up at Lullabot’s weekly engineering roundtable, and we subsequently created and submitted the patch. 

WebP on Lullabot.com

We’re using the WebP module on Lullabot.com. You can see the difference yourself using Chrome Developer Tools.

On Lullabot’s Our Work page, the WebP module saves over 1MB of data at wide viewports. Note that these images were already fairly optimized JPGs. In situations where users are uploading PNGs, the savings are even more substantial!

WebP is only one piece of your site’s image serving strategy

Serving WebP images is not a panacea to speeding up images on your website, but it is a great start. 

Other important best practices for serving images include:

  • Using Responsive Images
  • Lazy loading images (Drupal 9.1 and newer does this by default)
  • Ensuring that images contain width and height attributes to prevent layout shifts (see the snippet below)
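A single image tag can cover the last two items. This is illustrative markup, not output from any particular Drupal image formatter:

<img src="/images/product.webp" alt="Red Converse sneakers" width="800" height="600" loading="lazy">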

Remember, images are only one factor of your site’s overall performance. If you’re in doubt, defer to measurement tools rather than hard and fast rules.  

May 18 2021
May 18


With Drupal’s new versioning and release planning came the promise of easy upgrades between major versions. No more major database overhauls. No more rewriting business logic just to keep things working. No more major investments in expensive migrations to maintain feature parity.

Upgrading to Drupal 9 from Drupal 8 provides the first test of these promises. Have the promises been fulfilled? 

Yes.

Is it as simple as flipping a switch?

No.

When moving to Drupal 9 from Drupal 8 (which reaches end-of-life on November 2, 2021), planning is required. The larger your codebase, the more you need to take into account. Here is a roadmap to get you started.

Create a module inventory

The Drupal 9 codebase is very similar to the Drupal 8.9 codebase, but it removes the code marked as deprecated. This deprecated code can range in impact depending on what you are using and what APIs you have taken advantage of. 

You need to look at three things: custom modules, contrib modules, and themes. Create a list in whatever format will work best for your team.

To get started, we suggest installing the Upgrade Status module. Enable it in a development environment. This will give you a good baseline of the information you want to track. To get this information into a spreadsheet, install this patch for the module.

Organize the sheet to easily filter your contrib and custom modules because the work process will differ for each category.

Custom modules

Get drupal-check installed and running on a development environment or as part of your CI workflow. This will show you where the work is to get your custom codebase compatible with Drupal 9. 

The benefit of adding it to your CI workflow is to make sure you aren’t introducing any new incompatibilities. A VS Code extension is also available.
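Assuming a standard Composer-based project layout (your paths may differ), getting drupal-check running locally can be as simple as:

composer require --dev mglaman/drupal-check
./vendor/bin/drupal-check web/modules/custom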

For each module that has a Drupal 9 compatibility issue, create a ticket in your ticketing system. Enter the details. Some things will need minor changes, like a different function call. Others might need deeper refactoring. Identify those as early as possible. Now is also a great time to check the contrib space and see if you even need these custom modules anymore.

Contrib Modules

Organize the contrib modules on which you are dependent into groups that represent levels of effort. Ask the following questions:

  • Which have a D9 release, and which do not?
  • For modules with a D9 release, are they minor updates or major updates?
  • Which modules are we currently forking/patching that may need more hand-holding to get to their D9 release?
  • Will this contrib module even have a D9 release? Is there a different module that will take its place?

Your groups might look like this:

  • Group 1: D9 minor tagged release available
  • Group 2: D9 major tagged release available
  • Group 3: D9 minor dev release available, not forked
  • Group 4: D9 major dev release available, not forked
  • Group 5: D9 major or minor release available, forked modules
  • Group 6: D9 release is not yet available

Themes

If you use any contrib base or sub-themes, keep them on your list to track and treat them as modules. 

Twig has its own deprecations moving from Twig 1 to Twig 2. Make sure your templates are not using functions that are unsupported in Twig 2. The Upgrade Status module should flag these for you. Many projects won’t require any changes, but it is best to check early so you are not surprised.

Add any flags as tickets to your ticketing system.

Updating modules and code

Fixing your custom codebase is all work you are going to have to do sooner or later. Start adding some of the tickets you created into your sprints or general workload. Pick away at them. There is no substitute for rolling up your sleeves and getting to work.

For contrib modules, the latest releases will often support both Drupal 8 and Drupal 9, so you can go ahead and update them. That represents the lowest level of effort. This is where you want to start.

Focus on your custom modules and the easy contrib updates first. This allows time for some of the more difficult contrib work to be completed by the module maintainers and the wider community. By the time you get to them later, it may be a trivial update. Keep your contrib module inventory up to date during your transition period.

If you are done with all of your custom code and low-hanging fruit, dedicate some time inside the issue queues of contrib modules your website depends on. There are a lot of ways to help out to speed up the overall process. Review patches and leave feedback. Submit your own patches. The work you do not only helps your own organization but potentially countless others.

While doing this work, be sure and stay up to date with the latest Drupal 8 minor releases. When the time comes and all of your code is ready, you’ll be able to upgrade to Drupal 9 with minimum hassle.

What to avoid

As you go through this process with each custom module, you’ll inevitably find other things that need improvement—features that were left half-baked, technical debt, cleaner comments, newer solutions, etc.

It will be tempting to fix everything since you’ll be touching the code anyway. But take heed. Avoid. You will be stumbling down a rabbit hole that will delay the completion of your true goal. 

Take note of these things. Create tickets. But stay focused on Drupal 9 readiness. If each code change and pull request becomes huge, full of unrelated changes, it will slow things down.

Summary

When you compare the process for upgrading from Drupal 8 to Drupal 9 to the process for upgrading from Drupal 7 to Drupal 8, it looks like night and day. 

D7 to D8 required a host of dedicated resources and time. The amount of work required inevitably made it a big project. In contrast, D8 to D9 can be done alongside regular work. Your developer team might be able to slot it into their roadmap without additional overhead.

  • Create an inventory of your code
  • Use drupal-check and Upgrade Status to start updating your code
  • Keep Drupal 8 up-to-date during this transition period
  • Upgrade to Drupal 9

Drupal 8 reaches end-of-life on November 2, 2021, which means that Drupal 8.9 will no longer receive security updates. This will sneak up on you quickly if you aren’t tracking it. At this point, there is no value in delaying. Start your move to Drupal 9.

May 03 2021
May 03

Drupal projects can be challenging. You need to have a lot of framework-specific knowledge or Drupalisms. Content types, plugins, services, tagged services, hook implementations, service subscribers, and the list goes on. You need to know when to use one and not the other, and that differs from context to context.

It is this flexibility and complexity that allows us to build complex projects with complex needs. Because of this flexibility, it is easy to write code that is hard to maintain.

How do we avoid this? How do we better organize our code to manage this complexity?

Framework logic vs. business logic

To start, we want to keep our framework logic separate from our business logic. What is the difference?

  • Framework logic - this is everything that comes in Drupal Core and Drupal contrib. It remains the same for every project.
  • Business logic - this is what is unique to every project—for example, the process for checking out a book from a library.

The goal is to easily demarcate where the framework logic ends and the business logic begins, and vice-versa. The better we can do this, the more maintainable our code will be. We will be able to reason better about the code and more easily write tests for the code.

Containing complexity with Typed Entity

Complexity is a feature. We need to be able to translate complex business needs to code, and Drupal is very good at allowing us to do that. But that complexity needs to be contained.

Typed Entity is a module that allows you to do this. We want to keep logic close to the entity that logic affects and not scattered around in hooks. You might be altering a form related to a node, checking access, or operating on something related to an entity within a service.

In the example below, Book is not precisely a node, but it contains a node of type Book in its $entity property. All the business logic related to Book node types will be contained in this class.

final class Book implements LoanableInterface {
  private const FIELD_BOOK_TITLE = 'field_full_title';
  private $entity;

  public function label(): TranslatableMarkup {
    return $this->entity
      ->{static::FIELD_BOOK_TITLE}
      ->value ?? t('Title not available');
  }

  public function author(): Person {...}
  public function checkAvailability(): bool {...}

}

Then, in your hooks, services, and plugins, you call those methods. The result: cleaner code. 

// This uses the 'title' base field.
$title = $book->label();

// An object of type Person.
$author = $book->author();

// This uses custom fields on the User entity type.
$author_name = $author->fullName();

// Some books have additional abilities and relationships.
if ($book instanceof LoanableInterface) {
  $available = $book->checkAvailability() === LoanableInterface::AVAILABLE;
}

Business logic for books goes in the Book class. Business logic for your service goes in your service class. And on it goes.

If you are directly accessing field data in various places ($entity->field_foo->value), this is a big clue you need an entity wrapper like Typed Entity.

Focusing on entity types

Wrapping your entities does not provide organization for all of your custom code. In Drupal, however, entity types are the primary integration point for custom business logic. Intentionally organizing them will get you 80% of the way there.

Entities have a lot of responsibilities.

  • They are rendered as content on the screen
  • They are used for navigation purposes
  • They hold SEO metadata
  • They have decorative hints added to them
  • Their fields are used to group content, like in Views
  • They can be embedded

Similar solutions

This concept of keeping business logic close to the entity is not unique. There is a core patch to allow having custom classes for entity bundles.

When you call Node::load(), the method will currently return an instance of the Node class, no matter what type the node is. The patch will allow you to get a different class based on the node type. Node::load(12) will return you an instance of the Book class, for example. This is also what the Bundle Override module was doing.

There are some drawbacks to this approach.

  • It increases the API surface of entity objects. You will be able to get an instance of the Book class, but that class will still extend from the Node class. Your Book class will have all of the methods of the Node class, plus your custom methods. These methods could clash when Drupal is updated in the future. Unit testing remains challenging because it must carry over all the storage complexity of the Node class.
  • It only partially solves the problem. What about methods that apply to many books? Or different types of books, like SciFiBook or HistoryBook? An AudioBook, for example, would share many methods of Book but be composed differently.
  • It perpetuates inheritance, even into the application space. Framework logic bleeds into the application and business logic. This breaks the separation of concerns. You don’t want to own the complexity of framework logic, but this inheritance forces you to deal with it. This makes your code less maintainable. We should favor composition over inheritance.

Typed Entity’s approach

You create a plugin and associate it with an entity type and bundle. These are called Typed Repositories. Repositories operate at the entity type level, so they are great for methods like findTaggedWith(). Methods that don’t belong to a specific book would go into the book repository. Bulk operations are another good example.

Typed Entity is meant to help organize your project’s custom code while improving maintainability. It also seeks to optimize the developer experience while they are working on your business logic.

To maximize these goals, some tradeoffs have been made. These tradeoffs are consequences of how Drupal works and a desire to be pragmatic. While theory can help, we want to make sure things work well when the rubber meets the road. We want to make sure it is easy to use.

Typed Entity examples

Your stakeholder comes in and gives you a new requirement: “Books located in Area 51 are considered off-limits.”

You have started using Typed Entity, and this is what your first approach looks like: 

/**
 * Implements hook_node_access().
 */
function physical_media_node_access(NodeInterface $node, $op, AccountInterface $account) {
  if ($node->getType() !== 'book') {
    return;
  }

  $book = \Drupal::service(RepositoryManager::class)->wrap($node);
  assert($book instanceof FindableInterface);
  $location = $book->getLocation();
  if ($location->getBuilding() === 'area51') {
    return AccessResult::forbidden('Nothing to see.');
  }

  return AccessResult::neutral();
}

You already have a physical_media module, so you implement an access hook. You are using the global repository manager that comes with Typed Entity to wrap the incoming $node and then call some methods on that Wrapped Entity to determine its location. 

This is a good start. But there are some improvements we can make.

We want the entity logic closer to the entity. Right now, we have logic about “book” in a hook inside physical_media.module. We want that logic inside the Book class.

This way, our access hook can check on any Wrapped Entity and not care about any internal logic. It should care about physical media and not books specifically. It certainly shouldn’t care about something as specific as an “area51” string.

  • Does this entity support access checks?
  • If so, check it.
  • If not, carry on.

Here is a more refined approach:

function physical_media_node_access(NodeInterface $node, $op, AccountInterface $account) {
  try {
    $wrapped_node = typed_entity_repository_manager()->wrap($node);
  }  
  catch (RepositoryNotFoundException $exception) {
    return AccessResult::neutral();
  }

  return $wrapped_node instanceof AccessibleInterface
    ? $wrapped_node->access($op, $account, TRUE)
    : AccessResult::neutral();
}

If there is a repository for the $node, wrap the entity. If that $wrapped_entity has an access() method, call it. Now, this hook works for all Wrapped Entities that implement the AccessibleInterface.

This refinement leads to better:

  • Code organization
  • Readability
  • Code authoring/discovery (which objects implement AccessibleInterface)
  • Class testability
  • Static analysis
  • Code reuse

How does Typed Entity work?

 So far, we’ve only shown typed_entity_repository_manager()->wrap($node). This is intentional. If you are only working on the layer of an access hook, you don’t need to know how it works. You don’t have to care about the details. This information hiding is part of what helps create maintainable code.

But you want to write better code, and to understand the concept, you want to understand how Typed Entity is built.

So how does it work under the hood?

This is a declaration of a Typed Repository for our Book entities:

/**
 * The repository for books.
 *
 * @TypedRepository(
 *    entity_type_id = "node",
 *    bundle = "book",
 *    wrappers = @ClassWithVariants(
 *      fallback = "Drupal\my_module\WrappedEntities\Book",
 *      variants = {
 *        "Drupal\my_module\WrappedEntities\SciFiBook",
 *      }
 *    ),
 *   description = @Translation("Repository that holds business logic")
 * )
 */
final class BookRepository extends TypedRepositoryBase {...}

The "wrappers" key defines which classes will wrap your Node Type. There are different types of books, so we use ClassWithVariants, which has a fallback that refers to our main Book class. The repository manager will now return the Book class or one of the variants when we pass a book node to the ::wrap() method. 

More on variants. We often attach special behavior to entities with specific data, and that can be data that we cannot include statically. It might be data entered by an editor or pulled in from an API. Variants are different types of books that need some shared business logic (contained in Book) but also need business logic unique to them.

We might fill out the variants key like this:

variants = {
  "Drupal\my_module\WrappedEntities\SciFiBook",
  "Drupal\my_module\WrappedEntities\BestsellerBook",
  "Drupal\my_module\WrappedEntities\AudioBook",
}

How does Typed Entity know which variant to use? Via an ::applies() method. Each variant must implement a specific interface that will force the class to implement ::applies(). This method gets a $context which contains the entity object, and you can check on any data or field to see if the class applies to that context. An ::applies() method returns TRUE or FALSE. 

For example, you might have a Taxonomy field for Genre, and one of the terms is “Science Fiction.” 
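As a hedged sketch only (the exact interface, method signature, and shape of $context come from Typed Entity’s documentation, and field_genre is a hypothetical field), the variant’s check might look something like this:

final class SciFiBook extends Book {

  /**
   * Decides whether this variant should wrap the given entity.
   *
   * Sketch: assumes the $context exposes the entity being wrapped.
   */
  public static function applies($context): bool {
    $entity = $context['entity'] ?? NULL;
    if (!$entity || !$entity->hasField('field_genre')) {
      return FALSE;
    }
    $term = $entity->get('field_genre')->entity;
    return $term && $term->label() === 'Science Fiction';
  }

}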

Implementing hooks

 We can take this organization even further. There are many entity hooks, and Typed Entity can implement these hooks and delegate the logic to interfaces. The logic remains close to the Wrapped Entity that implements the appropriate interface.

The following example uses a hypothetical hook_entity_foo().

/**
 * Implements hook_entity_foo().
 */
function typed_entity_entity_foo($entity, $data) {
  $wrapped = typed_entity_repository_manager()->wrap($entity);
  if (!$wrapped instanceof \Drupal\typed_entity\Fooable) {
    // If the entity is not fooable, then we can't foo it.
    return;
  }
  $wrapped->fooTheBar($data);
}

This type of implementation could be done for any entity hook.  

Is this a good idea? Yes and no. 

No, because Typed Entity doesn’t want to replace the hook system. Typed Entity wants to help you write better code that is more efficient to maintain. Reimplementing all of the hooks (thousands of them?) as interfaces doesn’t further this goal.

Yes, because you could do this for your own codebase where it makes sense, keeping it simple and contained. And yes, because Typed Entity does make an exception for hooks related to rendering entities.

Rendering entities

The most common thing we do with entities is to render them. When rendering entities, we already have variants called “view modes” that apply in specific contexts.

This is starting to sound familiar. It sounds like a different type of wrapped object could overlay this system and allow us to organize our code further. This would let us put everything related to rendering an entity type (preprocess logic, view alters, etc.) into its own wrapped object, called a renderer. We don’t have to stuff all of our rendering logic into one Wrapped Entity class.

Typed Entity currently supports three of these hooks:

  • hook_entity_view_alter()
  • hook_preprocess()
  • hook_entity_display_build_alter()

Renderers are declared in the repositories. Taking our repository example from above, we add a "renderers" key: 

/**
 * The repository for books.
 *
 * @TypedRepository(
 *    entity_type_id = "node",
 *    bundle = "book",
 *    wrappers = @ClassWithVariants(
 *      fallback = "Drupal\my_module\WrappedEntities\Book",
 *      variants = {
 *        "Drupal\my_module\WrappedEntities\SciFiBook",
 *      }
 *    ),
 *    renderers = @ClassWithVariants(
 *      fallback = "Drupal\my_module\Renderers\Base",
 *      variants = {
 *        "Drupal\my_module\Renderers\Teaser",
 *      }
 *    ),
 *   description = @Translation("Repository that holds business logic")
 * )
 */
final class BookRepository extends TypedRepositoryBase {...}

If you understand wrappers, you understand renderers.

The TypedEntityRendererBase has a default ::applies() method to check the view mode being rendered and select the proper variant. See below:
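As a sketch only (the method name and signature are simplified from the actual base class), a teaser-specific renderer variant might express that check like this:

final class Teaser extends TypedEntityRendererBase {

  /**
   * Sketch: select this renderer only when the 'teaser' view mode is rendered.
   */
  public static function applies(string $view_mode): bool {
    return $view_mode === 'teaser';
  }

}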

These renderers are much easier to test than individual hook implementations, as you can mock any of the dependencies.

Summary 

Typed Entity can help you make your code more testable, discoverable, maintainable, and readable. Specifically, it can help you: 

  • Encapsulate your business logic in wrappers
  • Add variants (if needed) for specialized business logic
  • Check for wrapper interfaces when implementing hooks/services
  • Use renderers instead of logic in rendering-specific hooks
  • Add variants per view mode

All of this leads to a codebase that is easier to expand and cheaper to maintain.  

Apr 16 2021

Matt and Mike have front-end core committer Lauri Eskola on to talk about the new Drupal core theme starterkit that can be used to generate new themes. We'll talk about what's been done, and what's in store for this new Drupal core feature.

Apr 02 2021

Mike and Matt discuss various aspects of Lullabot's Support & Maintenance Department, and how it differs from Client Services, with Director, David Burns, and Project Manager, Cathy Theys.

Mar 10 2021

Drupal has been around for a long time. Officially, it’s been around for at least 20 years.

Even if you jumped on the wagon around Drupal 5 or 6, that is still over 12 years of Drupal. You might have started to look longingly at the greener grasses of other ecosystems, new technology stacks, and polished products that feel fresh and new. 

Burn-out happens. Disillusionment happens. Boredom happens.

If you stare at something long enough, you start magnifying all of its flaws and forgetting some of its virtues. Familiarity breeds contempt. We get it.

But there are reasons to stick with Drupal, and there are ways to get excited about it again. You might not be able to relive the honeymoon phase back when you first installed it, but you might be able to have something better: a mature, trusted partnership.

It’s not you, it’s me - get realistic

It can be frustrating to feel like you are in a rut. Drupal is old. PHP is even older. Neither one is the new kid on the block anymore. Everyone else looks like they are having the time of their lives playing with new stuff. 

There is always something exciting happening in the Javascript ecosystem. Products like Contentful and other JAMStack solutions come along that feel fresh. Even WordPress looks like it’s having more fun.

But let’s be realistic. 

Drupal can help you solve real problems. In many cases, Drupal will help you solve them better than any other toolset on the market. There is still excitement to be found in helping solve these complicated, thorny problems.

While it sometimes feels like all cool kids have moved on to greener pastures, Drupal has matured and found a steady groove. It is content to sit by the fire on a Friday night with a good book and a hot cup of tea. It can look back on its youth with fondness, maybe a little embarrassment, and be confident about the future. It doesn’t need to chase after invitations to the hottest parties.

The Drupal community remains solid. It does a great job watching out for issues related to backward-compatibility, stability, and security, and all of this work is easy to take for granted. Virtually any customization can be built on its shoulders.

Yes, Drupal has changed. But haven’t we all? You might need to shift your mindset. Why did you start using Drupal in the first place? Was it to chase the newest technology? Was it to help you solve particular problems? Clarify and think deeply about your “why.” 

Even if you leave the Drupal ecosystem, one thing will remain a constant no matter where you go: yourself. Be sure the disillusionment you feel isn’t baggage you will be carrying with you.

Remember that newer technologies are built by people. Other communities are made up of people. No matter what toolset you are using, you will be solving problems for other people, defined by other people.

And whenever people are involved, you get people problems. Some problems are more obvious than others, but you could find yourself jumping to another ship already full of holes, taking on water.

To summarize:

  • Be realistic.
  • Know yourself.
  • Set proper expectations.

If you do these things, you might find that Drupal isn’t so bad after all. In fact, it’s still pretty great.

Double-down and dig deep - lean into craftsmanship

Come for the code; stay for the community. For many excellent reasons, that’s the motto that many people still recite and believe in. 

But what if you’re burned out on the community?

You can always stay for the code.

Drupal’s codebase indeed feels bigger and more complicated since the switch to Drupal 8, but at the same time, the code has never been more transparent. Code organization and typehints allow for better IDE (Integrated Development Environment) autocompletion. It has never been easier to discover how things work.

By making a habit of going down rabbit holes, you can gain knowledge of Drupal’s internals, and this knowledge comes with a fulfilling sense of capacity, flexibility, and usefulness. This level of introspection allows you to build features that are as extendable as Drupal itself.

Solving problems with confidence is addictive. It may feel counterintuitive to cure your disillusionment by diving deeper. It feels like the opposite of what you want to do.

But it can spark a new passion. So try doubling down. Set up your debugger and step through some code.

Start with one of the subsystems, like cache invalidation. It has a lot of moving pieces, touches different parts of the Drupal codebase, and is a big infrastructure improvement from previous versions of Drupal. Not many people have a deep knowledge of its inner workings, and knowing how all the joints fit together can enhance the solutions you build.

Or maybe you are curious about the full render pipeline. Or the CKEditor integration? Whatever it is, get curious and find some focus.

Gaining this deeper expertise and understanding can help you solve harder problems an organization may have. You can start creating solutions that have a longer-lasting impact. These can come in the form of performance improvements, better code maintainability, or better automation. You will be able to tackle more complicated problems and architect more elegant solutions.

Some other things to do to improve your craft while also directly benefiting your projects: 

  • Focus on maintainability by mapping custom code to S.O.L.I.D. principles.
  • Extract and segregate business logic into typed entities. This also means learning how to ask better questions.

Embrace the archipelago - trade and share

Since the advent of Drupal 8, we are fond of saying that Drupal finally moved off of the island. We embraced Object-oriented best practices. Symfony components now power many of the underlying subsystems that Drupal depends upon. We shed more of our “not invented here” approach.

Except it’s not entirely true that we left the island. There are still many things unique to Drupal, and to implement Drupal at scale, you need to have deep Drupal expertise. Not just Symfony or PHP expertise. Drupal expertise.

However, we did move our island closer to all of the others. Our ports participate in more shipping lanes and trade routes. We became more conscientious members of the open-source archipelago.

If you are feeling antsy to get off the Drupal island, lean into this.

Don’t just build things for Drupal. Look for opportunities to extract logic into separate libraries with full test coverage, then loop them back into Drupal. Centarro did this with currency formatting for PHP applications. Lullabot did this with an AMP PHP library that anyone can use. Thinking this way presents new challenges and introduces you to new conversations.

Drupal’s commitment to content APIs, like JSON:API, opens up more opportunities to interface with various Javascript technologies and their communities.  Building a front-end for Drupal no longer means you have to work exclusively in Drupal. React, Vue.js, and many other frameworks are waiting for you to explore.

Even without committing to a fully decoupled front-end, you can still flex these muscles in creative ways. For example, you could create a widget registry that contains reusable pieces of content. These widgets can be served on Drupal or anywhere else.

It has never been easier to island-hop while maintaining a home in Drupal. Likewise, it has never been easier to grow skillsets while working in Drupal that will serve you well if you do eventually decide to leave. 

This leads to our final point.

Focus on transferable skillsets

We have already hinted at this. When you double down on becoming a great Drupal developer, getting familiar with the internals and focusing on object-oriented best practices, you will find yourself right at home if you start developing a Symfony application.

Likewise, the Javascript you learn to consume Drupal APIs doesn’t care where the content comes from. The skills you learn will transfer.

If you feel disillusioned with Drupal, focusing on these transferable skills can help you get excited again because they can only open more opportunities for you. You won’t have a voice in the back of your mind whispering that you are stuck. With this comes confidence, and with confidence comes excitement.

As Drupal has matured, so have the tools surrounding Drupal. Package management, automated testing, and DevOps are all things you can learn about. Not only will they help your current projects, but they each present marketable skills outside of Drupal.

Deep knowledge of Vagrant, Docker, and some of the continuous deployment platforms has an immediate payoff and will pay dividends in the future.

Drupal doesn’t have to be your final destination. It is a useful partnership of opportunity. It can be a springboard to other things. Sometimes, recognition of this fact alone is enough to spark some excitement. 

Knowing you can leave any time you want makes it easier to stick around for a while.

Conclusion

Drupal has been around a long time, and sometimes it can be hard to maintain enthusiasm and excitement for it. But it can still help you solve difficult problems. Sometimes, you just need to shift your mindset a little.

Dig deeper and find joy in craftsmanship, try to package up libraries to trade with other communities, and recognize your ability to build a transferable skill set.

It’s hard to overcome burn-out, and you might need to step away. But if you leave, be sure you do it with clarity and intention, making sure you don’t leave anything valuable on the table.

Thanks to Mateu Aguiló Bosch, Marcos Cano, Andrew Berry, Mike Herchel, and David Burns for contributing ideas to this article.

Feb 15 2021

In our many years of helping clients, large and small, build and optimize their websites, Lullabot has seen a common pattern in digital projects: recurring cycles ranging from energy and enthusiasm to disillusionment and destruction.

This wave pattern is common, and you can see so many instances of it. There’s the Dunning-Kruger Curve, where you swing from the peak of Mount Stupid to the Valley of Despair when you learn a new skill. There’s the Economic Life Cycle of Boom, Bust, and Recovery that our economy grinds through over and over and over. There are the relentless cycles of high tide and low tide, and of course, the actual, endless waves of the ocean. The pattern is common because that’s the way nature and humans operate.

The Digital Destruction Wave

Let’s take a hypothetical, but pretty typical, example. Acme Corporation has a popular website that’s been in operation for nearly 10 years. They have hundreds of pages of content and thousands of images and a small editorial team that is increasingly frustrated with the work it takes to create new content. They’re using an old CMS, and they’ve seen others that appear to have nicer features. Their site looks tired anyway, and they’re ready for something new.

There’s enthusiasm for the idea of starting over. When comparing something they know intimately (with all its problems) to alternative solutions, Acme relies on demos, salespeople, and marketing information that highlight the best features of the new solutions and mention little, if anything, about their limitations.

Acme stakeholders settle on a new solution. Everyone is optimistic and very ready to get rid of the current CMS. They tackle the months-long process of designing the new solution, building it out, and populating it with all the old content, and they finally launch it.

But after launch, cracks start to emerge. Editors find that some things are easier to do, others are harder than they anticipated, and some things that used to work just fine now require compromises. Editors want to be able to do “anything,” but the new implementation focused on the need for simplicity, consistency, and stability by limiting editorial options. Political realities within the organization, rather than the site's actual needs, drove other decisions that further complicated the solution.

The list of requested changes and improvements grows as users and stakeholders actually use the site and realize they need more features or a different experience. Every new requirement complicates the UX and UI, adds hundreds of lines of new code, and increases the frustration. Sooner or later, the new solution is weighed down with just as many problems as the original. And the next wave of digital destruction begins as everyone wonders whether there is some other, better solution out there.

Building for the Long Term

Lullabot primarily builds Drupal sites, sometimes monolithic stacks where Drupal delivers the entire digital experience, sometimes decoupled sites with a Drupal back end and React (or something similar) on the front end.  And every year, we see this relentless wave pattern of clients reworking their digital properties in a quest for better digital experience and content management tools. Sometimes they’re coming from another CMS hoping that Drupal will finally solve their problems. Other times they have a Drupal site that has become bloated and inefficient, and they’re looking for a new solution.

Destroying everything and starting over every few years is enormously expensive and disruptive. 

But is it inevitable?

Below are three reasons why organizations get disillusioned with Drupal, each with ways to combat the problem.

No Priorities - Everything for Everyone

One of the main reasons organizations get disillusioned with Drupal is not unique to Drupal. It has to do with a lack of clearly defined goals.

Either they cannot articulate those goals in a meaningful way, or there are so many competing goals that there is a resistance to prioritizing. Prioritizing means agreeing on precise definitions. It means being locked in on specific directions and making some tradeoffs.

It means giving up on some grand panacea of building something that is everything for everyone. Many organizations do not want to give up on this vision. But that vision is never realistic. Trying to reach for that vision will mean disappointing everyone. You cannot say “yes” to everything.

In our experience, defining and prioritizing goals is the big problem to solve when implementing a new CMS. Failing to solve this problem upfront results in failure. Period. Though you might not realize the failure until more cracks have started to appear. Like playing against a chess grandmaster when you are a novice, you had already lost after the first move, even though the game wasn’t technically over for another 27 turns.

And this type of failure has nothing to do with Drupal.

For example, the marketing team’s goal might be to shrink the bounce rate and increase time on the website. That is what “success” is to them. But the sales team keeps hearing complaints from their prospective customers that spending so much time on the website before speaking to a human is frustrating.

This type of mismatch is a recipe for disillusionment, no matter what technology you choose to use.

How do you avoid this problem?

Solving this problem begins long before the project kickoff. In some cases, it starts before you even know you need the help of an outside vendor.

You must have the organizational governance to be able to make decisions and enforce those decisions. All stakeholders must feel as though they are heard. There must be clear and open communication and the ability to have difficult conversations.

Sometimes an outside party, like Lullabot, can help you have these conversations. An outside voice can cut through some of the noise and tease out information from shy stakeholders hesitant to speak up. The sooner you uncover potential mismatches and risks, the better off you will be when deciding priorities.

This can feel like an extended phase one. This is foundational work that sometimes has to do with your organization’s very identity. Why do you exist? Who are you serving? What does success look like overall? What does success look like for each department? Do they conflict? Why? What is most important?

Only after you can answer these questions should you move on to additional details. Presentation modeling workshops and other requirements-gathering exercises will run much more smoothly, and as a result, your new CMS has a higher chance of lasting for a long time.

Drupal is a Framework, Not a Product

Drupal is flexible, and that has been touted as one of its strengths. And it is a strength. You can do a lot with Drupal that you can’t do with other enterprise content management systems. It has a wide breadth: it can be used for things beyond landing pages, such as blogs, event systems, listing pages, contact forms, CRMs, e-commerce, and more.

But do enterprise content management systems really need that type of breadth? In the age of micro-services, a CMS product can do a few things really, really well and then integrate with other services to fill in the gaps. Most modern enterprise CMSes are polished products geared toward solving a limited set of problems.

Drupal can also integrate with other services, but if you ask what problems Drupal is designed to solve, you will get different answers depending on who you ask and what day of the week you ask them.

A Lack of Polish

Drupal has no set of polished functionality that is highly targeted to solve a limited set of problems. It is not a product. It is a framework.

Drupal’s structured content capabilities are first-class, rival many enterprise systems, and are better than any other free tool you will find on the market…but this advantage will not be visible to most stakeholders. A marketing team that wants to spin out beautiful-looking landing pages cares about certain success metrics, and the elegance of the underlying content model probably isn’t one of them.

To make up for this lack of initial focus, Drupal requires significant investment if you want to use it for a big project. This is offset a little by having no licensing fees to pay (which, for some enterprise CMSs, can run into the millions of dollars per year). This type of customization and investment comes with its own risks, like missed timelines and blown budgets.

The lack of constraints and limitations also means that other products will almost always have a more polished editorial experience. With Drupal, you often have to settle for “good enough,” and, outside of some core features, polished functionality is rare. Or at least, the cost of that final 10% of polish is prohibitive in both time and money.

This can all contribute to disillusionment, especially if expectations have not been set correctly. The grass will look greener on the other side. The spit and polish of a product demo can be alluring, especially when that demo doesn’t have to contend with your organization's underlying complexities and competing priorities.

Flexibility: Strength and Weakness

Because it is a flexible framework, Drupal is also complex. It is a set of tools that can be used in many different ways. The problem is that a screwdriver is not opinionated in how you use it. It will totally let you swing it and use it as a hammer. Sometimes, it might even work as a hammer.

Many Drupal installations end up looking like they were built by using screwdrivers as hammers. They work. Barely. But the lack of craftsmanship shows, making the system fragile and difficult to work on. Any attempts to change certain parts can cause everything to collapse.

When customized for mission-critical functionality, serving hundreds of editors across different internal teams, Drupal implementations require deep expertise. If this expertise is not present, you run the risk of building a brittle codebase that is hard to maintain and hard to extend. An extensible, flexible framework that becomes rigid and fragile is a perfect recipe for disillusionment.

How do you avoid these problems?

You need to be unrelentingly honest. You need to be clear about your goals and priorities (as discussed above), and you need to measure any polished functionality against those goals and priorities. It can be tempting to fall for the siren song of slick interfaces and smooth aesthetics. But keep a clear eye on what is important and impactful.

A new product may be what you need. A completely revamped editorial experience may be what you need.

But are you sure? Is the need more than skin deep?

Some other ways to ensure your Drupal implementation lasts for the long-term:

  • Have deep expertise. If you are trying to implement Drupal at scale, where it is mission-critical to your business, you need true Drupal experts. Not just PHP experts or Symfony experts. Drupal experts. If you can’t have them on your staff, you need to hire one of the top Drupal agencies (like Lullabot). You also need an architect with Drupal expertise who has the ability to see the big picture and can help be a bridge between your organization’s domain expertise and its technical implementation in Drupal.
  • Do not expect the slick editorial experience of a focused, polished product. Instead, you should manage expectations and keep the focus on what is actually important. Under the proper care, Drupal can provide you with an editorial experience that matches your business needs, but you need to reserve your budget for things that actually matter. Understand what makes for a good editorial experience, and Drupal will hold up very well when measured against that standard. Just don’t expect it to fly you to the moon. A pick-up truck doesn’t need the best stereo system in the world to do what it does best.
  • Make sure you need the flexibility of Drupal. You need to ask if Drupal is really suitable for your organization. Drupal’s strength is in its flexibility. That means, to really get gains versus other systems, Drupal needs to be used for situations that have unique needs and require extensive customization. Do you have multiple audiences seeking out content in multiple formats? Are you a large organization with various departments, each with its own editorial team, and need to maintain both consistency and flexibility?

Growth, Maintenance, and Technical Debt

This spins out of the previous section on Drupal being a flexible framework. These problems are not unique to Drupal, but because of Drupal’s unique strengths, these problems can manifest in unique ways.

Each installation has its quirks. Each implementation will make different decisions that map better to an organization’s needs.

This can lead to frustration in the future. Even if Drupal has been deployed in an expert fashion, problems will start to sprout. Organizations are not static, and the software that serves them cannot remain static.

Teams roll off, and new teams roll on. Turnover can cause gaps in knowledge. As new features are added, the potential for performance problems increases. Forms get longer and longer and more cluttered. Technical debt attaches to projects like barnacles on a ship. Drupal, and the many dependencies it relies on, must be updated and maintained.

Since Drupal isn’t opinionated, similar functionality can be implemented in many different ways. Different teams have different styles, and these styles can start to clash if there is no proper continuity plan. This leads to less code reuse and a breakdown in the overall organization.

All of these things can start to cascade like a slow-moving avalanche, eventually ending the honeymoon period of a successful launch and leaving a wide swath of disillusionment in its wake. 

How do you avoid these problems?

  • Maintain continuity after team roll-offs. This can come in many forms. Good documentation is always encouraged, though that requires effort on its own, and it requires the next team to care about it. It is better to focus on people. You can have at least one developer overlap both teams, and they work to continue the same standards. Or you can have the same company pick up support and maintenance for the project. The concept of “office hours” can work. These are set times when members of the new team work with some members of the old team who helped launch the project.
  • Be explicit when mapping business logic. Settling on a ubiquitous language and modeling that language with classes in your code can limit technical debt and make future maintenance smoother. Using something like Typed Entity to help keep your custom business logic separate can make it easier for new team members to jump in. These build fences around how developers extend Drupal, which leads to more explicit expectations and reins in some of the disadvantages of Drupal’s radical flexibility.
  • Maintain a clear view of your priorities. Managing your priorities doesn’t stop after the project launch. This takes us back to the very beginning. Your organizational governance must hold the line. New features must be filtered. New priorities and requirements must be surfaced in an organized manner. Unregulated growth is not good for any organism, and it remains true for your CMS implementation.

Conclusion: Noble Retirement

Your CMS investment can last a long time. And now, with Drupal’s new major releases not requiring migrations, your investment should last a long time. You don’t need to be subject to the waves of disillusionment and destruction.

Your investment won’t last forever. But it should last long enough for you to retire it on your own terms, long after you have seen an ROI. Send it off into the sunset with full honors, knowing that it did its duty and served you well.

Early destruction and re-creation are not inevitable. Manage your priorities, set reasonable expectations, ensure you have deep expertise at your disposal, and maintain continuity and consistency. These are easier said than done, but they can be done. Instead of being beaten down by the heavy waves of destruction, you can surf them with grace.

Contributed to by Karen Stevenson, Greg Dunlap, Mike Herchel, Mateu Aguiló Bosch, Chris Albrecht, Nate Lampton, Sean Lange, Marcos Cano, and Monica Flores.

Nov 05 2020

Matt and Mike talk with Sascha Eggenberger about the Gin admin theme, including its editorial interface changes, its relationship to the Claro theme, its future, and more!

Gin admin theme showing the node edit page
Oct 21 2020

The Drupal 8 to Drupal 9 upgrade path represents a big change for the Drupal world. The big change is that…your organization’s website won’t require a big change. The first iteration of Drupal 9 is just Drupal 8 with all of the deprecated code removed, mimicking Symfony’s model of major release upgrades.

This is good news. Keeping your platform up-to-date with the next version is no longer an “all hands on deck” situation.

As with all changes, however, this new model comes with its own challenges and problems. You will need to shift your own thinking and habits. When it comes to your Drupal website, your organization will need to begin running a marathon that goes on for years. Your relationship with outside vendors will take on a new cadence.

In this brave new world where upgrading to the next major Drupal release isn’t a big re-platforming effort, what does web development strategy look like?

Establish a cadence for web development work

One good thing about major re-platforming efforts is that they have to be planned for in advance. Budget and time have to be allocated. It acts as a huge celestial star, with everything else gradually falling into orbit around the big initiative. It draws out intention, direction, and sometimes enthusiasm, and none of these are bad things to have. 

Having this large, common goal that everyone sees with clarity can make a lot of this stuff come more easily, but now, you need to figure out how to create and harness these things while not depending on the existence of a monolithic target that dominates the landscape. And you need to maintain what you have at a sensible pace.

Planning out the release cycle

Like Drupal 8, Drupal 9 requires frequent updates. To get the latest security updates, you need to stay on the latest minor point release (9.1, 9.2, etc.). With this, you also have the possibility of getting new features that have been included, and you should be aware of them. Our article on Drupal 8 release planning is still relevant for Drupal 9. In summary:

  1.  Build a schedule of releases and support windows for your software. Not only for Drupal 9 but also for any contributed modules and other software that is part of your hosting stack.
  2.  Schedule updates ahead of time, and do not let the desire for new features cannibalize these dates. These should be scheduled from the top by project managers. This might mean a sprint every month or so 100% dedicated to updates. These should be just as visible as other initiatives that are being developed and be treated as equally important.
  3. Promote any new features that were rolled out by these updates.

Planning new initiatives and features

Ideally, with a more iterative approach to development, stakeholders stay more involved and informed, and therefore better discussions can be had around the website. With the old Drupal upgrade model, there was a risk of the dreaded stakeholder swoop: someone swoops in, lists a bunch of requirements without regard to overall goals and priorities, and swoops back out. Sometimes they aren’t seen again until close to launch.

That risk still exists but is mitigated by the extra number of touchpoints required from a more long-term, iterative approach. There are more starts and mini-launches. If a typical swooping stakeholder wants to get something done, they will have to do a lot more swooping, which might start to look more like informed involvement.

Regardless, you need to be more intentional with planning out and prioritizing new features. You can’t let every neat idea, frustration, or new design collect in a bucket over the course of two years, only to implement them on the new platform. With iterative development, your organization will need to communicate more, not less.

Set up regular touchpoints with stakeholders and domain experts. This will look different for every organization. The Marketing department might be the main driver. In that case, you’ll need to regularly meet with the person who has the authority to make requests and set priorities, in addition to domain experts that can answer questions and provide deeper knowledge. In smaller organizations, this could all be invested in one person.

If your website represents the needs of many different silos, like a company selling multiple products, then you will need regular meetings for each different product team. The same standards apply. You need someone with authority and domain knowledge.

These meetings should match up with your project management philosophy. For example, are you running sprints via agile? Invite these folks to sprint planning. Requirements with large uncertainties can trigger the creation of a targeted discovery phase with stakeholders, which becomes part of a sprint, just like all of the other tickets.

Regular usability tests are another way to find potential improvements. These don’t have to take a lot of time and money. The book Don’t Make Me Think outlines a simple framework that anyone can implement, and running these tests once per month is usually enough to fill any gaps in your pipeline.

Re-factoring and technical debt

Codebases tend to gather junk over time. This comes in the form of disorganized code, “temporary” fixes that have become permanent when no one was paying attention, things that work but could use performance tuning, and modules included in the codebase that aren’t used anymore.

Previous upgrade cycles allowed messes to be stacked up into a closet somewhere. When the migration or re-platforming came along, everything from that closet could be safely dragged out and lit on fire. Easy clean-up.

With an iterative model, you can’t afford to keep pushing things off. Eventually, that closet will need to be so big that it takes up the entire house.

Start making an inventory of things that need to be cleaned up, and start adding these as tasks for your team. Maybe your goal is to complete two technical debt tickets per month. Maybe you start smelling something really foul and need an entire sprint dedicated to a refactoring. Maybe a new feature will be easier to implement if some other code is reorganized, so you add that task as a pre-requisite. 

However you do it, do it with intention and planning. That closet is not going to clean itself.

Allocating development resources

You have planned out the release cycle. You have a list of new feature requests that is constantly growing thanks to your increased communication with stakeholders. And you have identified good targets for refactoring. 

Now what?

Prioritize and allocate. This, of course, depends on your team's size and the number of stakeholders involved in the work. Your project managers have their work cut out for them because they also have to worry about allocating themselves properly within the changing tides of shifting priorities.

You might give each stakeholder a team they work with exclusively. This helps people grow comfortable working together and builds rapport. You are less likely to need project kickoffs each time something starts. 

You might also rotate developers and teams so everyone has experience doing a bit of everything. That way, there is some overlap in case of emergencies or turnover. It also keeps things fresh and can aid in preventing burnout. Let developers speak up and tell you what they’re thinking, and, if possible, indulge their preferences.

It can be helpful to have the same person in charge of the release schedule day after day, month after month. But does someone actually want that job day-to-day? Maybe they do. But you should be sure.

For smaller teams, you may have to split things up by month or quarter. Maybe November and December are the months everyone focuses on technical debt. Maybe the first quarter of the year is reserved for new design initiatives and higher priority feature rollouts.

But do not let security and software updates get lost in the shuffle. Do not get lost in the weeds of zealous re-factoring. Do not ignore the needs of your stakeholders. This can feel like a juggling act, but it is one your organization must master to keep your website secure, relevant, and successful.

Celebrate

There is no more “big reveal” and launch of a new website. The highs and lows, the victories and stress, are hopefully flattened out to more manageable peaks and valleys. That doesn’t mean you can’t celebrate the completion of smaller initiatives, however. You absolutely should. Announce them, treat them as huge milestones, make your hand sore from giving so many high fives and pats on the back.

This can be done in many ways.

  • Public recognition in meetings and internal newsletters.
  • Demos and learning days during which people responsible for the work show off what they have done and learned.
  • Small parties that happen immediately after lunch.

Whatever fits within your culture, do it.

Since each initiative and feature stands more on its own, it doesn’t get drowned out in the excitement of a “big reveal” where the gloss of so many new features can blind people from seeing certain parts of the work that have been done.

Iterative releases allow for more focus. And they give more opportunities for kudos. Take advantage of them.

Revisiting site architecture and content

Historically, many organizations have used major Drupal version migrations as a cadence to review information architecture. In part, this has been because content migrations across major versions haven't always been 1:1 migrations, so organizations have had to undertake information architecture (IA) work to decide what to migrate and where it should go. 

The other aspect of this is that in more complex migrations, it might have been faster to remove old content and deprecated content types than to spend the time migrating them, especially in the event of a custom migration.

The upside of migrating from Drupal 8 to Drupal 9 and beyond is that there is no content migration. The downside is that organizations now need another cadence for undertaking information architecture and strategy projects. This is similar to creating a new pattern of web development work, and a lot of the tips for development also apply here.

In fact, architecture and strategy should drive most new development work. Get into the habit of doing periodic IA and content audits. Every year or every six months. Whatever makes sense for your organization. It’s part of having a well-kept house. 

Some example questions you could start asking:

  • Is historic content still meeting your needs? Can it be updated, moved to another content type, or archived?
  • Does the site contain one-time-use fields that should be deprecated?
  • Do content types need to be trimmed?
  • Have the site's navigation needs changed?
  • Does our taxonomy structure still meet our needs and goals?
  • Have we added new departments or consolidated older ones? Are they represented properly?
  • Has our audience shifted? Have we started targeting new audiences?
  • How has our business changed? Have new competitors required us to think of different approaches and goals?

Sometimes your IA might require a major shift, and that shift needs to happen to a website that needs to remain up and running. For example, consolidating two content types into one new content type.

The good news is that the robust, well-tested migration tools that would have aided you during a full upgrade are there to help you accomplish a smaller shift. Drupal migrate tools are great at importing content and pouring that content into different structures. Take advantage of them, even if it might feel like overkill at first.

Bringing in outside help

Large upgrades and migration efforts demanded expertise and more developer hours, but without this traditional demand, is there still room to hire outside help?

Yes, there is. It can make a lot of sense, depending on your situation. In many circumstances, establishing a longer-term relationship with a vendor can yield even more gains, as the external team isn’t just around for six months to a year, then off to the next thing. Trust has time to grow. Communication settles into familiar rhythms. Everyone involved becomes more comfortable with each other.

Long-term engagements were still possible before the Drupal 9 paradigm, of course. At Lullabot, we have worked with some of our clients for 5+ years, which has enabled us to contribute in unique ways. But without an inevitable, looming migration in the distance, a long-term relationship allows for additional possibilities, and we are excited about the potential.

There are many ways outside expertise can be utilized beyond a big project crunch. Keep in mind the lines between the following categories are fuzzy, but they are good places to start when trying to determine if you want to hire an external vendor.

Support and maintenance

If you want your own team to focus on new features or work you consider higher-value, using external help for support and maintenance of your existing infrastructure can provide a level of consistency that allows you to forge ahead.

Put a specialized team in charge of managing your release cycle. They manage the software updates and work within your schedule. 

A skilled support team can also help manage bug fixes. This frees up your developers’ time, so they aren’t bouncing around between tasks, losing a little bit of productivity each time. Help them avoid the concentration whiplash.

How do you know you might want help with support?

  • Assigning responsibility for the release cycle is like a game of hot potato. No developer really wants it.
  • The backlog of “urgent” bugs is growing month-over-month instead of shrinking. Complaints from stakeholders begin to grow.
  • Your software is continually months out of date, and it requires everyone’s attention to rectify the situation.

Fill gaps of expertise

Even if you have a large, diverse development team, you probably have some experience gaps. Technology is complicated. It’s hard for a team to stay up to date with everything that’s going on all of the time, and it can help to bring in some specialists.

Security audits. Accessibility audits. Performance audits. These are good opportunities for someone to come in, give you some actionable items, and ride off into the sunset. Each round of these helps educate your staff, as well. Depending on the scale of your website, it might make sense to augment your team with this type of experience for long-term engagement instead of periodic audits.

DevOps and continuous integration are also things that can benefit from a dedicated resource. A good expert in this area can help make your entire team more productive. They can set up and maintain the automatic deployment and testing of code, manage local development setups, and help enforce best practices.

Content strategy and design are areas that can benefit from outside perspectives. Good talent in these areas can help make your internal projects more successful by forcing you to clarify priorities.

Increase development velocity

Sometimes, you just don’t have the resources required to meet your goals. Too many initiatives with too many people demanding their pound of flesh. These requests and requirements could all be funneled through the Marketing department, or maybe your company has multiple departments that each own a section of the website.

Either way, you need more help, and you need that help to hit the ground running. An expert team can be integrated in several ways.

  • They can augment the current team with no change in structure. With this model, they become additional team members for you to utilize, whether they are project managers, content strategists, designers, or developers. The aim is to increase the general velocity of your development work.
  • They can come in with more focused intent as a differentiated team. They are assigned to a stakeholder or a specific initiative, so important work can get pushed forward without interrupting your normal development workflow. Close collaboration can still happen, but the external team has different priorities they will focus on.
  • A mix of both paradigms. There are no hard and fast lines to draw, and being flexible has its advantages. For example, after a team has completed a specific initiative, they move on to another one, or the team is split up and dispersed throughout other internal teams so domain knowledge can be spread. Or maybe they move to a support role.

Brave new world

Despite not having a big re-platforming effort on the horizon, web development in a Drupal 9 world still requires planning, thought, and intention. Release cycles need to be managed. New work needs to be planned and developed. Stakeholders need to be kept happy.

Drupal 9 makes it easier to take advantage of new features while keeping your site secure, successful, and relevant, but you can’t push things off anymore. No more waiting for the big migration to get rid of technical debt or re-work your information architecture. No more sweeping things under the rug until spring cleaning.

New habits need to be formed. New cadences need to be implemented. Exciting times are ahead as we juggle this new reality.

Oct 07 2020

About host Matt Kleve

Portrait of Matt Kleve

Matt Kleve has been a Drupal developer since 2007. His previous work in the media sparked a desire to create lean, easy-to-use workflow processes.

About host Mike Herchel

A senior front-end developer, Mike is also a lead of the Drupal 9 core "Olivero" theme initiative, an organizer for Florida DrupalCamp, the maintainer of the Drupal Quicklink module, and an expert hammocker.

Jul 15 2020

Continuous Deployment, Infrastructure as Code, and Drupal

The previous article of this series provided an overview of setting up a GitHub Actions workflow that would publish changes into a Kubernetes cluster. This article takes you through each of the steps of such a workflow.

For reference, here is the GitHub Actions workflow in the sample repository at .github/workflows/ci.yml:

on:
  push:
    branches:
      - master
name: Build and deploy
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 1

      - name: Build, push, and verify image
        run: |
          echo ${{ secrets.PACKAGES_TOKEN }} | docker login docker.pkg.github.com -u juampynr --password-stdin
          docker build --tag docker.pkg.github.com/juampynr/drupal8-do/app:${GITHUB_SHA} .
          docker push docker.pkg.github.com/juampynr/drupal8-do/app:${GITHUB_SHA}
          docker pull docker.pkg.github.com/juampynr/drupal8-do/app:${GITHUB_SHA}
      - name: Install doctl
        uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}

      - name: Save cluster configuration
        run: doctl kubernetes cluster kubeconfig save drupster

      - name: Deploy to DigitalOcean
        run: |
          sed -i 's|<IMAGE>|docker.pkg.github.com/juampynr/drupal8-do/app:'${GITHUB_SHA}'|' $GITHUB_WORKSPACE/definitions/drupal-deployment.yaml
          kubectl apply -k definitions
          kubectl rollout status deployment/drupal
      - name: Update database
        run: |
          POD_NAME=$(kubectl get pods -l tier=frontend -o=jsonpath='{.items[0].metadata.name}')
          kubectl exec $POD_NAME -c drupal -- vendor/bin/robo project:files-configure
          kubectl exec $POD_NAME -c drupal -- vendor/bin/robo project:database-update

There are several systems involved in the above workflow: GitHub Actions builds the Docker image, GitHub Packages hosts it, and the DigitalOcean Kubernetes cluster pulls it and runs the deployment.

Building an image

The first step at the GitHub Actions workflow builds a Docker image containing the operating system and the application code, along with its dependencies and libraries. Here is the GitHub Actions step:

 - name: Build, push, and verify image
  run: |
    echo ${{ secrets.PACKAGES_TOKEN }} | docker login docker.pkg.github.com -u juampynr --password-stdin
    docker build --tag docker.pkg.github.com/juampynr/drupal8-do/app:${GITHUB_SHA} .
    docker push docker.pkg.github.com/juampynr/drupal8-do/app:${GITHUB_SHA}
    docker pull docker.pkg.github.com/juampynr/drupal8-do/app:${GITHUB_SHA}

The Docker images are named with the Git commit hash via the environment variable ${GITHUB_SHA}. This way, you can match deployments with commits in case you need to review or roll back a failed deployment. We also created a personal access token to authenticate against GitHub Packages and saved it as the GitHub secret PACKAGES_TOKEN.

The Dockerfile that docker build uses for building the Docker image is quite simple:

FROM juampynr/drupal8ci:latest
COPY . /var/www/html/
RUN robo project:build

The base image is juampynr/drupal8ci. This image is an extension of the official Drupal Docker image, which extends from the official PHP Docker image with an Apache web server built in. It has a few additions like Composer and Robo. In a real project, you would use Ubuntu or Alpine as your base image and define all the libraries that your application needs.

The last command in the above Dockerfile, robo project:build, is a Robo task. Robo is a PHP task runner used by Drush, among others. Here is the output of running this task in a GitHub Actions run:

Step 3/3 : RUN robo project:build
---> Running in 2b550db4d25e
[Filesystem\FilesystemStack] _copy [".github/config/settings.local.php","web/sites/default/settings.local.php",true]
[Composer\Validate] Validating composer.json: /usr/local/bin/composer validate --no-check-publish
[Composer\Validate] Running /usr/local/bin/composer validate --no-check-publish
./composer.json is valid
[Composer\Validate] Done in 0.225s
[Composer\Install] Installing Packages: /usr/local/bin/composer install --optimize-autoloader --no-interaction
[Composer\Install] Running /usr/local/bin/composer install --optimize-autoloader --no-interaction
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file

In essence, robo project:build copies a configuration file and downloads dependencies via Composer.
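For reference, such a task could be defined in the project's RoboFile along these lines (a minimal sketch; the actual task in the sample repository may differ in its details):

<?php

// RoboFile.php: a minimal sketch of a project:build task. The file paths are
// taken from the build log above; everything else is an assumption, not the
// exact code from the sample repository.

use Robo\Tasks;

class RoboFile extends Tasks {

  /**
   * Copies the CI settings file into place and installs Composer dependencies.
   */
  public function projectBuild() {
    // Copy the CI-specific settings into the Drupal site directory.
    $this->taskFilesystemStack()
      ->copy('.github/config/settings.local.php', 'web/sites/default/settings.local.php', TRUE)
      ->run();

    // Validate composer.json, then install dependencies with an optimized
    // autoloader, mirroring the commands shown in the build log.
    $this->taskExec('composer validate --no-check-publish')->run();
    $this->taskExec('composer install --optimize-autoloader --no-interaction')->run();
  }

}

Robo exposes the projectBuild() method as the project:build command, which is what the Dockerfile calls.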

The result of the docker build command is a Docker image that needs to be published somewhere the Kubernetes cluster can pull it from during deployments. In this case, we are using GitHub Packages to host Docker images.

Connecting GitHub Actions with the Kubernetes cluster

At this point, we have built a Docker image with our application containing the latest changes. To deploy such an image to our Kubernetes cluster, we need to authenticate against it. DigitalOcean has a Kubernetes action that makes this easy via doctl, its command-line interface.

- name: Install doctl
  uses: digitalocean/action-doctl@v2
  with:
    token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}

- name: Save cluster configuration
  run: doctl kubernetes cluster kubeconfig save drupster

The first step uses DigitalOcean’s GitHub action to install doctl while the second one downloads and saves the cluster configuration, named drupster.

As for the DigitalOcean setup, there is nothing fancy about it. You can just sign up for a Kubernetes cluster that uses three nodes (aka Droplets, in DigitalOcean’s lingo).

The Kubernetes Dashboard link at the top right corner takes you to a page that shows an overview of the cluster. That dashboard is a great place to monitor deployments and look for errors. Speaking of deployments, this is what the next section covers.

Deploying the image into the cluster

Here is the step that deploys the Docker image to DigitalOcean:

- name: Deploy to DigitalOcean
  run: |
    sed -i 's|<IMAGE>|docker.pkg.github.com/juampynr/drupal8-do/app:'${GITHUB_SHA}'|' $GITHUB_WORKSPACE/definitions/drupal-deployment.yaml
    kubectl apply -k definitions
    kubectl rollout status deployment/drupal

The first line fills a placeholder with the image being deployed, which came from the commit hash. The next two lines perform the deployment and verify its status.

Note: We will inspect each of the Kubernetes objects in the next article of this series, but if you are curious, you can find them here.

After the deployment is triggered, the Kubernetes master node will destroy the pods and create new ones containing the new configuration and code. This process is called a rollout, and it may take a variable amount of time depending on the scale of the deployment. In this case, it took around 20 seconds.

If something goes wrong while pods are being recreated, there is the chance to undo the changes by rolling back to a previous deployment configuration via kubectl rollout undo deployment/drupal.

Updating the database

If the deployment succeeded, then the next step is to import configuration changes and run database updates. An additional command was added to ensure that file permissions are correct (due to issues found while setting up the workflow). Here is the step:

- name: Update database
  run: |
    kubectl exec deployment/drupal -- vendor/bin/robo project:files-configure
    kubectl exec deployment/drupal -- vendor/bin/robo project:database-update

The above step completes the workflow. 

Next in this series…

The next and last article in this series will cover each of the Kubernetes objects in detail.

Acknowledgments

If you have any tips, feedback, or want to share anything about this topic, please post it as a comment here or via social networks. Thanks in advance!

Jun 18 2020

Drupal 7 to 9 Upgrade

If your site is one of the 70% of Drupal sites that are still on Drupal 7 at the time of this writing, you may be wondering what the upgrade path looks like to go from Drupal 7 to Drupal 9. What does the major lift look like to jump ahead two Drupal versions? How is this different than if you'd upgraded to Drupal 8 sometime in the last few years? And how long will it be before you have to do it again?

Upgrading via Drupal 8

Before the release of Drupal 9, the best path for Drupal 7 sites to upgrade to Drupal 9 was to upgrade to Drupal 8. The big selling point in Drupal 9's evolution is that updating from a late version of Drupal 8 to Drupal 9.0 is more like an incremental upgrade than the massive replatforming effort that the older Drupal migrations used to entail. Sites that jumped on the Drupal 8 bandwagon before Drupal 9.0 was released could benefit from the simple upgrade path from Drupal 8 to Drupal 9.0 instead of another big migration project.

Migrating to Drupal 8 is still a good option for Drupal 7 sites, even though Drupal 9 is now out.

You might find that the essential modules or themes you need are ready for Drupal 8 but not yet available for Drupal 9. The Drupal 8 to Drupal 9 upgrade path for many modules and themes should be relatively trivial, so many of them should be ready soon. But, there could be some outliers that will take more time. In the meantime, you can do the heavy lift of the Drupal 7 to Drupal 8 migration now, and the simpler Drupal 8 to Drupal 9 upgrade later, when everything you need is ready.

The Drupal 7 to Drupal 8 migration

The Drupal 7 to Drupal 8 upgrade involves some pretty significant changes. Some of the things you previously needed to do via contributed modules in Drupal 7 are now included in Drupal 8 core. However, the way you implement them may not be the same, as some refactoring might be required to get feature parity when you migrate to Drupal 8.

The migration itself isn't a straight database upgrade like it was in Drupal 6 to Drupal 7; instead, you can migrate your site configuration and site content to Drupal 8. You can do it one of two ways:

  1. Migrate everything, including content and configuration, into an empty Drupal 8 installation (the default method).
  2. Manually build a new Drupal 8 site, setting the content types and fields up as you want them, and then migrate your Drupal 7 content into it. 

For a deeper dive into what these migrations look like, check out An Overview for Migrating Drupal Sites to 8.

Planning migrations

The Migration Planner is a helpful tool you may want to consider in your migration planning process. This tool queries a database to generate an Excel file that project managers or technical architects can use to help plan migrations. Developers who are performing the migrations can then use the spreadsheets.

Performing migrations

Core comes with some capability to migrate content automatically. If your site sticks to core and common contributed content types and fields, you may be able to use these automatic migrations. However, if your site relies heavily on contributed modules or custom code, an automatic migration might not be possible; you may need a custom migration approach.

The Drupal Migrate Upgrade, Migrate Plus, and Migrate Tools modules are good starting points for performing a custom migration. They add things like Drush support for the migration tasks and migration support for some non-core field types. You can access several custom migration processors that make it easy to do some fairly complex migrations. This can be done just by adding a couple of lines to a YAML file, like an entity_lookup processor that will take text from Drupal 7 content and do a lookup to determine what Drupal 8 entity the text refers to.

Drupal 7 runs on older versions of PHP but now recommends a minimum of PHP 7.2, and Drupal 9 requires at least PHP 7.3. If you're migrating from an older Drupal 7 site, there may be several other platform requirements to investigate and implement before you can upgrade.

Tooling and paradigm shifts

With the change to Drupal 8, developers are also expected to use new tools. You now use Composer to add modules and their dependencies, rather than Drush. Twig has replaced PHPTemplate as the default templating engine. Some core paradigms have shifted; for instance, developers need to learn to think in terms of events, or extending objects, instead of the old system of hooks. Many hooks still work, but they will probably be deprecated over time, and the new methods are safer ways to write code. The changes aren't insurmountable, but your organization must invest in learning the new way of doing things. You'll need to account for this education overhead when coming from Drupal 7; development teams may need more time to complete tasks as they learn new tools and paradigms.
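To make the tooling change concrete, here is a hedged comparison using the contributed Pathauto module as an example; it assumes a Drush 8 workflow on the old site and a Composer project with the drupal.org package repository on the new one.

    # Drupal 7 era: Drush downloads the module, dependencies are handled by hand.
    drush dl pathauto
    drush en pathauto

    # Drupal 8/9 era: Composer fetches the module and its dependencies
    # (Token, Ctools), then Drush enables it.
    composer require drupal/pathauto
    drush en pathauto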

Drupal 8's deprecation model

In addition to big changes in Drupal 8 core and implementation details, Drupal 8 also features a deprecation model that's familiar in the software world but new to Drupal version upgrades. Instead of breaking a large amount of code all at once at a major version boundary, Drupal 8 introduced a gradual deprecation model.

As features and improvements land in Drupal 8's codebase, old methods and functions are marked as deprecated within the code. Then, in the next major release - Drupal 9 - that code is removed. This grace period of backward compatibility lets development teams see alerts that code is deprecated and gives organizations time to adopt the replacement APIs before the old code is removed entirely.

The deprecation notices also provide an easy hint about how to rework your code using the new services and methods: look at what the deprecated function or hook does under the hood, and do that directly in your code.
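A small, well-known example of this pattern in PHP is drupal_set_message(), which was deprecated during the Drupal 8 cycle and removed in Drupal 9 in favor of the messenger service:

    // Deprecated during Drupal 8's lifetime, removed in Drupal 9: the old
    // procedural function simply wraps the messenger service.
    drupal_set_message(t('Settings saved.'));

    // The replacement: call the messenger service directly (or, better,
    // inject it into your class instead of using the static \Drupal wrapper).
    \Drupal::messenger()->addMessage(t('Settings saved.'));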

This gradual deprecation model is one of the core reasons that the Drupal 9 upgrade is more like a minor version release for Drupal 8 than a major replatforming effort.

Jumping straight to Drupal 9

With that said, can you skip Drupal 8 entirely and jump straight from Drupal 7 to Drupal 9? Yes. The Drupal 7 migration ecosystem is still available in Drupal 9, which contains the same migrate_drupal module you would use to migrate to Drupal 8. There has been discussion about moving this functionality out of core and into a contributed module by Drupal 10, although no decision has been made at the time of this writing.
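If you take this route with a Drush-based workflow, the contributed Migrate Upgrade module can generate the Drupal 7 migrations for you. A hedged sketch, with placeholder connection details, might look like this; check `drush help migrate:upgrade` for the exact options in your Drush and module versions.

    # Assumes core's migrate_drupal plus the contributed Migrate Upgrade module.
    drush migrate:upgrade \
      --legacy-db-url=mysql://user:password@localhost/drupal7_db \
      --legacy-root=https://example.com \
      --configure-only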

If you intend to go this route, keep in mind that all of the considerations when upgrading from Drupal 7 to Drupal 8 apply if you jump straight to Drupal 9, as well. You'll still have to manage the migration planning, deal with tooling and paradigm shifts, and consider platform requirements.

Ultimately, however, jumping directly from Drupal 7 to Drupal 9 is a valid option for sites that haven't migrated to Drupal 8 now that Drupal 9 is released. 

When to migrate to Drupal 9

Whichever route you choose, whether you're going to migrate via Drupal 8 or straight to Drupal 9, you should start the migration from Drupal 7 to Drupal 9 as soon as possible. Both Drupal 7 and Drupal 8 will reach end-of-life in November 2021, so you've got less than a year and a half to plan and execute a major platform migration before you'll face security implications related to the end of official Drupal security support. We'll cover that in more detail later in this series. 

For any site that's upgrading from Drupal 7, you'll need to do some information architecture work to prepare for the migration to Drupal 8 or Drupal 9. Once you're on Drupal 8, though, the lift to upgrade to Drupal 9 is minimal; you'll need to look at code deprecations, but there isn't a major content migration to worry about. Check out our Preparing for Drupal 9 guide for more details around what that planning process might look like.
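One common way to look at those code deprecations is static analysis. The sketch below assumes the third-party drupal-check tool (the Upgrade Status module is another option) and a typical Composer project with a web/ docroot; adjust the paths to match your layout.

    # Scan custom modules and themes for deprecated API usage before the
    # Drupal 9 upgrade.
    composer require --dev mglaman/drupal-check
    vendor/bin/drupal-check web/modules/custom web/themes/custom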

But what about waiting for a later, more stable version of Drupal 9, you ask? This is a common strategy in the software world, but it doesn't apply to the Drupal 9 upgrade. Because Drupal 9 is being handled more like an incremental point-release upgrade to Drupal 8, there aren't any big surprises or massive swaths of new code in Drupal 9. The core code that powers Drupal 9 is already out in the world in Drupal 8. There are no new features in the Drupal 9.0 release; just the removal of code that has already been deprecated in minor versions of Drupal 8.

Going forward, the plan for Drupal 9 is to release new features every six months in minor releases. The intent is for these features to be backward compatible, and to bring Drupal into the current era of iterative development versus the major replatforming projects of olde. There aren't any big surprises or major reliability fixes on the horizon for Drupal 9; just continued iteration on a solid platform. So there's no need or benefit to waiting for a later version of Drupal 9!

How to migrate to Drupal 9

Plan for migration

Planning for a Drupal 7 to Drupal 8 or Drupal 7 to Drupal 9 migration becomes a question of scope. Do you just want to migrate your existing site's content into a modern, secure platform? Or are you prepared to make a bigger investment to update your site by looking at information architecture, features, and design? Three factors that will likely shape this decision-making process include:

  • Time and budget
  • Developer skillset
  • Release window

Time and budget for a migration

How much time are you able to allocate for what is likely to be a major replatforming effort? What's your budget for the project? Do you need to launch before a key date for your organization, such as college registration or a government deadline? Can your budget support additional work, such as a design refresh?

For many organizations, getting the budget for a large project is easier as a one-time ask, so doing the design refresh as part of the migration project may be easier than migrating, and then planning a separate design project in six months. In other organizations, it may be difficult to get enough budget for all the work in one project, so it may be necessary to spread the project across multiple phases; one phase for the migration, and a separate phase for design.

When factoring in the time and budget for additional work, keep in mind that things like revisiting a site's information architecture could save you time and money during the migration itself. Budgeting for that work up-front can dramatically reduce time and cost later in the process by trimming unnecessary complexity before you migrate, instead of writing custom migrations to bring over content and entity types you no longer use. It also improves maintainability and saves time for developers and editors doing everyday work on the new site.

Consider developer skills when planning your migration

10 years is a long time for developers to be working with a specific framework. If you've been on Drupal 7 since 2011, your developers are likely very experienced with "the Drupal 7 way" of doing things. Many of those things change in Drupal 8. This is a big factor in developer resistance around upgrading to Drupal 8 and Drupal 9.

Composer, for example, is a huge change for the better when it comes to managing dependencies. However, developers who don't know how to use it will have to learn it. Another big difference is that much of Drupal 8 and Drupal 9's core code is built on top of Symfony, which changes many of the mental models experienced Drupal developers are accustomed to. While some things may seem unchanged - a Block is still a Block, for example - the way they're implemented is different. Some things don't look the same anymore; developers will encounter things like needing to use YAML files instead of hooks to create menu items. Even debugging has changed; simple debugging via print() statements doesn't always cut it in the new world, so many developers use IDEs like PHPStorm, or a host of plugins for other editors, just to code effectively in newer versions of Drupal.
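For instance, a menu link that a Drupal 7 developer would have declared in hook_menu() now lives in a *.links.menu.yml file. A minimal sketch, using a hypothetical module called mymodule, looks like this:

    # mymodule.links.menu.yml - "mymodule" and its route are hypothetical.
    mymodule.settings:
      title: 'My module settings'
      description: 'Configure the hypothetical My Module.'
      route_name: mymodule.settings
      parent: system.admin_config_system
      weight: 10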

All of this change comes with overhead. Developers must learn new tools and new ways of doing things when switching from Drupal 7 to Drupal 9. That learning curve must be factored into time and budget not only for the migration itself but for ongoing development work and maintenance after the upgrade. Progress during sprints will likely slow, and developers may initially feel resistant or frustrated while they learn the new ways of doing things.

Bringing in outside help during the migration process can mitigate some of this learning overhead. Partnering with an experienced Drupal development firm means your migration can be planned and implemented more quickly. When selecting an outside partner, consider how closely they will collaborate with your internal team and whether their developers can "teach" your team the new ways of doing things. That knowledge transfer reduces the learning curve for a team that's heavily experienced with older Drupal versions and can help your team get up to speed more quickly - saving money during the first year of your new site.

Plan a release window

The other aspect of planning for the Drupal 7 to Drupal 9 upgrade is planning a release window. Plan to have your migration project complete before Drupal 7 is scheduled to reach end-of-life in November 2021. If you can't make that deadline, then start planning now for an Extended Support engagement to keep your site secure until you're able to complete the migration.

You'll want to plan the release window around key dates for your organization, and around other support windows in your stack. For example, if you're a retailer, you may want to have the migration completed before the end of Q3 so you're not upgrading during holiday initiatives. Education organizations may plan their release during slow periods in the school's calendar, or government websites may need to be ready for key legislation. 

When it comes to your stack, you'll want to plan around other important release windows, such as end-of-support for PHP versions, or upgrading to Symfony 4.4. This is particularly important if you need to upgrade dependencies to support your Drupal 7 to Drupal 9 migration. Check out Drupal 8 Release Planning in the Enterprise for more insights about release planning.

Revisit information architecture, features, and design

Because the jump from Drupal 7 to Drupal 9 is so substantial, this is a good time to revisit the information architecture of the site, do a feature audit, and consider whether you want to make design changes. 

Is it time to update your site's information architecture?

Before you jump into a Drupal 9 upgrade project, you should perform an audit of your existing Drupal 7 site to see what you want to carry forward and what you can lose along the way. Did you set up a content type that you only used once or twice, and never touched again? Maybe you can delete that instead of migrating it. Are you using a taxonomy that was set up years ago, but no longer makes sense? Now is a good time to refine that for the new version of your site.

A content migration is also a relatively convenient time to manipulate your data. You can migrate Drupal 7 nodes or files into Drupal 9 media entities, for instance. Or migrate text fields into address fields, or list fields into taxonomy terms. Or merge multiple Drupal 7 content types into a single Drupal 9 content type. Or migrate content from a deprecated Drupal 7 field type into a different, but supported, Drupal 9 field type. These manipulations take a bit more work in the migration, but they're entirely possible with the migration toolset and aren't difficult for developers with migration experience - and the internet is full of articles about how to do them.

In addition to the fine details, it's also a good time to take a look at some big-picture questions, like who is the site serving? How has this changed since the Drupal 7 version of the site was established, and should you make changes to the information architecture to better serve today's audience in the upcoming Drupal 9 site? 

Have your feature needs changed?

Drupal 7 was released in 2011. Nearly a decade later, in 2020, the features that matter have changed from what seemed important at Drupal 7's inception. How have the feature needs of your content editors changed? Has your site become media-heavy, and do your content editors need large searchable image archives? Do you want to deliver a dynamic front-end experience via a bespoke React app, while giving content editors a decoupled Drupal framework to work in?

Many editors love the new Layout Builder experience for creating customized site pages. It's something that doesn't exist in Drupal 7 core and is arguably better than what you get even when you extend Drupal 7 with contributed modules. Drupal 8 and 9 have built-in media handling and a WYSIWYG editor, eliminating the need for dozens of Drupal 7 contributed modules that do not always cooperate with each other, and focusing developer attention on the editorial UX for a single canonical solution.

Revisit the needs of your content editors and site users to determine whether any existing features of the current site are no longer important and whether new feature needs warrant attention in the upgrade process. This is particularly helpful if you find that features currently provided by contributed modules are no longer needed; then you don't have to worry about whether those modules are available for Drupal 8/9 at all, and you can simply drop them.

Ready for a design update?

If your Drupal 7 site hasn't had a design refresh in years, the upgrade is a good opportunity to plan one, scheduled for after the migration itself is complete. Drupal 9 will have a new default theme, Olivero, which features a modern, focused design that is flexible and conforms with WCAG AA accessibility guidelines. Olivero has not yet been added to Drupal core - it's targeted for 9.1 - but it is already available as a contributed theme that any Drupal 8 or Drupal 9 site can use. Olivero is a great starting point for sites that want an updated design.
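If you want to try Olivero before it lands in core, the contributed theme can be added like any other project. A minimal sketch, assuming a Composer-managed site with Drush:

    # Fetch the contributed Olivero theme and make it the default theme.
    composer require drupal/olivero
    drush theme:enable olivero
    drush config:set system.theme default olivero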

If you're planning a custom design project, keep accessibility and simplicity at the forefront of your design process. You may want to engage in the design discovery process with a design firm before you plan your Drupal 9 release; a good design partner may make recommendations that affect how you proceed with your migration.

Perform the migration

The process of migrating from Drupal 7 to Drupal 8 has improved since Drupal 8's initial release, but it can still be an intricate and time-consuming process for complex sites. We wrote An Overview for Migrating Drupal Sites to 8 to provide some insight into this process, but sites that are upgrading must still:

  • Plan the migration
  • Generate or hand-write migration files
  • Set up a Drupal 8 site to actually run migrations
  • Run the migrations
  • Confirm migration success
  • Do some migration cleanup, if applicable

Unlike prior Drupal upgrades, migrating to Drupal 8 isn't an automatic upgrade. A Drupal 7 site's configuration and content are migrated separately into a new Drupal 8 site. There are tools available to automate the creation of migration files, but if you've got a complex site that uses a lot of custom code or many contributed modules, automated tools will only get you so far. You'll need to revisit business logic and select new approaches that achieve similar results, or retire Drupal 7 contributed modules and custom code, in order to move forward to Drupal 8 and Drupal 9.
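Once migration files exist - whether generated or hand-written - Migrate Tools adds Drush commands to review, run, and roll back the work. A hedged sketch; the group and migration IDs below are typical of generated upgrades, but yours may differ, so use migrate:status to see the IDs on your site.

    # List the generated Drupal 7 migrations and their status.
    drush migrate:status --group=migrate_drupal_7

    # Run the whole group, then roll back a single migration if something
    # goes wrong. The migration ID shown is only an example.
    drush migrate:import --group=migrate_drupal_7
    drush migrate:rollback upgrade_d7_node_article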

Whether you're going to upgrade to Drupal 8 and then Drupal 9, or migrating directly from Drupal 7 to Drupal 9, these migration considerations and the process itself will be the same. The only difference would be whether the new site you migrate content into is a Drupal 8 site or a Drupal 9 site.

Upgrading from Drupal 8 to Drupal 9

If you choose to go through Drupal 8, then once you get there, finishing the move to Drupal 9 is relatively easy. Upgrade to the latest version of Drupal 8; the upgrade to Drupal 9 requires Drupal 8.8.x or 8.9.x. Along the way, you'll be notified of any deprecated code or contributed modules you'll need to update or remove before upgrading to Drupal 9. Make sure any custom code is compatible with Drupal 9, then update the core codebase to Drupal 9 and run update.php.
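In practice, that final step often looks something like the following, assuming your codebase uses the drupal/core-recommended Composer template and Drush (drush updatedb is the command-line equivalent of running update.php):

    # Update the core packages to Drupal 9 once all deprecations are resolved.
    composer require drupal/core-recommended:^9.0 \
      drupal/core-composer-scaffold:^9.0 --update-with-all-dependencies

    # Apply database updates (what update.php does) and rebuild caches.
    drush updatedb
    drush cache:rebuild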

Voila! The long upgrade process is complete. 
