Feb 16 2024

Looking for a way to distinguish yourself among the many Drupal developers out there? Look no further than the Acquia Triple Certification. Representing excellence across Drupal and Acquia, the Drupal Certification exams are for developers, front-end specialists, and back-end specialists who want to demonstrate their advanced expertise.

From "Drupal Grand Master" to "Triple Certified"

Since 2015, Acquia has been using the term "Drupal Grand Master" to acknowledge those who completed Drupal Certification exams for developer, front-end specialist, and back-end specialist. 

With help from the Drupal community, Acquia decided to rename this high level of qualification to "Acquia Triple Certified Drupal Expert" or the shortened version, "Triple Certified." This change was made to reflect more inclusive language. Dries Buytaert, Co-Founder and CTO of Acquia said, "The words we use matter. They make a big difference toward eliminating subtle forms of racism, sexism, and more in the technology industry." The technology industry and Drupal are making huge strides towards improving diversity and inclusion within the open source community. 

This renaming of Acquia's highest level of Drupal certification, formerly "Drupal Grand Master," was formally announced in 2022.

What Is Drupal?

Drupal is one of the leading open source Content Management Systems (CMS) and one of the largest open source communities in the world. Started back in 2000 by Dries Buytaert as a message board for college friends, it has grown into one of the most flexible, adaptable systems on the market, enabling anyone to create advanced websites and applications.

Based on Drupal’s latest stats, 1.5 million sites, 38 percent of Fortune 50 companies, and more than 23,000 developers use Drupal every day to power the sites we interact with regularly. Bounteous has been using Drupal to build websites for customers since 2009.

The Establishment of the Drupal Certification Program

Just as mentorship is ingrained into the DNA of the Drupal community, it was a driving factor in establishing the "Acquia Triple Certification," formerly "Drupal Grand Master Certification," program. Buytaert saw the need to provide a valuable distinction within the community and launched the Acquia certification program in 2014. While creating a global certification program is extremely resource-intensive, Buytaert says, "We believe that effort is worth it, given the overall positive effect on our community."

Regularly relied upon for guidance and expertise, Acquia Triple Certified Drupal Experts relish the honor to further the Drupal community. As of 2022, Bounteous has 13 Acquia Triple Certified Drupal Credentials, 8 Acquia Triple Certified Drupal Experts, and over 100 developer certifications—the most of any organization in North America outside of Acquia.

Where does Acquia fit in?

Acquia, also led by Buytaert, delivers a cloud-based digital experience platform built on Drupal that enables organizations to build experiences that scale. A preferred partner of Bounteous, Acquia empowers brands to embrace innovation and create customer moments that truly matter.

Recently named a leader by both Gartner and Forrester, Acquia is committed to facilitating certification programs allowing developers to validate and promote their Drupal skills year after year.

Acquia Certifications

Getting certified is a great way to validate and promote your Drupal skills while helping you stand out from the crowd. Acquia is working towards making it easier to access free training to gain Triple Certification, as well as other Drupal and Acquia certifications. Learn more about how to get Acquia Certifications here.

Acquia Certified Site Builder

The Acquia Certified Drupal Site Builder is a credential intended for professionals who build Drupal sites using core and contributed modules. This exam is designed to validate the skills and knowledge of a Drupal Site Builder. 

Acquia Certified Developer

The purpose of the Acquia Certified Developer exam is to validate the skills and knowledge of a Drupal developer in the areas of fundamental web concepts, site building, front-end development (theming), and back-end development (coding).

Acquia Certified Front End Specialist

The purpose of the Acquia Certified Front End Specialist exam is to validate the skills and knowledge of a Drupal developer in the area of front-end development (theming).

Acquia Certified Back End Specialist

The Acquia Certified Back End Specialist exam validates the skills and knowledge of a developer in the areas of building and implementing Drupal solutions using code (module development).

The Ultimate Triple Certified Achievement

To achieve the prestigious Acquia Triple Certification, a candidate must pass the Acquia Certified Developer, Front End Specialist, and Back End Specialist exams within a year. Each exam validates skills and knowledge in different areas, from fundamental web development to the Drupal core API. This is no easy feat! It is the highest-ranking Drupal certification available, the triathlon of development, and requires expertise in multiple areas of focus.

Why Does This Matter to Me?

Whether you're a developer, a Drupal agency, or an Acquia customer, the Acquia Triple Certifications are a trusted benchmark in the industry. The real-world scenarios included in the exams ensure a breadth of prior experience that enhances development work. Many companies even require Acquia Certification, which speaks volumes about the respect this program has established.

We recommend all Acquia Triple Certified Drupal Experts add this credential to their LinkedIn profile under certifications, or if you already have "Drupal Grand Master" listed, make sure it is updated to reflect the new and inclusive language. 

Ultimately, the Triple Certified designation gives developers an edge in the development world and provides an enormous amount of credibility in the Drupal community. While no doubt a significant undertaking, the achievement is well worth the investment. With more knowledgeable and engaged developers bettering the community, Drupal’s power increases and in turn supports the companies that leverage Drupal for their digital experiences.

Feb 16 2024

Let’s be honest, PHP is an old language, relatively speaking. It might not be as ancient as COBOL or Smalltalk, but it’s a dinosaur relative to the new generation of programming languages like Rust, Julia, or TypeScript. Due to wide adoption and decades of maintaining legacy functionality, updating PHP to include newer features and runtime improvements takes significant time and consideration. The reality is this language is not going anywhere, and there are many benefits to upgrading to PHP 8.

PHP 8 includes improvements that show a clear desire to modernize, as well as capabilities of other popular languages that developers will appreciate. Thanks to PHP 8, Drupal 10 can now use tools that will enable continued growth and enhanced performance. Upgrading to PHP 8 will be beneficial to any site running on PHP–however, as a Drupal developer, I’m particularly excited about how this will impact Drupal 10. I’ll highlight some of the benefits that apply to many sites, but especially how it may apply to Drupal.

To make the most of your PHP upgrade, learn about all of the new and exciting features of this programming language.

Why Upgrade to PHP 8?

Before discussing the features of PHP 8, it’s important to understand why you would want to make this upgrade from PHP 7 to PHP 8. The short answer is that Drupal 10 requires a PHP upgrade to satisfy requirements imposed by Symfony 6.

It’s also relevant to note that Drupal 10 specifically requires version 8.0.2, not version 8.1. For updates like Fibers and better JIT performance, be sure to use version 8.1 (as opposed to the minimum PHP version needed by Drupal 10). Conversations on this topic are still ongoing, so keep an eye out for updates as new versions of Drupal are released!

New Language Syntax

PHP 8 and 8.1 both introduced changes that can be generally split into three groups: language syntax, new functionality, and execution strategy improvements. Syntax changes give developers opportunities to write code that’s more expressive of what they are trying to do, often with fewer lines of code. 

If we can create the functionality we want with syntax we like, that not only benefits the quality of the sites we build but also helps us with the speed at which we can write code. This article will cover some of the most exciting syntax changes, but you can check out the full list of changes for PHP 8 and PHP 8.1.

Help with NULL

Accessing data from within a deeply nested structure can be a hassle. Values at keys within a PHP array or object can be NULL, and they need to be checked for this case since it can cause runtime errors if not properly addressed. The isset function has helped with this for years, but with the prevalence of object-oriented programming in modern PHP, lots of data access is done via class methods, and those do not play well inside of an isset call.

PHP 8 has added a popular solution to this problem with the nullsafe operator: ?-> (See Figure 1). Many other languages already have this operator, and its addition to PHP will allow us to access data more efficiently.

$data = new stdClass();
$data->foo = NULL;

var_dump($data->foo);
// result: NULL

var_dump($data->foo?->stuff);
// result: NULL (the chain short-circuits, no warning)

var_dump($data->foo->stuff);
// result: NULL, plus an "Attempt to read property on null" warning; a method call on NULL would be a fatal error

Figure 1: The Nullsafe Operator
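
The nullsafe operator pays off most on method chains, where isset cannot be used at all. Here is a small sketch of my own to illustrate; the Customer and Address classes are made up for this example and are not part of Figure 1.

class Address {
    public function getCity(): string {
        return 'Chicago';
    }
}

class Customer {
    public function __construct(
        private ?Address $address = NULL
    ) {}

    public function getAddress(): ?Address {
        return $this->address;
    }
}

// No address on file: the chain short-circuits instead of throwing
// "Call to a member function getCity() on null".
var_dump((new Customer())->getAddress()?->getCity());
// result: NULL

var_dump((new Customer(new Address()))->getAddress()?->getCity());
// result: 'Chicago'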

PHP also added special type operators for making custom types that allow for more specificity in type declarations. One such operator is the union type operator (See Figure 2). In the spirit of taking inspiration from others, this syntax mirrors union types in languages like TypeScript. Thanks to this update, a function or class method can now explicitly state in its declaration that an argument may be exactly one of two or more types.

NULL can be included alongside other types in a union type declaration, and when NULL isn't included, it cannot be passed as a value for that argument. All of this helps developers better control cases with NULL values and reduces runtime errors and bugs.

function func(int|string $foo) {
    var_dump($foo);
}

func(5);
// result: 5

func('bar');
// result: 'bar'

func(NULL);
// result: TypeError (NULL is not part of the union)

Figure 2: Union Type Operators
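
As a small variation of my own on Figure 2, adding null to the union makes the same call legal:

function funcNullable(int|string|null $foo) {
    var_dump($foo);
}

funcNullable(NULL);
// result: NULL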

Match Statements

Switch statements have been a staple to PHP and several other languages for decades. PHP 8 includes an improved version of this age-old control structure: the match expression (See Figure 3). 

Match expressions function similarly to switch statements, but they differ in a significant way: a match is an expression that resolves to a value, rather than a control structure that runs blocks of code. Because of this, a variable can now be assigned directly from the result of a match expression at the point where it is declared.

$bar = 8.0;

$foo = match($bar) {
    8 => 'int 8',
    8.0 => 'float 8',
    '8.0' => 'string'
};

var_dump($foo);
// result: float 8

Figure 3: Match Statements

Pattern matching is an integral concept in many strongly typed languages, such as Rust, Haskell, and OCaml. The addition of match statements to PHP indicates the language is taking inspiration from other languages when making improvements. Though PHP match expressions aren't quite as powerful as similar syntax in other languages, this is still a huge leap forward in modernizing PHP. 

The fact that they can return a value and don't need break statements should eliminate many unnecessary lines of code and allow for better readability. With switch statements, the default case is optional, and forgetting it can silently let unhandled cases slip through. With match expressions, an UnhandledMatchError is thrown at runtime if no arm matches and no default arm is provided, naturally encouraging more defensive code.
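
As a quick sketch of that behavior (my own example, not from the PHP documentation):

$foo = match(3) {
    1 => 'one',
    2 => 'two',
};
// result: UnhandledMatchError, because no arm matches and no default arm exists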

Pure Intersection Types

Pure intersection types were added in PHP 8.1 and they provide an interesting way of strengthening the object-oriented type system in PHP (See Figure 4). When an object that is being passed to a function needs to implement two or more specific interfaces, a pure type intersection operator can be used to combine those interfaces into one type. 

Instead of having to create a new interface in a separate file, intersection types can be easily created when needed. Having more opportunities to create new types allows developers to be more expressive with their code and improves legibility, which can save time and effort over the long run.

class Bar implements Stringable, Countable {
    function __toString() {
        return 'this is an object';
    }
    
    function count(): int {
        return 0;
    }
}

class Foo {
    public function __construct(
        public Stringable&Countable $bar
    ) {}
}

$foo = new Foo(new Bar());

var_dump('toString: ' . $foo->bar);
// result: 'toString: this is an object'

var_dump('count: ' . count($foo->bar));
// result: 'count: 0'

Figure 4: Pure Intersection Types

New Functionality

Aside from syntax, new functionality has been introduced in PHP 8 and 8.1 that provides an opportunity for Drupal to improve drastically. Namely, fibers and class attributes present new ways to enhance the underlying code directly.

Fibers

Another way that PHP has differed from several other server-side languages is its lack of native support for managing concurrency or suspending the execution of code. Callbacks can allow developers to write asynchronous code, but there is no API enforced by the language for this functionality; it's managed entirely by other packages. The addition of the Fiber class in PHP 8.1 opens a new world of non-blocking code possibilities.

In the Drupal community, issue threads are getting traction as developers advocate for using fibers to make time-consuming tasks like cache rebuilds, queue runners, and automated cron runs more performant and easier to manage.

Despite some exciting use cases for fibers, we aren't likely to see them used extensively in the everyday development of Drupal. Even the PHP documentation states, "Libraries will generally build further abstractions around Fibers, so there's no need to interact with them directly." Fibers were introduced in 8.1, so you might not see them in Drupal core until 8.1 becomes the minimum required version. PHP is striving to make up for lost time with fibers, and we can expect to see revolutionary architectures in future applications thanks to their introduction in PHP 8.1.
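
For the curious, here is a minimal sketch of my own showing the raw Fiber API (not how Drupal will necessarily use it): a fiber runs until it suspends itself with a value, the calling code does other work, and then resumes it.

$fiber = new Fiber(function (): void {
    // Pause here and hand a value back to the caller.
    $value = Fiber::suspend('paused');
    echo "Fiber resumed with: $value\n";
});

$suspended = $fiber->start();
echo "Fiber suspended with: $suspended\n";
// ... the caller can do other work while the fiber is suspended ...
$fiber->resume('go');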

Class Attributes

A huge improvement in Drupal 8 was the Plugin API, which includes a discovery system for finding plugin classes. PHP 8 class attributes can make this process even better. Several types of plugin discovery exist, such as Annotated Class Discovery. This discovery type makes use of a class annotation in a comment before a class definition. 

Since class metadata is inside of a comment, a separate library, Doctrine Annotations, is needed to parse and use class annotations. With class attributes, this functionality is now a native part of the language. Class attributes allow for much more flexibility in specifying metadata for a class, without the use of a third-party library.

Additionally, they can be used for tasks other than plugin discovery, since attributes can be placed on class methods as well. When used in that context, attributes can specify route handlers, event subscribers, and a whole range of other Drupal functionality. Other object-oriented languages like Java, C#, and even JavaScript have versions of class attributes. The addition of class attributes to PHP 8 is further proof that the language is striving to modernize and provide developers with tools to build better systems.
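
As a rough illustration of the mechanics (the Block attribute class and the names below are hypothetical, not Drupal's actual plugin metadata), an attribute is just a class whose data can be read back through the native Reflection API, with no annotation-parsing library involved:

#[Attribute(Attribute::TARGET_CLASS)]
class Block {
    public function __construct(
        public string $id,
        public string $label
    ) {}
}

#[Block(id: 'hello_block', label: 'Hello block')]
class HelloBlock {}

// Discovery code can read the metadata natively.
$reflection = new ReflectionClass(HelloBlock::class);
foreach ($reflection->getAttributes(Block::class) as $attribute) {
    $block = $attribute->newInstance();
    var_dump($block->id);
    // result: 'hello_block'
}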

JIT Compilation

The maintainers of PHP might have added all kinds of new and cool syntax and functionality, but underneath it all, the execution process was still the same: language parsing, AST creation, OPcode transpilation, OPcache, and execution inside a VM. Little had previously been done to enhance the step between the traditional Zend VM and the CPU, but with PHP 8 a new door has been opened.

As hardware has gotten exponentially more powerful over time, compilation has gotten quicker and quicker. Because of this, Lua, Python, JavaScript, and other interpreted languages have made use of a Just-In-Time (JIT) compilation step that happens after a script has been queued for execution and before it is actually executed.

JIT compilers offer huge speed boosts, since the optimization strategies that have long been part of traditional compilers can be applied to scripting languages as well. PHP 8 added a JIT compiler to the language, which will allow it to make huge strides in code performance in the same way that many other popular languages already benefit.
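
As a practical note, the JIT is switched on through OPcache settings in php.ini. A minimal sketch, with an illustrative buffer size and mode (tune these for your own environment):

opcache.enable=1
opcache.enable_cli=1
; The JIT stays off while the code buffer is 0.
opcache.jit_buffer_size=100M
; "tracing" is the most aggressive mode; "function" is a simpler alternative.
opcache.jit=tracing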

While PHP will become much more performant with version 8, the web frameworks that use PHP won't see the same kind of speed boost. The limiting factors to server-side processing speed in Drupal will still be the same as they were before: database queries, bootstrapping on request, and other architectural constraints.

Benchmarks against other similar frameworks have shown only a 3-5 percent increase in page render speed, so the same can be safely assumed for Drupal. For functionality requiring complex processing that is written purely in PHP though, the JIT compiler will make a huge difference. Image processing and data analysis can now be reasonably implemented in PHP thanks to the new JIT compiler.

Switching to PHP 8

As always, teams will need to rigorously test their sites with new versions of PHP locally, as well as in higher-level environments afterward. Switching to PHP 8 is possible today under Drupal 9 and is fairly easy to do locally, especially if you’re using Lando or Drupal VM. Configuring the PHP version used by an environment in Acquia Cloud is also a relatively straightforward task.

Whether it’s using new syntax, experiencing the speed boosts of the JIT compiler, or trying out new features of the language, PHP 8 has many improvements for developers to be excited about. PHP is continually improving, and keeping up with its growth will make for a better Drupal experience, which in turn will make the life of developers easier as well. I recommend upgrading to PHP 8 and giving all of these new and modern features a try as soon as you can!

Feb 16 2024

It’s been ten years since the release of Drupal 7. In that time, building experiences for the web and digital marketing needs have grown more complex. The Drupal platform has had to evolve to keep pace with market trends and other content management solutions. This concept of an evolving platform aligns perfectly with "Embrace Change," which is Principle 7 from Drupal’s Values and Principles. There’s a long-shared saying in the community: "The drop is always moving." Drupal (the drop) needs to grow to remain competitive, and developers need to embrace these changes to take advantage of the latest features and functionality.

When it comes to embracing the latest Drupal offerings, as a community we still have a lot of work to do. Of the over 950K sites reported on the Drupal Usage Statistics page for November 21, 2021, roughly 520K of those websites are running some version of Drupal 7. Drupal 8 support ended November 2, 2021, and after an extension due to COVID, Drupal 7 support will end in November of 2023. This means for organizations running any version older than Drupal 9, now is the time to be planning the next steps.

Why Migrate to Drupal 9?

Before jumping into the process of planning and executing a migration, it’s important to understand the value of this effort to your business. This is the perfect time for an organization to evaluate if it makes sense to continue investing in Drupal and understand what they’ll receive from their investment. There are a number of compelling reasons to invest in moving to Drupal 9, but here are some highlights!

Drupal Is API-First

The API-First initiative, introduced in Drupal 8, simplified using REST APIs to pull content out of Drupal and into other systems. This means fully decoupling or progressively decoupling is easier in modern Drupal. Additionally, as composable architectures grow in popularity, Drupal enables developers to quickly surface REST endpoints as JSON in core or GraphQL via contributed modules. 

It’s also worth noting the Decoupled Menus initiative aims to prove the model for integrating JavaScript libraries with the RESTful Web Services API. The goals of this initiative are to make the JavaScript developer experience and the process of building decoupled experiences in Drupal rival competing content management platforms.

Drupal’s Continuous Innovation Model

Once you’ve migrated to Drupal 9, future major upgrades (like Drupal 10) will feel more like minor releases. This is due to the introduction of a continuous innovation model that aims to shorten release cycles, add new features and APIs, and simplify the upgrade path. This development model reduces the total cost of ownership and allows organizations to spend less time maintaining the platform and more time investing in features that will grow the business.

More in Core With Drupal

Panels in Drupal 7 have been replaced with Layout Builder. This enables site administrators to visually configure content in Drupal by node or content type. Acquia customers can take this a step further by using the Site Studio low-code page building tool. Other enhancements include a native media library that enables content authors to upload and reuse media assets, a modernized administrative theme, improved multilingual support, enhanced accessibility features which comply with WCAG 2.0 and ATAG 2.0, customizable workflow tools, and robust configuration management. 

Where to Start With Your Migration?

Before any code is written or migrations are run, the first step to any successful platform upgrade should be putting together a plan and defining what success looks like. Many organizations will use these projects as an opportunity to make other enhancements to the website. For example, there may be content on your existing site that is out of date or doesn’t receive much traffic based on your analytics. You may manage a taxonomy structure that has grown unwieldy or content types that can be consolidated or removed altogether. In short, we want to ensure any incorrect assumptions, bad habits, or abandoned features don’t make their way into the new experience. 

If it’s been a few years since any visual updates have been made, this could also be a great opportunity to update the look and feel of the website. Moving from PHPTemplate to Twig and the inclusion of Layout Builder in core open up new theming possibilities for developers. This may also be the right time to modernize the look of your site and revisit the information architecture while simultaneously updating the theme layer. As mobile traffic has increased, optimizing for a mobile-first approach that prioritizes performance, responsive assets, and serving the best experience regardless of the device should be part of the implementation plan.

Before jumping in, it’s important to understand if there’s a migration path for any modules in use and identify alternative approaches where there are gaps. This process typically starts with taking inventory of the existing modules and content, verifying and documenting existing functionality, and identifying deprecated code. Though this can be done manually, the Migrate Accelerate tool can significantly streamline this effort for organizations using the Acquia platform. The Acquia solution will recommend Drupal 9 modules and patches based on your Drupal 7 codebase and provides tooling that helps automate the content migration process. There are also helpful solutions like drupal-check, a static PHP analysis tool, and the Upgrade Status module.

The outcomes of this evaluation may reveal that an automated migration may not be the best approach. If the content, information architecture, and designs are changing significantly, it may make more sense to rebuild your site in Drupal 9 instead of porting over lots of content that is no longer relevant. The best approach will vary, but it's important to identify the unique needs of your organization before jumping straight into migrating data.

Migrating to Drupal 9

Due to underlying architecture changes in Drupal 8 and 9, more work is required to migrate a Drupal 7 site. Moving will require migrating both content and configuration to a clean Drupal 9 website. Drupal 9 also uses Composer for PHP package management, which is a shift from managing projects in Drupal 7. You’ll want to be mindful to structure your Composer files correctly, but there are some helpful existing Composer project templates like the Drupal Recommended Project, opinionated build tools via Acquia BLT, and the Acquia CMS project for Acquia customers.

There are two approaches to migrating from Drupal 7 to 9: using the migration UI or using Drush. The migration UI assumes you want to migrate everything and first requires enabling the core migration modules and any other required modules needed in the new experience. This includes installing and enabling the Migrate Upgrade, Migrate Plus, and Migrate Tools contributed modules.

To proceed with the UI approach, import the D7 database, visit the /upgrade path, enter your credentials, and follow the on-screen instructions to run the migration. The core Migrate modules support most nodes, taxonomy terms, fields, users, and user roles. However, themes, custom modules, and views without a contrib migration path will need to be rebuilt. This also assumes the contributed modules used in Drupal 7 have a migration path to Drupal 9. Any modules without a clear upgrade path will require manual or custom migration.

For developers looking for greater configuration options, using Drush provides more flexibility to pick and choose your migrations. After importing the D7 database, you can run the following Drush command to get a list of the available migrations:

drush migrate:upgrade --legacy-db-url=mysql://root:root@localhost/drupal7db --legacy-root=drupal7site/docroot --configure-only

From there, you can run drush migrate:import (or drush mim) for the individual migrations you’d like to bring over to the Drupal 9 website. For example:

drush mim d7_user

Lastly, there is a lot of helpful documentation on this process on Drupal.org. There is also a running list of known issues which may be a helpful resource for troubleshooting migration issues. Depending on the amount of content that needs to be migrated and the complexity of the Drupal 7 site, developers should expect some level of trial and error to ensure all intended content was migrated successfully. Stripping down the Drupal 7 site to the bare essentials before starting this process can help greatly streamline this effort. 

The Drupal Drop will Keep Moving

The Drupal project celebrated its 20th anniversary in January 2021. The web has changed significantly over that time as we’ve seen internet fads come and go. However, it’s a testament to the Drupal project leads and the developer community for continuing to push the platform forward so that it remains relevant today. 

There are some exciting initiatives planned, including making Drupal easier for new users with the Easy out-of-the-box initiative, making it easier to find and install relevant modules with Project Browser, and simplifying the Drupal contribution process with GitLab Acceleration. For any organization evaluating a migration from Drupal 7, this should be the last significant upgrade needed to take advantage of all the new and exciting features now available in modern Drupal.
 

Feb 16 2024

As the Drupal landscape continues to evolve, we have seen some great advancements in how we build and maintain Drupal websites. From the powerhouse of DrupalVM with Vagrant to the agile world of Lando and Docker, each solution has its own strengths—as well as some tradeoffs that can require some level of technical expertise to navigate. Here at Bounteous, we typically use a Sprint 0 to define and implement the toolset that will be used for the project. During this time, we evaluate the needs of the project, as well as any client requirements that may dictate one solution over another.

As we evaluate these options we consider things like maintainability and flexibility. In some cases, client restrictions prevent us from running any type of virtualization. With any solution, there will be some amount of overhead as we onboard developers and support ongoing development. When faced with virtualization restrictions, a natural solution is to create a custom LAMP or MAMP stack on each developer’s machine. This is often costly to maintain and requires a high investment to get started. Fortunately, there is a new option available, designed to give developers the flexibility of a virtualized environment without the technical cost or restrictions.

Enter Acquia Cloud IDE—a web-based development environment designed for Drupal. Depending on your Acquia subscription, your development team will be allocated a specific number of environments that can be created and destroyed on demand. When you create a new IDE, Acquia provides a container-based environment on cloud infrastructure that matches a standard Acquia Cloud server. Each IDE comes allocated with 4GB of memory, two CPUs, and 60GB of disk space. It also comes preinstalled with all the development tools you need for high-grade Drupal development—including Composer, Drush, npm, and even ChromeDriver for running automated tests.

Once the environment has been provisioned, a developer is given a link to open their IDE and begin working. The IDE itself is based on Theia, an open-source web-based IDE developed by the Eclipse Foundation. The project is extremely active, with over 5000 commits, 90 pull requests, and five releases this year alone. These are all great signals of a strong project with great backing. The IDE itself is modular, highly customizable, and feels a lot like Visual Studio Code. It even allows you to load some of the Visual Studio Code extensions you are familiar with (in .vsix form). Though you likely won’t need to add anything, Acquia does a good job of preconfiguring the IDE with all the things you need for a Drupal build.

On the first launch of your IDE, you are greeted with a “getting started” page that gives you multiple one-click setup buttons to make your life easier. The first button creates an SSH key in the IDE environment and associates the key with your Acquia account. This key can also be manually added to any external repositories such as Bitbucket or GitHub. Once your key has been added, you can use a second button to clone your Drupal codebase and database directly from your Acquia subscription. From here you have a fully functional, personal development environment and you are ready to write some code! The IDE gives you direct access to the CLI, and even provides a command to enable Xdebug.

Thoughts and Impressions

It’s pretty easy (and warranted) to be skeptical of a solution like this for a development environment. New IDEs and environments mean new ways of working and ultimately friction when it comes to accomplishing work. It’s extremely important for developers to have complete control over their development experience in order to maximize efficiency. I personally rely heavily on keybindings and shortcut commands to deal with some of the limitations my disability creates.

All things considered, I’ve found transitioning to cloud IDE to be mostly painless. I have been able to customize various shortcuts to match the keybindings I typically use (Shoutout to my dotfiles). I do miss some of the more intelligent features PHPStorm offers me like comment generation and fast file switching, but there is nothing that prevents me from writing solid code.

Skepticism included, I am extremely excited about this product and wish I could use it on some of my personal projects. The ability to have an on-demand development environment that takes little to no setup time is a game-changer. Even as someone with deep Unix and DevOps knowledge, I constantly find myself in battles with getting local environments to "just work." As much as I enjoy investigating and resolving issues that come up with Lando or Drupal VM, I’d much rather spend my time building amazing websites. Acquia Cloud IDE makes that possible.

Opportunities for Improvement

No solution is perfect, and running an IDE in a browser runs into some pretty expected problems. Some of the more complicated tasks like file browsing and terminal interaction require communication with the server. This can cause problems like console input lag or slow/finicky code completion. These problems can be exacerbated by unstable internet connections, leading to warranted frustration. This has been a pretty serious and ongoing issue that Acquia is working on fixing.

The other part I wish I had more control over is the setup process. While the IDE makes it easy to clone code directly from your Acquia environment, it doesn't easily allow you to clone from a Bitbucket or GitHub repository. It would be great if there were some way to customize the welcome screen and button actions. My goal is to onboard developers as quickly and painlessly as possible, and it would be nice to have greater control over this process.

Final Thoughts

Acquia Cloud IDE has amazing potential and has already proven to be a valuable tool. We are a couple of weeks into using it on one of my clients, and we were able to successfully onboard 12 developers—for many of whom it was their first Drupal project. The ease of onboarding and consistency of environments across both Windows and OSX was a breath of fresh air. I’m excited to continue to use this tool on future projects!

Feb 16 2024

Leading agency delivering transformative digital experiences on the Acquia Digital Experience Platform

Chicago — October 27, 2021 — Bounteous, a leading insights-driven digital experience consultancy, today announced its elevated status as Acquia Global Partner and also received recognition as one of Acquia’s first Practice Certified Partners. Bounteous is being honored as a top-performing partner following a year of record growth and transformational digital experience implementations.

The Acquia Partner Program recognizes companies committed to the highest standards of technical delivery on the Acquia Open Digital Experience Platform (DXP). Partners must not only make significant investments in training and business development, but also deliver high-quality implementations and contribute to Acquia Open DXP. Acquia's Global Level Partnership includes only their top dozen partners and is a highly distinguished group of leading digital agencies.

"This recognition marks our team's incredible growth and deep understanding of product value and familiarity with Acquia Drupal Cloud," said Seth Dobbs, CTO at Bounteous. "Our partnership with Acquia is the epitome of co-innovation. As partners, we believe that we're stronger together and we're thrilled to receive this outstanding achievement."

Bounteous worked closely with Acquia throughout the year to surface opportunities for growth—from product advancements and early adopter programs to joint prospecting, marketing, and deal partnering. These qualifications in addition to the team's ability to deliver transformative digital experiences on the Acquia DXP have earned the organization this distinct qualification.

"We're delighted to be able to award this recognition to Bounteous," said Peter Ford, VP of Global Channels, Partners & Alliances at Acquia. "Partnering with Bounteous to drive unmatched capabilities and digital transformation for our clients helps solidify our standing as the top digital experience platform in the industry. As part of the Acquia Partner Advisory Board, Bounteous continually contributes to new product testing and helps inform the products that makeup both the Acquia Marketing Cloud and Acquia Drupal Cloud. As we look to grow our offerings over time, we're excited to be able to count on partners like Bounteous to help us reach our goals."

About Bounteous

Founded in 2003 in Chicago, Bounteous is a leading digital experience consultancy that co-innovates with the world's most ambitious brands to create transformative digital experiences. With services in Strategy, Experience Design, Technology, Analytics, and Marketing, Bounteous elevates brand experiences and drives superior client outcomes. For more information, please visit www.bounteous.com. For more information about co-innovation, download the Co-Innovation Manifesto at co-innovation.com

For the most up-to-date news, follow Bounteous on Twitter, LinkedIn, Facebook, and Instagram

About Acquia

Acquia empowers the world’s most ambitious brands to create digital customer experiences that matter. With open source Drupal at its core, the Acquia Digital Experience Platform (DXP) enables marketers, developers and IT operations teams at thousands of global organizations to rapidly compose and deploy digital products and services that engage customers, enhance conversions and help businesses stand out. Learn more at https://www.acquia.com.

Feb 16 2024

Drupal entered the market as a small content management framework in 2001. First created as a college project by Dries Buytaert to stay in touch with friends, the platform has become a flexible, enterprise-grade solution that powers some of the largest organizations.

I've been part of the Drupal community for over ten years, and it's been amazing to watch the growth of the platform, the vibrant community, and its adoption in the market. This open, API-first solution offers many features out-of-the-box, has a large and active community, and over 45,000 modules that add functionality to the core platform enabling teams to build digital solutions of any size.

Though many are familiar with Drupal, we often hear questions about where Acquia fits into the picture. If Drupal is an open-source platform that anyone can freely use, why do we need Acquia? What benefits does Acquia bring to the table that are not already freely available in the core Drupal platform? Why should we consider Acquia if we're already hosting Drupal internally?

Let's take a closer look at some of the solutions available through Acquia and why Acquia and Drupal together may be the right solution for your digital experience platform (DXP).

Enterprise Hosting Tuned for Drupal

Acquia is probably best known for its enterprise Drupal hosting capabilities. Acquia's high-availability hosting solution is tuned for Drupal and offers a scalable solution for hosting mission-critical websites on the Drupal platform. By default, a three-tier Drupal environment (development, staging, and production) with a drag-and-drop UI is included. Other features like search via Apache Solr, multiple caching layers, a global content delivery network, and Kubernetes-native environments are also available.

For organizations managing tens, hundreds, or thousands of Drupal instances, Site Factory is a powerful solution to build and govern those properties in a unified interface. There are also accelerators to migrate from other platforms or old versions of Drupal, a cloud-native development environment to onboard developers quickly, and CI/CD workflows available with Acquia Pipelines. All of this infrastructure is built on top of the scalable and battle-tested Amazon Web Services platform.

Acquia Content Management System

Drupal has configuration options that allow developers to take the core platform, compile the best community modules, and package those settings into an exportable distribution. Acquia CMS is a distribution of Drupal, which includes the best community modules curated by Acquia to enable teams to launch quickly on their platform.

In addition, Acquia CMS acts as an accelerator to remove some of the repetitive tasks development teams require on every project. Instead, that setup time can be repurposed and invested in building the platform to address business requirements.

Low-Code Page Building

Site Studio is a robust low-code page building solution built specifically for Drupal. Marketing teams can drag and drop components onto the Site Studio layout canvas and build out landing pages in Drupal without development or IT resources. These responsive components are highly configurable and can integrate with core Drupal features like Blocks and Views.

Developers can also create custom Site Studio components in other frontend solutions like React. After the Site Studio components have been defined, marketers and site builders can reuse these elements across the experience to create dynamic Drupal landing pages with localization.

Additionally, teams can utilize Acquia's UI Kit, which provides over 70 pre-built components to help accelerate the page building process. Bounteous is actively using Site Studio with about half a dozen clients, we've written about this product on our blog, and have even highlighted how it enabled our team to launch a new website in a little over a week.

Personalization at Scale

While there are many personalization solutions in the market, Acquia Personalization (formerly Acquia Lift) is a no-code solution built to work with Drupal. Acquia Personalization lets teams pull content from Drupal and non-Drupal platforms via APIs to personalize your customers' frontend experience and use data from a unified profile to drive engagement.

Real-time segments can be created based on geography, device, source, past behavior, content interest, or any combination of segmentation criteria. Data can be personalized to the individual with A/B testing, refined over time, and integrated with any system with an API. Once configured, personalization programs can be launched in a no-code personalization UI without the need for IT support.

An Open Customer Data Platform

Acquia added its CDP offering in late 2019 with the acquisition of AgilOne. They have since renamed this solution to Acquia CDP, overhauled the UI to match other Acquia tools, and added forward-thinking features like machine learning models to the product.

Acquia CDP can be used to create a 360 profile with identity resolution to understand the actions and behaviors of every customer. Data can be consumed from any platform with an API, and custom dashboards can be made in their application or exported to other BI tools. In addition, innovative new features like machine learning fuzzy clustering can intelligently add each customer into multiple segments.

True Composable Architectures

While there are many tools available in the Acquia stack, there are thousands of MarTech solutions in the market. One of the most compelling features of the Drupal and Acquia offering is you're not bound by the tools only available in this ecosystem. For example, composable commerce solutions using the Acquia Commerce Framework can be utilized to manage landing pages and product data in Drupal, commerce in Elastic Path or Commerce Tools, and AI-driven product recommendations in Lucidworks.

This is just one example, but there is a lot of flexibility to compile the best solutions and pass data via APIs versus being locked into one platform. It's also worth noting we touched on the topic of creating Drupal platforms with interchangeable services in a recent webinar with Preston So. Picking the best tools for the job with Drupal and Acquia at the center of the stack opens the door to create flexible, feature-rich solutions using best-of-breed tools.

Why Acquia?

Is Acquia required to run Drupal? No. Our responsibility as consultants at Bounteous is to understand the needs of the client and make informed recommendations. As we've seen, there are many features available in the Drupal and Acquia offering, but that doesn't mean it's the right solution for every project.

That said, you can see the true power of Drupal as a platform when you combine it with the robust features and functionality available in the suite of Acquia products. With so many MarTech solutions out there, having products tuned to work with Drupal enables teams to get up and running quickly or pivot as the needs of the organization change.

If we've learned anything from this pandemic, it's that we need digital solutions that are as flexible as they are powerful. A recent customer experience study states 94 percent of organizations have changed their digital CX strategies in the past 18 months. Businesses require the capabilities to launch quickly, pivot gracefully, integrate seamlessly, and scale globally. Acquia is the leading solution for organizations to have those capabilities with Drupal and build their own open digital experience platforms.

In 2021, Acquia acquired Widen, a provider of digital asset management and product information management software. With Widen, Acquia extends its leadership in managing web applications. Widen solutions will make it simple for Acquia DXP users to add brand and product content to any digital customer experience and manage it at scale. The company's focus on managing content and the marketing workflow surrounding rich media and product information complements Acquia's and makes Widen an ideal fit.

We look forward to seeing how Acquia continues to invest in customers and the Acquia Open DXP.

Feb 16 2024

Headless websites have taken the industry by storm, promising to deliver unique brand experiences that enable customer loyalty. Using a headless approach for your project allows you to combine technologies that would normally be siloed due to language or server constraints.

Typically when we talk about a headless Drupal architecture, we are referring to using Drupal for its strength as a content management system (CMS), but using a framework like React or Vue to drive the frontend. This separation of concerns allows your teams to focus on using the tools they know best—ultimately delivering a better product.

The single most important metric in commerce implementations is response time. A website’s overall responsiveness can directly affect the conversion rate and the bottom line. According to a study by Portent in 2019, "The highest e-commerce conversion rates occur between 0 and 2 seconds, spanning an average of 8.11% e-commerce conversion rate at less than 1 second, down to a 2.20% e-commerce conversion rate at a 5 second load time." Let's explore why traditional commerce implementations are so slow and why headless might just be the solution.

Why Use Drupal as a Commerce Platform?

Before we consider the frontend, we need a robust, secure backend platform to deliver our data and business logic. One of the many reasons Drupal is a great candidate for headless, or really any CMS build, is its inherent flexibility and security. Drupal's fieldable entities mean you can structure your CMS to fit your data. Drupal is regularly screened for vulnerabilities and has a robust process to identify and fix security issues. This is especially important in commerce implementations where proprietary data is often pulled from a Product Information Management (PIM) system like Akeneo.

Drupal's true power comes in the form of a massive library of community-contributed and maintained modules. A great example of this is the Drupal Commerce suite maintained by Centarro. Drupal Commerce out-of-the-box provides a robust set of entities and plugins that provide a complete commerce experience. Commerce can be further customized by Contrib modules that provide everything from payment processors (like Stripe or Paypal) to shipping integrations (like UPS or FedEx). Community-contributed modules are the cornerstone of the Drupal platform, and the projects we build make them possible.


Why Is Traditional Commerce Slow?

In a traditional Magento or Drupal Commerce implementation, we often create frontend markup on the backend before delivering the page to the user. As we generate this markup, we make calls to various APIs like shipping rate calculators. Once we have this complete HTML document, we send it to the browser. The browser then parses this markup and scans it for additional documents like CSS, JavaScript, and images. Once it gathers all of this data, it turns it into an interactable web page. All of these things push the average page load to roughly 7 seconds on desktop. That's quite a gap from our target of under 2 seconds.

To alleviate the work the backend has to do to render a page, we've come up with some pretty clever tricks. One example is using Edge Side Includes (ESIs). ESIs work by loading the majority of page content from cache, then replacing specific placeholders with dynamic calls to the server. Since the server doesn't need to render the complete markup, we can often achieve faster load times. Drupal Core offers BigPipe, a module that similarly renders the majority of a page from cache, then replaces placeholders with dynamic content. Oftentimes these solutions come with high complexity and frequently cause problems related to caching. They also don't work for content that is highly dynamic like category pages with facets and filters.

How Does Headless Help?

When we implement a headless website, we can think of the frontend as less like a web page and more as an application. A properly designed React (or other JS frontend framework) app can be lightweight and heavily cacheable. On initial page load, we load our entire application into working memory. This means that as a user navigates through the site, they are actually interacting with a single-page application that does not require a page reload to show new content.

The reason we can get away with not reloading the page is that data can be asynchronously fed to the frontend. This means that as a user is browsing the site we can preload resources like images and linked pages. When we can run expensive operations independently of a user's browsing experience, we can make website response times appear to be instantaneous—or more within our targeted 0-2 second range.

How Do We Get There?

On the backend, you still need a robust, secure CMS to feed data to the frontend and handle complex or session actions (add to cart/checkout validation). This is where Drupal is an easy choice. One of the easiest ways to feed data to a frontend is via JSON data. Drupal Core provides the JSON:API module which allows you to easily expose your content as filterable JSON objects. This means you can leverage the strength of the Drupal Community while giving your frontend room to prefetch and asynchronously validate data.
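
As a hedged illustration (the product bundle and field names here are hypothetical, not from any specific project), once JSON:API is enabled a frontend can request filtered collections of content with plain GET requests such as:

GET /jsonapi/node/product?filter[status]=1
GET /jsonapi/node/product?filter[field_category.name]=shoes&include=field_image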

Building a world-class commerce website necessitates a world-class toolset, but even more so a world-class team. Drupal has proven to be a reliable CMS capable of delivering highly custom experiences. When this is paired with a well-built frontend, load times become instantaneous, and conversion rates increase!

Feb 16 2024

Well, that was exciting! Releasing an enterprise-level Drupal Commerce solution into the wild is a great opportunity to take a moment to reflect: How on earth did we pull that off? This client had multiple stores, multiple product offerings, each with its own requirements for shopping, ordering, payment, and fulfillment flow. And some pretty specific ideas about how the User Experience (UX) was to unfold.

Drupal Commerce offers many possible avenues into the world of customization; here are a few we followed.

Can't I Just Config My Way Out of This?

Yes! But no, probably not. Yes, you should absolutely set up a Proof-of-Concept build using just the tools and configurations at your disposal in the admin user interface (UI). How close did you get? Does your implementation need just a couple of custom fields and some theming, or will it need a ground-up approach? This will help you make more informed estimations of the level of effort and number of story points.

Bundle Up

The Drupal Commerce ecosystem, much like Drupal as a whole, is populated by Entities—fieldable and categorizable into types, or bundles. Think about your particular situation and make use of these categorizations if you can.

Separate your physical and digital products, or your hard goods and textiles. Distinct bundles give you independent fieldsets that you can group with view_displays.

Order Types (admin/commerce/config/order-types/default/edit/fields) are the main organizing principle here: if you have a category of unpaid reservations vs. fully paid orders—that sounds like two separate order_types and two separate checkout flows. Softgoods and hardgoods are tracked for fulfillment in two separate third-party systems? Separate bundles. Keep in mind, though, that a Drupal order is an entity and is a single bundle. An order can have multiple order_item types, but only a single order_type.

Order Item Types (admin/commerce/config/order-item-types/default/edit/fields) bridge the gap between products and orders. Order Item bundles include Purchased Entity, Quantity, and Unit Price by default, but different product categories may need different extra fields on the Add to Cart form.

Adding to Cart

Drupal Commerce offers a path to add Add-to-Cart forms to Product views through the Admin UI.

Drupal Commerce path to add Add-to-Cart forms

You could alter the form through the field handler, the formatter, or the template, of course, but we wanted more direct control and flexibility. We created a route with parameters for product and variation IDs—now we could put the form in a modal and reach it from a CTA placed anywhere. The route's controller, given the product variation, other route parameters, and the page context, decided which order_item_type form to present in the modal.

class PurchasableTextileModalForm extends ModalFormBase {

  use AjaxHelperTrait;

  /**
   * {@inheritdoc}
   */
  public function buildForm(array $form, FormStateInterface $form_state, Product $product = NULL, ProductVariation $variation = NULL, $order_type = 'textile', $is_edit_form = FALSE) {
    $form = parent::buildForm($form, $form_state, $product, $variation);
    ...

We extended the form from FormBase, incorporated some custom Traits, and used \Drupal\commerce_cart\Form\AddToCartForm as a model. We learned some fun lessons on the way:

  • Don't be shy when loading services—who knows what you'll wind up needing.
  • Keep in mind that the form_state's order_item is not the same as the PurchasedEntity. Fields associated with an Order Type are assigned at the form_state level, fields on an Order Item bundle are properties of the PurchasedEntity.
  • Want to check your cart to see if this particular product variation is already a line-item? \Drupal::service('commerce_cart.order_item_matcher')->match() is your friend.
  • When validating, recall again that PurchasedEntity is an Entity, which means it uses the Entity Validation API. The AvailabilityChecker comes for free; you may add custom ones simply by registering them in your_module.services.yml. Or you may want to create a custom Constraint (a sketch follows this list).
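
Here is a minimal sketch of what such a custom constraint could look like. The module name, plugin ID, and quantity rule are hypothetical, and the constraint and its validator are shown together for brevity even though they would normally live in separate files:

namespace Drupal\my_module\Plugin\Validation\Constraint;

use Symfony\Component\Validator\Constraint;
use Symfony\Component\Validator\ConstraintValidator;

/**
 * Checks that a textile order item meets a minimum quantity.
 *
 * @Constraint(
 *   id = "MinimumYardage",
 *   label = @Translation("Minimum yardage"),
 * )
 */
class MinimumYardageConstraint extends Constraint {
  public $message = 'Orders of %label require at least @minimum yards.';
  public $minimum = 5;
}

/**
 * Validates the MinimumYardage constraint.
 */
class MinimumYardageConstraintValidator extends ConstraintValidator {

  /**
   * {@inheritdoc}
   */
  public function validate($value, Constraint $constraint) {
    // Assumes the constraint was attached to the order item entity type,
    // so $value is the order item being validated.
    if ((float) $value->getQuantity() < $constraint->minimum) {
      $this->context->addViolation($constraint->message, [
        '%label' => $value->label(),
        '@minimum' => $constraint->minimum,
      ]);
    }
  }

}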

Our add-to-cart modal forms (which we reused on the cart view page for editing existing line-items) turned out to be works of art. We had vanilla JavaScript calculating totals in real time, and a service calculating complex allocation data, also in real time, triggered by Ajax. Custom widgets saved values to order_item fields which triggered custom Addon OrderProcessors.

class AddonOrderProcessor implements OrderProcessorInterface {
 /**
  * {@inheritdoc}
  */
 public function process(OrderInterface $order) {
   foreach ($order->getItems() as $order_item) {
...

Recognizing how intricate and interconnected this functionality was going to be, we committed ourselves early on to the necessity of building the forms from scratch.

Wait, What Am I Getting?

The second step of the experience: seeing how full your cart has become after an exuberant shopping session.

Out-of-the-box, Commerce offers a View display at "/cart" listing all of a user's order items, grouped by order_type.

We wanted separate pages for each order_type, so first we overrode the routing established by commerce_cart and pointed to our own controller which took the order_type as a route parameter.

class RouteSubscriber extends RouteSubscriberBase {

  /**
   * {@inheritdoc}
   */
  protected function alterRoutes(RouteCollection $collection) {
    // Override "/cart" routing.
    if ($route = $collection->get('commerce_cart.page')) {
      $route->setDefaults(array(
        '_controller' => ...

That controller passed the order_type as the display_id argument to the commerce_cart_form view, where we had built out multiple displays.
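
Inside the controller, that amounts to little more than embedding the right display for each matching cart. A rough sketch, assuming display machine names that mirror the order type IDs:

public function cartPage($order_type) {
  $build = [];
  $carts = \Drupal::service('commerce_cart.cart_provider')->getCarts();
  foreach ($carts as $cart) {
    if ($cart->bundle() === $order_type && $cart->hasItems()) {
      // Display ID mirrors the order type; the view argument is the order ID.
      $build[] = views_embed_view('commerce_cart_form', $order_type, $cart->id());
    }
  }
  return $build;
}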

We had a lot of information to show on the cart page that was not available to the Views UI. We had the results of our custom allocation service that we wanted to show in a column with other Purchased Entity information. We had add-on fees we wanted to show in the line item's subtotal column. None of this was registered as fields on an entity in Drupal; these were custom calculations.

We registered custom field handlers that we could select in the Views UI, placing them into columns of the table display and styling them with custom field templates. The render function of each field plugin has access to all of the values the view returns in its ResultRow, which is what powered our custom calculations:

$values->_relationship_entities['commerce_product_variation']->get('product_id')
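
Pulled together, a field handler along these lines (the plugin ID, allocation service, and theme hook are illustrative) let us drop a calculated value into its own cart column:

namespace Drupal\my_module\Plugin\views\field;

use Drupal\views\Plugin\views\field\FieldPluginBase;
use Drupal\views\ResultRow;

/**
 * Renders allocation data for a cart line item.
 *
 * @ViewsField("my_allocation")
 */
class AllocationField extends FieldPluginBase {

  /**
   * {@inheritdoc}
   */
  public function query() {
    // Computed value: nothing to add to the query.
  }

  /**
   * {@inheritdoc}
   */
  public function render(ResultRow $values) {
    $variation = $values->_relationship_entities['commerce_product_variation'];
    $allocation = \Drupal::service('my_module.allocation')
      ->calculate($variation->get('product_id')->target_id);
    return [
      '#theme' => 'my_allocation_cell',
      '#allocation' => $allocation,
    ];
  }

}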

Let's Transact!

The checkout flow has little customization available off the shelf through the admin pages. You can reorder the sections on each page, and the Shipping and Tax modules will automatically create panes and sections for you, but otherwise you get what you get, unless you roll your own.

A custom Checkout Flow starts with a Plugin (so watch your Annotations!) which need not do much more than define the array of steps. In our case, though, we also extended buildForm() and tucked in a fair number of alterations, both global and specific to particular checkout steps.
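
A stripped-down flow plugin can be as small as this (the plugin ID, step keys, and labels are our own; the base class supplies the pane handling and the standard payment and complete steps):

namespace Drupal\my_module\Plugin\Commerce\CheckoutFlow;

use Drupal\commerce_checkout\Plugin\Commerce\CheckoutFlow\CheckoutFlowWithPanesBase;

/**
 * @CommerceCheckoutFlow(
 *   id = "textile_checkout",
 *   label = @Translation("Textile checkout"),
 * )
 */
class TextileCheckoutFlow extends CheckoutFlowWithPanesBase {

  /**
   * {@inheritdoc}
   */
  public function getSteps() {
    return [
      'order_details' => [
        'label' => $this->t('Order details'),
        'has_sidebar' => TRUE,
      ],
      'review' => [
        'label' => $this->t('Review'),
        'has_sidebar' => TRUE,
      ],
    ] + parent::getSteps();
  }

}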

Each checkout step can have multiple panes (also plugins, annotated with @CommerceCheckoutPane), each with its own form build, validate, and submit methods.

We built custom panes for each step, using shared Traits, extending and reusing existing functionality wherever we could. With a cache clear, our custom panes were available for ordering and placement in the Checkout flow UI.
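
Each pane followed the same shape. A minimal example (the pane ID, step, and field are illustrative):

namespace Drupal\my_module\Plugin\Commerce\CheckoutPane;

use Drupal\commerce_checkout\Plugin\Commerce\CheckoutPane\CheckoutPaneBase;
use Drupal\Core\Form\FormStateInterface;

/**
 * @CommerceCheckoutPane(
 *   id = "order_reference_details",
 *   label = @Translation("Order reference details"),
 *   default_step = "order_details",
 * )
 */
class OrderReferenceDetails extends CheckoutPaneBase {

  /**
   * {@inheritdoc}
   */
  public function buildPaneForm(array $pane_form, FormStateInterface $form_state, array &$complete_form) {
    $pane_form['po_number'] = [
      '#type' => 'textfield',
      '#title' => $this->t('PO number'),
      '#default_value' => $this->order->get('field_po_number')->value,
    ];
    return $pane_form;
  }

  /**
   * {@inheritdoc}
   */
  public function submitPaneForm(array &$pane_form, FormStateInterface $form_state, array &$complete_form) {
    $values = $form_state->getValue($pane_form['#parents']);
    $this->order->set('field_po_number', $values['po_number']);
  }

}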

Manage Form Display tab in Drupal Commerce

We managed the order_type-specific fields by collecting them into form modes on the Manage form display tab in the admin UI. We could then easily load those fields by form mode in a buildPaneForm() function and render them, and we used a similar technique in the validate and submit functions.

$form_display = EntityFormDisplay::collectRenderDisplay($this->order, 'order_reference_detail_checkout');
$form_display->extractFormValues($this->order, $pane_form, $form_state);
$form_display->validateFormValues($this->order, $pane_form, $form_state);
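
The build side is the mirror image; in buildPaneForm(), the same display can render the configured widgets directly into the pane (a brief sketch):

$form_display = EntityFormDisplay::collectRenderDisplay($this->order, 'order_reference_detail_checkout');
$form_display->buildForm($this->order, $pane_form, $form_state);
return $pane_form;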

Integration Station

This project had a half-dozen incoming and outgoing integration points with outside systems, including customer info, tax and shipping calculator services, the payment gateway, and an order processing service to which the completed order was finally submitted.

Each integration was a separate and idiosyncratic adventure; it would not be terribly enlightening to relate them here. But we are quite sure that, rather than having custom functionality shoe-horned here and there in a number of hook_alters spread over the whole codebase, keeping our checkout forms tidily in individual files and classes helped the development process immeasurably.

And Finally, Ka-ching

The commerce platform space is a landscape crowded with lumbering giants. It was awfully satisfying to see Team Drupal put together a great-looking, custom solution as robust as anything from the big players, likely in less time and certainly far more tightly integrated with the content, marketing, and SEO side of things. The depth and flexibility that make Drupal such a powerful platform for content management and presentation can also be used to deeply and efficiently customize all aspects of the shopping and checkout experience with Drupal Commerce.

Feb 16 2024
Feb 16

Enterprise organizations are increasingly looking at Drupal as a reliable, open source option for developing their online presence—contributing and benefiting from the active community base and potentially taking advantage of cutting-edge, decoupled capabilities.

With over 10 years of Drupal experience and implementations ranging from small to complex, we are often asked to recommend the best approach for building a Drupal site. The answer, as with many questions, is: it depends. For some clients, the best choice is to build a traditional, coupled Drupal website. For other clients, it makes sense to build a completely decoupled solution using Drupal as the backend. And for others, the best solution is somewhere in between. Many factors determine which is the best approach for a particular client and their situation.

One important factor in deciding which approach to take is understanding the needs and skills of the people who will use and maintain the system. The two main groups to consider are the content creators and others who will work in the system daily, and the developers who will build and maintain it.

View Webinar Recording: Building Enterprise Websites with Drupal: Unleash Your Full Potential

Considering the Content Manager

For the content creators and content managers that work directly in the content management system (CMS), having an easy-to-use content admin system is key. Drupal has increasingly focused on this experience and has provided many features with this in mind. With traditional Drupal, content editors can quickly create pages leveraging the drag-and-drop capabilities of Layout Builder. Inline editing allows content editors to make quick changes to the content without diving deep into the content admin UI. And, content preview is available to review before publishing the content to the website.

All of this is also possible in a decoupled solution, but the developers must build and maintain this functionality or cobble a solution together from existing technologies. If the project requirements already require changes to Drupal's out-of-the-box functionality, building from scratch may be easier.

Considering the Development Team

The development team's skills are also an important consideration. If you have a team that has deep technical knowledge of a technology (or a desire to develop that knowledge), that can have an impact on the recommended approach. For instance, if your team has never themed a Drupal site before, but has experience with React, using a decoupled approach would fit nicely with the team's skills.

Like any framework, it takes time to learn how to theme Drupal sites. If you have a small team that is spread thin or maybe you don't have a team, a coupled approach using Layout Builder or Acquia's Site Studio could give your content editing team the flexibility it needs without requiring much help from a development team.

Considering the Digital Strategy

The overall digital strategy is an important factor to consider as well. Will the platform support a single site or is this a key piece to a multisite, multi-brand digital platform? Is Drupal the only platform involved, or is Drupal a part of a broader digital experience platform (DXP) that includes CRMs, Commerce platforms, a CDP, and other platforms? Whether working with Acquia, the open digital experience platform built on top of Drupal, or connecting into other tools—Drupal is designed to make these connections easy.

Drupal is a great platform to integrate with other platforms. Many integrations are easy to implement by installing a community module. Drupal provides a robust migration system that makes pulling data into Drupal easy to do. Drupal also makes it easy to pull data out of Drupal using REST APIs or GraphQL.

If you are only building one website with the platform and it is primarily for marketing your organization, a traditional Drupal build probably makes sense. The more systems you are integrating and the more channels you want to use the content in, the more sense it makes to build using a decoupled approach.

Considering the Requirements

The requirements are another important factor to consider. Requirements help us define the solution. Just as important as the requirements are how flexible the requirements are. Drupal provides lots of functionality out of the box. When you add the availability of more than 40,000 free community-contributed modules, Drupal can meet many requirements with very little effort.

As you define the requirements, you should compare them to what Drupal can do. And where there is a module that meets most, but not all, of the requirements, decide whether it is possible to change the requirements. The more Drupal satisfies requirements that you would otherwise need to build yourself in a decoupled approach, the more sense the coupled Drupal approach makes. If you find that your requirements will require a lot of customization to Drupal, a progressively decoupled or even fully decoupled approach may make sense.

Considering the Budget and Timeline

Every project has budget and timeline constraints. If you are on a tight timeline (and budget), building a coupled Drupal platform is often a solid choice. Drupal provides so many out-of-the-box features that, with flexible requirements, you can build a website in a very short period of time. For instance, we were able to build a small Drupal site using Site Studio in just a few days. The more expansive your budget and timeline, the more options you have in approach.

The Versatility of Drupal

After you've done the analysis and come up with the best approach, understand that circumstances may change. Because Drupal can handle any point on the spectrum, you can evolve your approach over time; its API-first architecture allows you to move from a coupled Drupal build to a fully decoupled one as your needs change.

Here at Bounteous, our website was originally built with a coupled approach. However, we recently decided to refresh the site. Drupal has allowed us to decouple parts of the site that make sense to decouple but keep the other parts coupled. As needs dictate, we can continue to decouple only the parts that we need to.  

Drupal is a versatile system that can be the centerpiece of your DXP. How you use it will depend on the factors above and others that you find important. Regardless of the approach you take, Drupal is versatile enough to change as your needs change in the future.

Feb 16 2024
Feb 16

Contributing to Drupal is one of the most important things we can do as a part of the Drupal community. Considering that the platform is open source, contributions are essential to keep Drupal advancing. When it comes to contributions, there are a number of ways to get involved—and they don't all involve coding. I recently had the opportunity to contribute in the form of speaking at DrupalCon about a module our team rescued.

The Origins of Our DrupalCon Session

Our Drupal team has been working with the TB Mega Menu module since 2017. As we worked on various projects and tried to meet each client's different needs, we ended up making many updates and changes to the module. We eventually realized this module was no longer being maintained, so we applied for ownership and, ultimately, ended up rescuing the abandoned project.

We saw first-hand the community benefit that came from this project going from abandoned to rescued. Once we added our fixes and started updating the module, the community began using it again. Seeing the community jump right back in helped us to understand the value of contributing back to Drupal.

Encouraged by this new understanding of the importance of this contribution, we looked for a way to share it with the greater community. In a way, sharing our story about contributing back to Drupal is itself another contribution to the Drupal community.

The Speaker Application Process

Since we wanted to share our experience of community contribution and demonstrate there are many different ways one can contribute, we decided to share our story at Drupal camps and DrupalCon. We first applied to Florida DrupalCamp and we did not get in. If something similar happens to you, it's important to not get discouraged. We took that "no" and let it drive us—we only worked harder when we applied to DrupalCon.

We spent a lot of time updating our proposal to DrupalCon. Our hard work and proposal revisions paid off, and we were rewarded with a "yes!" Here are some tips to keep in mind when working on your proposal.

Pick a Topic that Excites You

Pick a topic that you're excited about. If you're passionate about your topic, that will shine through in your proposal (and later on in your presentation). We were very excited about our topic and held it close to our hearts, which fueled our proposal development.

For our DrupalCon proposal, we took a step back and thought about how we could share this experience we were so passionate about, and how we could have our audience understand the importance of this contribution and get excited themselves.

Keep Your Proposal Direct and Concise

Make sure your proposal is direct and concise. It's always helpful to have other people take a look at your proposal and provide a fresh perspective. If you're able to, it's also beneficial to have someone with speaker proposal experience review.

Select a Catchy Title

Choose a title that's eye-catching and true to the content of your session. Of course, you want your title to create interest, but it's also important to make sure that your session's attendees are getting the content they expected when they chose to attend.

My Experience as a First-Time Speaker

Contributing to TB Mega Menu and presenting at DrupalCon were my first major experiences within the Drupal community. This year, DrupalCon was virtual, and it was a cool experience presenting online. As a first-time presenter, there were a few things I found comforting about presenting virtually. Personally, I felt less nervous because I didn't have to stand on stage and present to a crowd. I felt a bit more casual and comfortable in my own home. There was a chat and Q&A feature so I could see if the audience was engaged in my presentation. Overall, I enjoyed presenting virtually for my first speaker experience.

Co-presenting with my colleague Wade Stewart was another important element of this experience. I had never presented before at any conference, so having a co-presenter for my session helped to alleviate some of the nerves I experienced.

We did a lot of individual practice to get familiar with our own pieces of the story, and we also practiced frequently together to ensure we both felt comfortable and that we had a good flow. For anyone who is interested in speaking at a conference like DrupalCon but who might be hesitant or nervous to do it alone, I definitely recommend finding someone to co-present with. In my experience, it removed a lot of pressure and made the experience more fun.

Giving Back to the Drupal Community

It has been great as a relatively new member of both the Drupal community and Bounteous to be able to speak at DrupalCon and participate in TB Mega Menu. Both of these experiences have really helped me to appreciate and understand how important the community is around Drupal.

I am thankful that I was able to contribute via our module rescue and then contribute again in a non-code way by sharing the experience and speaking at DrupalCon. I encourage everyone to explore the ways that you can give back to the Drupal community! Contributors can earn credits for identifying or fixing problems, contributing code, or a host of other non-technical options like speaking at conferences.

Feb 16 2024
Feb 16

In the latest version of Site Studio, Acquia has introduced a game-changing feature that is sure to challenge Drupal Core's Layout Builder as the premier go-to tool for site builders. Site Studio already has a superb component building and editing experience, but now users can add and edit components live on the page.

In this post, we'll go in-depth on this new feature, plus other recent updates to Site Studio.

Visual Page Builder in Acquia Site Studio

On previous iterations of Site Studio, users could edit existing components on the page live via the Page Editor, but the components had to already exist in the layout canvas field. This operated in a similar fashion to other Drupal page builder elements such as Panels, Layout Builder, Paragraphs, etc., and was only accessible through contextual links. If a user wanted to add a brand new component to the page, they had to add it via the node edit form. But now, all of that changes.

While the layout canvas is still accessible via the node edit form, content editors can completely assemble a page from the front end, providing an entirely new meaning to the layout canvas field concept. Other than page creation or administrative settings, content editors may have little need to open the node edit form when adding page content. Of course, this all depends on how your site's content types have been architected. Here is a brief tour of the new page builder experience:

image highlighting where to find the visual page builder button at the top left corner of the site

When a user is logged in and on a page that utilizes Site Studio and a layout canvas, they will see a new Page Builder button on the admin toolbar.

image showing how page builder mode enables you to add, edit, move, delete, duplicate, and/or save as component content

Enabling page builder mode will allow users to add, edit, move, delete, duplicate, and/or save as component content. Users can also save the entire page layout.

image showing components menu fly out on the left side of the screen

As great as this new experience is, it's also helpful that there is consistency in how new components or elements are added: still via the left-side, off-canvas components drawer, making the transition seamless. Users don't have to re-learn how to add components, but rather get an improved page building experience.

image showing how the component editor looks

The component editor itself also behaves the same way whether users are using the page editor, visual page builder, or the layout canvas on the node edit form.

The visual page builder is included as a new submodule within Site Studio and has to be enabled before it can be used.

Pro Tip: Developers should also be aware that anytime you update Site Studio, enable new submodules, and/or create or alter components, it's important that you run the import and rebuild functions. This can be done from the Site Studio UI or via Drush commands. For additional information on how the visual page builder works, visit Acquia's Site Studio documentation.

Site Studio's new visual page builder provides a whole new meaning to "what you see is what you get." The page building experience for content editors has never been better or easier, and this new feature alone should be enough to convince you to use Site Studio on your next project.

Other Site Studio Highlights

While the addition of the Visual Page Builder is kind of a game-changer for Site Studio, the latest release also includes some other smaller but no less important features, including some accessibility enhancements, rel attribute support, and more.

Sync Batch Limit Overrides

On previous builds of Site Studio, admins were limited to importing 10 configuration items at a time via Site Studio Sync to reduce the amount of memory required. Acquia has now exposed a method allowing admins to override the default setting. By adding the following to a Drupal settings file, you can increase the number of configuration items that process per import batch:

$settings['sync_max_entity'] = 20;

This is one of the few pieces of Site Studio that is controlled by a developer and must be updated in code. Users should also be aware that increasing this value requires more memory and can lead to issues.

Rel Attribute Support

Acquia has also added support for the Rel attribute on the link element. This attribute defines the relationship between the linked resource and the current document. Previously, if users wanted to have Rel attribute options on links, they had to be added by a component builder. Now, when a link uses the type "URL" and the target is set to "New window," a group of checkboxes will automatically appear for the following options:

  • nofollow - prevents backlink endorsement, so that search engines don't pass page rank to the linked resource.
  • noopener - prevents linked resources from getting partial access to the linking page, something that is otherwise exploitable by malicious websites.
  • noreferrer - similar to noopener (especially for older browsers), but also prevents the browser from sending the referring webpage's address.

The new Rel attribute can be found on the Link, container, slide item, and column elements. It should be noted, for the SEO-conscious, that the use of nofollow will stop search engines from passing page rank endorsements to the linked resource. This is often used in blog comments or forums, as these can be a source of spam or low-quality links.

Google and other search engines require nofollow to be added to sponsored links and advertisements. Additionally, the use of the No referrer toggle can affect analytics because it will report traffic as direct instead of as referred.

Nolink Token Support

One under-the-radar update from Acquia is the ability to use the <nolink> token in Site Studio menu templates. Experienced site builders probably know about using the <nolink> token on menu links to render them as a heading, etc., without a link attached. It's a great way to add sub-level menu headings.

On previous builds of Site Studio, users were unable to use the token, as it would still render as an anchor tag with an empty href. In 6.5, using <nolink> will result in the menu item rendering with a <span> tag instead. Nothing needs to be done to start using the token, though your menu styles may need to be updated to account for the usage of <span> tags. Also note that if a different HTML element has been specified in your Menu Template, that setting will take priority.

Accordion Accessibility Enhancements

Accessibility is a moving target. Keeping a site up-to-date with accessibility enhancements is one of the more important responsibilities we have and Site Studio is no exception.

In this version, Acquia has added some accessibility improvements to the Accordion element for the end-user. The header links will now have an aria-expanded attribute, which toggles between true and false when expanded and collapsed, respectively.

Accordion header links will now use aria-disabled="true" if the parent Accordion tabs container has the Collapsible setting toggled OFF. This is only applied when the item is expanded, to indicate to a screen reader that the panel cannot be collapsed manually.

When the panel is collapsed because a sibling accordion item is expanded, the aria-disabled attribute is removed. Accordion header links now have aria-disabled="true" permanently set if the accordion item has been disabled through Navigation link settings.

Bug Fixes and Other Improvements

The latest build of Site Studio also includes a bug fix related to sync package entity dependencies not being removed when they were no longer used on the entity. Essentially, when a sync package contained entities that had their dependencies updated, the sync package would contain both the original and the new dependency.

For example, if a component exists in a package and you then update that component's default image, both image files would be included in the sync package rather than just the latest one. Now the old dependencies should no longer appear in that sync package. This could also potentially reduce the size of sync packages in cases where multiple deprecated dependencies were present.

Font Display Property Options

And last but not least, Acquia has now added a font-display property option to the Font library settings page. This CSS property, when used, determines how a font face is displayed based on whether and when it is downloaded and ready to use. It is a very small feature update but a useful one, although only developers really need to worry about implementing it.

Summary

As with any Drupal updates, it's recommended to fully test these new features and fixes (as applicable) on your site's development environment before deploying to production. You should also have a backup of any code or databases before upgrading. Version 6.5 of Site Studio is not backwards compatible.

With the addition of Visual Page Builder, Site Studio is just further cementing itself as an excellent component and page builder tool for Acquia-hosted Drupal applications. The more improvements they make, the harder it is to imagine building a site without it.

For additional information on Site Studio, check out some of our other posts:

Feb 16 2024
Feb 16

In order to create great digital experiences, you need to first have a great team in place. If you're reading this, you've probably already come to the conclusion that you need a Drupal team, whether it's to build a brand new Drupal site or to maintain an existing site. We've broken down some of the challenges and solutions for you to consider when building your Drupal team.

Defining the Skills and Roles Your Team Needs

First, it's important to step back and understand all of the different skills and roles that you may need on your team, depending on what stage you're at in your Drupal process. A team that is building a Drupal site may look very different from a team that is maintaining a Drupal site.

To build a Drupal site, your team likely needs to include:

  • Product Owner to gather the requirements for the site and determine what the site needs to do
  • Experience Design to design the site
  • DevOps to build the infrastructure to host the site
  • Technical Architect to plan out the site build
  • Developers to build the site
  • Project Management to keep the project on track
  • Quality Assurance to confirm the site works as intended
  • Content Editor/Creator to build out the content for the site

Once the site is built, however, the team needed to run and maintain it may include:

  • Developers to maintain and enhance the site
  • DevOps to keep the site up and running
  • Content Editor/Maintainer to keep the content up-to-date
  • Marketing to attract users to your site
  • Analytics/Insights/SEO to understand how users are using your site and adjust the site accordingly
  • Project Management to manage the team on a day-to-day basis
  • Product Owner/Experience Design to plan out and design new features and functionality for the site

Not only can the needed roles change depending on whether you're building a site or maintaining it, but some of the skills required won't be needed at the same frequency. For example, for any reasonably sized site, you will need at least one full-time developer to maintain the code, fix bugs, and add enhancements. However, once the platform is built, the amount of DevOps tasks may not occupy someone full time.

So this then leads us to the question of: should you build your entire Drupal team in-house? Or should you outsource some of it—or even all of it?

Building Your Team: Hire In-House, Outsource, or Both?

The possible solutions fall on a spectrum and each has its own set of considerations.

Hire the Entire Team In-House

If your organization is large enough, there's a good chance you have the resources to hire an entire team.

First, map out the talent you already have available to you internally, and identify the gaps in skills that need to be filled. Then, before jumping immediately into recruiting for the specific roles outlined above, consider if you might be able to hire someone that is more of a generalist. For the skills that won't be needed often enough to keep someone busy full-time, can you find one person to wear several different hats? If so, can that person be effective enough at those different skills for your Drupal site to be successful?

After exploring your options, it's time to move into the recruiting and hiring process. Good Drupal talent can be hard to find, but it's out there! A good place to start is on LinkedIn, searching for people with Drupal capabilities that may be in or connected to your network. Networking in the community can be very helpful if you're looking for local talent: consider meetups, or local events like MidCamp in Chicago if the timing is right. There are also job sites that specifically call out Drupal talent, like jobs.drupal.org.

Hire An Agency That Already Has People With the Skills

If your organization is smaller and you don't have the resources to hire an entire team for building and maintaining your site, your best bet may be to work with an agency.

If you do not have an IT team, it might make more sense to host your site with a provider like Acquia rather than building a DevOps team to monitor and maintain the infrastructure. Even if you do have a knowledgeable IT staff, it may not make sense to use them for this if they are not used to working with the technologies needed to host a Drupal site.

By working with the right partner, you can rest assured that your site is in the hands of experts. When evaluating partners to work with, you’ll want to first make a list of the things that matter most to you. You probably want more than just a great website end result; more than likely, you also want to become smarter from the experience and retain knowledge, as well as have confidence that you’ll be able to maintain and grow the site.

Identifying what you want to get out of the experience besides the actual website will help guide you in choosing the type of company you want to work with. Some companies will focus solely on turning the website around quickly. Others, like Bounteous, focus on improving digital capabilities and maturity—while also delivering an excellent experience to your customers. If that entices you, look for partners that value co-innovation.

We also encourage you to choose a partner that contributes to Drupal. This will have a great impact on the Drupal community and, ultimately, improve the Drupal ecosystem.

Hire an Agency to Build the Site, Then Hire and/or Train Your Own People to Run & Maintain It

The perfect solution for your needs might be a mix of the first two options. Hiring an agency to do the build and then hiring and training your own people to maintain it grants you the benefits of having experts build your site, without having to hire an internal build team whose makeup might then need to change once the build is done.

This option is also beneficial because once your partner of choice is finished with the site build, they can actually be a great resource in helping to hire the talent you need to maintain the site.

Your strategy can (and likely will) shift over time, so your approach to your Drupal project should reflect that. Even if your long-term desire is to do it all in-house, you can ease into that through evolving your approach over time. Some of our clients bring us in at the start to build the platform and create a strong foundation. Then they have us actually teach them Drupal and work alongside them as they learn. Ultimately, they end up taking over everything in-house.

Take the Long Road and Learn as You Go

Building a Drupal team can seem like a daunting and challenging process, but the good news is that you're never alone. Take time to consider the phases that occur after launching a site; a new site will need to do more than just expose information. What integrations are required? What will come next in terms of digital capabilities? Thinking about maintenance and growing your digital maturity may influence your hiring and staffing goals.

Rethinking the Replatform - Taking Advantage of Drupal’s Features and Flexibility

Get (or stay) involved with the Drupal and open source community. Involvement in the community means you will always be surrounded by individuals who are more than happy to answer any questions and provide guidance along the way. Learn from other people's experiences and stories and apply those learnings to your own decisions. And lastly, adapt and learn as you go!

Your great Drupal team is within reach—get out there and make it happen!

Feb 16 2024
Feb 16

If you're reading this, you may have already noticed that we've recently given Bounteous.com a fresh coat of paint. What might be less obvious is that we also took this redesign as an opportunity to slowly begin decoupling the front end of our existing Drupal site. We've considered decoupling in the past but were unable to justify the effort for a full-scale overhaul of our front end given other competing responsibilities. So what changed this time?

Our initial design concepts implied a phased approach. There were effectively only two pages that featured a completely new design and also incorporated a number of new behaviors and animations not present on the existing site. For this first phase, the rest of the site would get a mostly cosmetic overhaul, applying updated global styles to better match the new design introduced elsewhere on the site.

This put us at a similar crossroads. We believed that leveraging a JavaScript framework would greatly benefit our ability to achieve the motion-based interactions implied by our ambitious new designs. But for this phase, introducing a JavaScript framework wasn't really necessary for the rest of the site. In the short term, the cost of decoupling the entire site would greatly delay our ability to ship what was essentially just two new pages. This conclusion led us to a question that in hindsight seems pretty obvious:

Could we start by only decoupling two pages on our existing site?

Initially, we didn't know. But as we started considering this project with the assumption that we could make this change only for these two new pages, it became the difference between taking this step now or continuing to kick a large-scale change to our front-end architecture down the road to some undetermined date.

We eventually landed on an iterative approach to decoupling Bounteous' existing Drupal site with Gatsby; starting with only two pages, but laying the groundwork for any page on the site to be rendered primarily by either React or Twig. What follows is a look at how we did it, what we learned, and what we think this means for the future of our site.

Upcoming Event - DrupalCon North America

An Iterative Approach To Decoupling Your Existing Drupal Site With Gatsby

One Site, Multiple Front Ends

For our JavaScript framework, we selected React, which we were technically already using in some minor ways on the existing site. While it would be possible to do this with a different framework, we found that the large React ecosystem would greatly accelerate our ability to achieve some of the motion-based interactions implied by our new designs. We ended up using both Framer Motion and react-lottie extensively, and they saved us quite a bit of time and effort.

While we had already decided that we'd be building additional React components in support of this new design concept, we also decided that we'd specifically be using Gatsby as our React framework of choice. Gatsby's plugin ecosystem greatly simplified the process of sourcing data from our existing Drupal CMS. Gatsby also opened up the possibility of statically generating portions of our site, which Bounteous.com was well suited for, given that most of our content changes infrequently.

Compared to a client-side approach to decoupling, server-side pre-rendering can have both SEO and performance benefits. As an added bonus, having these pages pre-rendered separately from Drupal also made it easier for React developers to contribute without ever having to set up a local Drupal environment.

Settling on these initial conclusions provided us with the following high-level architecture:

mock-up of the high-level architecture of the updates to bounteous.com

Drupal would be the CMS backend powering content for all of the pages on the site; both traditional CMS rendered pages, and pages rendered statically by the Gatsby build process. In the middle would be what we referred to as our 'front end globals.' These globals would be consumed by each front end and included shared styles, variables that serve as design tokens, and full React components.

This structure allows us to take a progressive approach to introduce static content to our site. Initially, we'd only be building a small number of pages statically, but as we prove that this workflow can suit our site and our team, we could gradually shift where the line exists between the pre-built and dynamically built portions of our site.

Or alternatively, if we found that this approach didn't meet our needs, we could shift back to having Drupal render the content given that all of the data already exists in the CMS.

Front End Structure

After some consideration, we decided to take a monorepo style approach and have Gatsby, Drupal, and our front-end globals live in a single repository. Since this was a single domain and we had no concrete plans to distribute these components beyond Bounteous.com, we decided that a simplified repository would help streamline the process as we worked toward a tight timeline.

From the front-end perspective, this resulted in three main top-level directories in the repository: /fe-global, /drupal, and /gatsby. For this phase of the project, /fe-global exclusively contained Sass partials containing design tokens and global styles. Drupal and Gatsby would each selectively import from these partials as needed.

On the React side, we initially focused on building functional components with as little internal state as possible. This allowed us to prototype in the browser early, and also would allow us to provide data to these components from various contexts.

Regardless of whether the data was being sourced from Gatsby's GraphQL API, directly from Drupal, or even hardcoded, the same component could be used. This also allowed us to use Storybook heavily during this phase of the project in order to get early feedback on these components before data was fully integrated.

page mockup shown in Storybook

On the Drupal side of things, we created new content types for each of our decoupled page templates. We also continued to use paragraphs to represent our components as we had been doing for existing content on the site.

The structure of data from the Paragraphs module initially doesn't seem like a natural fit for decoupled Drupal projects, but with gatsby-source-drupal and a few small utilities (which we'll talk about later), we found this data to be reasonable to deal with. In fact, it ended up giving us a high level of layout control, down to the ability to reorder components on the resulting static pages.

Considering that the majority of our content was still being rendered by Drupal, we still had our traditional Drupal theme. This theme incorporated the partials and tokens from our front-end globals alongside Drupal-specific styles, templates, and JavaScript.

Serving a Subset of Decoupled Pages

One of the very first things we had to prove out to ensure that this approach was feasible was serving a combination of static routes (pre-rendered by Gatsby) alongside dynamic routes handled by Drupal.

As part of our Gatsby build process, we copy the 'public' directory, which contains Gatsby's build output, into the document root for our Drupal site. For the initial phase of this project, we were able to use a couple of very specific .htaccess rules to serve our two new static routes.

We knew this solution wouldn't scale long term as we introduced more content to our site. Ideally, we'd want to be able to create Decoupled content within Drupal, specify a path alias, and automatically have that route handled statically. We eventually found that we could achieve this via .htaccess as well.

Our rules take advantage of Drupal's URLs not having a "file" component to them. When we call createPages in gatsby-node.js with an alias like /services, Gatsby creates that route as /services/index.html. The main .htaccess rule checks whether an index.html exists in the 'public' directory for the requested path and rewrites to it if it does.

This essentially means that for any request the Gatsby route 'wins' if there is a related file in the 'public' directory (/public/my-alias/index.html, for example), and all other requests fall back to being handled by Drupal. This has the extra advantage of bypassing Drupal's bootstrap process for all of our static routes.

As focus shifted over to data integration, some adjustments were also necessary to configure the gatsby-source-drupal plugin to meet our needs. The gatsby-source-drupal plugin pulls data from Drupal's JSON:API endpoints and makes this data available to React components via Gatsby's GraphQL API. By default, the plugin imports all data from the source Drupal site. Since for this initial phase Gatsby would only be used to build a small subset of pages, most of this data was unnecessary and also would have the side effect of greatly increasing our build times.

As an initial attempt to solve this problem, we used Drupal's JSON:API Extras module to only expose the resources that our Gatsby build needed to depend on. This helped, but we still eventually needed to enable the file resource, which pretty much immediately sunk our build times.

Gatsby was now importing (and worse yet processing) local versions of years worth of images that we didn't need to support our new content. We eventually found that it was possible to configure gatsby-source-drupal to only import the files referenced by content that was necessary for our builds, but it required a combination of configuration options that wasn't completely obvious from the documentation.

The first step was to add the file resource as a disallowed link type:

// In your gatsby-config.js
module.exports = {
  plugins: [
    {
      resolve: 'gatsby-source-drupal',
      options: {
        // Placeholder: the base URL of your Drupal site.
        baseUrl: 'https://example.com',
        // Disallow the full files endpoint
        disallowedLinkTypes: ['self', 'describedby', 'file--file'],
      },
    },
  ],
}

This alone would result in all files being ignored by the plugin. A little bit further on in the disallowed link types documentation is the following note:

When using includes in your JSON:API calls the included data will automatically become available to query, even if the link types are skipped using disallowedLinkTypes. This enables you to fetch only the data you need at build time, instead of all data of a certain entity type or bundle.

This essentially allows us to re-include specific files if they are referenced by other content. What makes this feature potentially easy to miss is the fact that it uses the plugin's filter option, which typically further restricts the data sourced from the plugin. The resulting configuration ended up looking like this:

// In your gatsby-config.js
module.exports = {
  plugins: [
    {
      resolve: 'gatsby-source-drupal',
      options: {
        // Placeholder: the base URL of your Drupal site.
        baseUrl: 'https://example.com',
        // Disallow the full files endpoint
        disallowedLinkTypes: ['self', 'describedby', 'file--file'],
        filters: {
          // Use includes so only the files associated with our decoupled content
          // types are included.
          "paragraph--dhp_hero": "include=field_dhp_fg_img",
          "paragraph--dhp_animation_cards": "include=field_dhpac_images",
          "paragraph--featured_post": "include=field_dfp_bg_img",
        },
      },
    },
  ],
}

With this configuration, if a featured post paragraph is used on the homepage, any associated background images (field_dfp_bg_img) will be sourced by Gatsby as well.

Providing Drupal Data to Our React Components

So at this point, we have access to all of the necessary data, and also a set of functional components that aren't yet aware of Drupal's data. We also have content types that can use a number of different paragraph types, in any order. This is great from the perspective of layout flexibility, but less predictable from a data integration standpoint.

To help manage this mapping we created a custom React utility called paragraphsToComponents. Assuming that we have an existing GraphQL query that provides paragraph data to our template component, we could use it like this:

const HomePage = ({ data }) => {
  const paragraphs =
    data.nodeDecoupledHomePage.relationships.field_dhp_components

  const paragraphComponents = paragraphsToComponents(paragraphs)

  return (
    // The original wrapping markup isn't reproduced here; a plain fragment
    // with keyed children is assumed.
    <>
      {paragraphComponents.map((paragraph, index) => {
        return (
          <React.Fragment key={index}>
            {paragraph.provider({
              paragraph: paragraph,
              index: index,
            })}
          </React.Fragment>
        )
      })}
    </>
  )
}

As we'll see in a second, the utility returns an array of components that can be used to render the related paragraph data. In the template component's render method we iterate through this array and render these paragraphs in order. This allows us to correctly process paragraph data in any order, with little heavy lifting or redundant code in our template components.

The utility itself is defined as follows:

import AboutUsBannerProvider from "../components/paragraphs/provider/AboutUsBannerProvider"
import AnimationProvider from "../components/paragraphs/provider/AnimationProvider"
import CalloutProvider from "../components/paragraphs/provider/CalloutProvider"
// … Additional component imports ...

// The paragraphs above map to these components:
const componentMap = {
  dhp_about_us_banner: AboutUsBannerProvider,
  dhp_animation_cards: AnimationProvider,
  dhp_callout: CalloutProvider,
  // … Additional component mappings ...
}

const paragraphsToComponents = paragraphs => {
  // Create a new array with paragraph data that also specifies the React component
  // we'll use to render it.
  const mappedParagraphs = paragraphs
    // Add a component key that defines the component using the following
    // naming convention: ParagraphTypeProvider
    .map(paragraph => {
      const componentType = paragraph.__typename.replace("paragraph__", "")
      paragraph.provider = componentMap[componentType]
      return paragraph
    })
    // Filter out paragraph types we don't yet have a mapped component for.
    .filter(paragraph => {
      return paragraph.provider !== undefined
    })
  return mappedParagraphs
}

export default paragraphsToComponents

This assumes a particular naming convention for our components: ParagraphTypeProvider, where 'ParagraphType' matches the Paragraph Type name from Drupal.

As you can see in the example below, our Provider components only have one responsibility: providing the appropriate data from Drupal to our functional components.

import React from "react"

import Callout from "../../components/Callout/Callout"
import HeadlineDivider from "../../components/HeadlineDivider/HeadlineDivider"
import { graphql } from "gatsby"

const CalloutProvider = ({ paragraph }) => {
  const heading = paragraph?.field_dhpc_heading?.processed
  const body = paragraph?.field_dhpc_copy?.processed
  const backgroundOption = paragraph.field_dhpc_bg_opts

  // Prop names below are illustrative.
  if (backgroundOption === "background__waves") {
    return <Callout heading={heading} body={body} background="waves" />
  } else {
    return <Callout heading={heading} body={body} />
  }
}

export default CalloutProvider

export const CalloutFragment = graphql`
  fragment CalloutFragment on paragraph__dhp_callout {
    id
    field_dhpc_bg_opts
    field_dhpc_callout_size
    field_dhpc_copy {
      processed
    }
    field_dhpc_heading {
      processed
    }
  }
`

We're also defining a GraphQL fragment for the data that is required for this component. This gives us a consistent definition of the necessary Callout Paragraph data that can be imported into any other component that needs it.

Getting to this point took a decent amount of time and effort, but once defined it became much easier to integrate future Paragraphs from Drupal by following this pattern.

Shared React Components

There also was an integration problem to solve going from React to Drupal. We needed to syndicate the same header and footer component used by Gatsby to our Drupal pages so that we could provide a consistent look and feel throughout the site, regardless of which front-end technology owned rendering that page. Thankfully, by ensuring that our React components were strictly presentational we were well suited to use these components in a different context.

We approached this by creating an "exports" subdirectory alongside the rest of our React components which contained an exportable version of the header and footer. These essentially functioned as provider components just as we saw with our Gatsby data integration. Initially, these exported components used pre-defined data since they didn't have access to Gatsby's GraphQL API. However, we eventually found a solution to export these components using the same data that is available to our Gatsby build.

As a first step, we created a separate Webpack configuration that used these two components as entry points, and placed the related bundles into a 'dist' directory in the Drupal theme.

On the Drupal side, we used the Component module to help ease this integration. As a simplified successor to the progressively decoupled blocks module, Component allows you to define configuration in a .yml file alongside your JavaScript in order to expose your component to Drupal. In the case of our navigation, we defined the following configuration:

name: Evolution Navbar
description: 'Evolution Navbar'
type: 'block'
js:
  'dist/navbar.bundle.js' : {}
  'dist/vendors~navbar.bundle.js': {}
dependencies:
  - bounteous/react
template: 'evolutionnavbar.html'
form_configuration:
  theme:
    type: select
    title: "Theme"
    options:
      '': 'Dark Theme'
      'theme-light': 'Light Theme'
    default_value: ''

Alongside the following template:
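
<!-- Reconstructed: the script below queries .evolutionnavbar for data attributes and mounts into #evolution-navbar. -->
<div class="evolutionnavbar">
  <div id="evolution-navbar"></div>
</div>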

This manages to do quite a lot with a little. Based on this configuration, a new block will be created that uses our evolutionnavbar.html template and loads our component JavaScript and any dependencies as a library. It also exposes a configuration form which in this case allows you to specify a light or dark theme to be used when rendering the component. The values of any form configuration will be added to the template as data attributes, in this case making 'data-theme' available to our component.

With that in place, the code for the navbar React component that we'll be exporting is as follows:

import React from "react"
import ReactDOM from "react-dom"

import { MediaContextProvider } from "../../components/layouts/Media/Media"
import abstracts from "styles/abstracts.scss"

// Grab cached data from disk.
import NavigationProvider from "../../components/provider/menu/NavigationProvider"
import templateData from "../../../public/page-data/template/page-data.json"

const queryHash = templateData.staticQueryHashes[1]
const data = require(`../../../public/page-data/sq/d/${queryHash}.json`)

const drupalProvider = document.querySelector(".evolutionnavbar")
const config = drupalProvider.dataset

// The component usage below is a reconstruction; prop names are illustrative.
ReactDOM.render(
  <MediaContextProvider>
    <NavigationProvider data={data} theme={config.theme} />
  </MediaContextProvider>,
  document.getElementById("evolution-navbar")
)

First, we're importing cached menu data from our Gatsby build. This was inspired by the approach outlined in Exporting an embeddable React component from a Gatsby app using Webpack. Sourcing this from a cache using a query hash seems…fragile, but it has been reliable for our needs thus far. This assumes that the Gatsby build runs prior to the Drupal build, which already happened to be the case in this project.

Next, we're selecting the wrapping div in the DOM in order to access all of the data attributes provided by our Drupal block. This allows us to pass the theme option set in the instance of this block as a prop to our navigation component.

Finally, we mount the component into #evolution-navbar which is used in the template that was specified in our block configuration.

This approach could add some overhead as the number of components you're working with increases, but works nicely for our header and footer. It also allows us to easily configure different instances of the component block to be used on different sections of the site. We use this to swap between the dark and light themes of the header, and even specify if the form in our footer should be expanded or collapsed by default.

Looking Ahead

While we're happy with the progress made with this initial release, there is intentionally more evolution to come. We've been working on improving the content editing and deployment process introduced by this new workflow. These changes include configuring Gatsby live preview, and also making enhancements to the build hooks module to allow our content team to trigger Gatsby builds on demand.

As we've continued incorporating more content into our Gatsby build process, we've also run into some pain and confusion around the seemingly arbitrary divide between our React and Twig components. In addition to working to make this distinction clearer to our team, we've also been experimenting with solutions that allow our React and Twig components to be used side by side in more contexts, including syndicating markup and styles from Drupal to be used in Gatsby content as needed.

So far we think this iterative approach can be of benefit to others looking to transition the front end of their existing Drupal platform without requiring the commitment of a large-scale re-architecture.

Taking an iterative approach can instead make it possible to prove that decoupling has clear value and also ensure that changes to the development, content editing, and deployment process fit the needs of your team. We're excited to continue evolving Bounteous.com and hope that you'll follow along.

For even more on this topic, check out Episode #284 of the Talking Drupal Podcast - Iterative Approach to Decoupling and An Iterative Approach To Decoupling Your Existing Drupal Site With Gatsby at DrupalCon North America.

Feb 16 2024
Feb 16

The contribution ecosystem is one of the most important reasons for Drupal's success. With over 45,000 modules available to enhance and extend Drupal's functionality, these contributions are critical to maintaining Drupal's status as an enterprise-class content management system.

Getting involved in the Drupal community is beneficial to all parties but can be intimidating, especially when it comes to committing code. It can be hard to know where to begin, or you may not necessarily have an idea for a new module. But that doesn't mean you can't get involved.

Drupal modules are built and maintained by members of the Drupal community. Sometimes, community members move on for a variety of reasons and the module becomes stale. That's what happened to the TB MegaMenu module, a project with over 30,000 installations at the end of 2020.

Rescuing TB MegaMenu

Mega Menus are a critical feature for many Bounteous projects. We selected TB MegaMenu for Wilson.com a few years ago because of the flexibility to power the site's extensive dropdown menus.

Wilson Sporting Goods Site Menu

Unfortunately, in addition to a number of bugs and missing features, accessibility was not well implemented by the module at the time. We were able to provide some patches to address these issues; however, since the project was not actively maintained at the time, we had no way to update the module for the broader Drupal community.

We were in a tough spot because we needed the improvements to keep using the module, so we found ourselves providing these enhancements solely for our clients. That situation was a perfect opportunity for us to get involved in the Drupal contrib community, so we decided to apply for ownership of the module.

Taking Control of Maintainership for a Drupal Module

What does that process of taking over a module look like? Turns out it's pretty simple. Creating a new request in the appropriate issue queue gets the ball rolling, and once you are approved as the new owner, you gain access to the codebase and the module's landing page.

The first thing we had to decide after taking over ownership of the module was how to prioritize our time. Needless to say, TB MegaMenu is not anyone's first priority, and all contributors have to strike the right balance between putting time into open source projects and billable work.

So with two codebases to maintain, one for Drupal 7 and one for Drupal 8/9, we prioritized our work as follows:

  • Fix the most critical bugs that were preventing upgrades and new installs
  • Commit the accessibility enhancements we'd developed internally
  • Apply for security coverage
  • Publish a stable release for Drupal 8/9
  • Continue to address bugs in the Drupal 7 version while prioritizing enhancements for Drupal 8/9

Maintaining Open Source Contributions

Let's face it: carving out time to maintain open source contributions can be difficult—especially if your clients' projects are not relying on updates or enhancements to that work.

The availability of our Drupal developers at Bounteous fluctuates as team members move between projects, so we knew that our contributions to maintaining the TB Mega Menu module would naturally ebb and flow over time.

In light of that, we knew that we'd need to push ourselves a little bit to keep up with TB MegaMenu maintenance work, so we gave ourselves some parameters for getting stuff done:

  • Established weekly "office hours" to prioritize issues and ongoing work
  • Leveraged a Jira board to track bugs and progress
  • Promoted our efforts internally to get other team members excited and drum up additional support 
  • Simplified onboarding for new contributors by creating dedicated local development environments for TB MegaMenu work
  • Lowered the barrier by providing different ways to contribute other than code

Fortunately, the odds turned out to be in our favor—particularly during the last quarter of 2020, when a core team of contributors came together and gained considerable traction on moving the module forward. Giving back to the community is a core part of working at Bounteous, and contributing to open source modules is just one way we bring that to life.

Supporting Open Source Drupal - Come For the Code, Stay For the Community

Quick Wins When Reviving a Drupal Module

It doesn't take much to bring real value to a stale project right away. If you're considering rescuing a module, scan this list to get an idea of the time investment needed and consider how small changes can make a big difference. Here are some things we were able to do right off the bat to start reviving the module.

Better Communication to Developer Community

One of the first things we did was to update the module's homepage to let people know that the module had new maintainers. This was one of several efforts to restore "faith" in the module and reassure developers who might otherwise be deterred by the number of open issues and lack of recent commits to the module's codebase.

homepage of the TB MegaMenu Drupal module
Test, Review, and Commit Patches

The TB issue queue had several instances of patches that had been posted but never tested and/or reviewed. Merging commits for issues that have already been patched is a great way to improve the module right out of the gate—without having to write any code yourself.
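
For maintainers who want to follow along, a typical review cycle for a community patch might look like the sketch below. The branch name, issue number, patch filename, and contributor names are all hypothetical placeholders; substitute the real ones from the drupal.org issue queue.

# Start from a clean checkout of the module's development branch (branch name assumed).
git checkout 8.x-1.x && git pull

# Download the patch attached to the (hypothetical) issue and apply it locally.
curl -O https://www.drupal.org/files/issues/tb_megamenu-example-fix-1234567-5.patch
git apply -v tb_megamenu-example-fix-1234567-5.patch

# Test manually and/or run the module's tests, then commit with issue credit and push.
git add -A
git commit -m "Issue #1234567 by contributor_a, contributor_b: Example fix"
git push origin 8.x-1.x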

Clean Up the Queue

We evaluated and either closed or postponed open issues that were duplicates, not reproducible, no longer applicable, or already addressed in another patch.

This can be time-consuming, and in the case of TB MegaMenu, there is still an enormous backlog of open issues going back years and years that we'll probably never get through. But on a smaller project, this kind of cleanup can go a long way to making the issues queue more manageable.

Identify the High-Priority Items

Inheriting a large number of bugs and feature requests dating back years (7+ in our case!) might seem overwhelming at first—even after cleaning up the queue. Fortunately, drupal.org forces every issue to be tagged with a priority level, which can be an invaluable tool for determining where to start.

Aside from noting the obvious urgency of critical and major issues, it can be helpful to also look at context and feedback from the community. An issue that has received a number of comments over time or shows recent activity probably warrants your attention more than one that was reported a while back and never revisited.

Creating Documentation for Developers

TB MegaMenu's documentation was limited, so we created getting started guides for both the Drupal 7 and Drupal 8/9 versions of the module, following the Drupal.org guidelines, and posted links to them on the module's homepage. If it's easy to understand how to use the module, developers can much more quickly determine if it fits their project's needs.

Providing simple, concise, and accessible documentation can make the difference between hours of headaches and a smoother, more efficient working experience. Over time, we expect this investment in documentation to yield a more popular module with a larger install base.

Bringing TB MegaMenu Back to Life

Most of these things can be done without too much effort, and they are a great way to start breathing life back into the project. Here's the fun part: shortly after we started work on TB MegaMenu, we noticed an uptick in activity in the issue queue. By pushing out new commits and simply responding to tickets, the community came back to life!

We started seeing new issues being reported, and new requests for support and features (not that we like bugs, but we do like being able to fix them!). All of those community members who are getting involved now are helping us to make the module better and could also be contributors in the future.

Looking Forward: Contributing to the Drupal community

Now that we have some momentum and are more comfortable with the process and the code, we have big plans for more work with TB MegaMenu. Some recent commits improve upon accessibility and coding standards, and we're getting ready to start a 2.x version of the Drupal 8/9 codebase which will simplify the front end. We've also applied for security coverage, which is a big step towards validating the module for use on some sites.

Contributing to the Drupal community is good for everyone. It benefits you as a developer because it gets you involved in the community, gives you ownership, and helps build your expertise. Of course, it also helps to strengthen the community of Drupal users and developers because our collective efforts translate to better modules. And finally, it's good for your brand—since your work will help to elevate your company's status in the Drupal community.

Are you aware of a module that needs some help? Here are some links to help you get involved:

Thank you to Andy Olson, Irene Dobbs, and Wade Stewart for the contributions to the TB MegaMenu module and their help on the blog!

Feb 16 2024
Feb 16

With so many products and services available these days to assist with your website needs, it can be difficult to navigate all the options and determine the right solution for your business.

At Bounteous, we do a lot of work with Drupal and Acquia products and services. Over the past year, we've spent considerable time working with Site Studio, developing training materials for clients, creating resources, and working through various projects. Let us give you a brief tour of Site Studio and why it's perfect for your next project.

What Is Site Studio?

Site Studio, formerly known as Cohesion, is a Drupal product from Acquia that makes it easy to build a component-based website. Site Studio moves the front-end theming layer into the UI and gives content editors and marketing managers more control than ever over their sites. It provides a new site-building paradigm that's far more efficient than traditional builds.

The tools and features that come with Site Studio provide an excellent base that allows client-side developers to contribute to a build from day one. The low-code nature of Site Studio shields content editors from the Drupal backend and allows developers to focus on the overall content editing experience they're creating. It also gives your development team the ability to create elegant, performant, more powerful sites in half the time.

Let's break down the top points that contribute to the above philosophy.

Low-Code

Site Studio is designed to be low-code. This can mean a lot of things for different systems; however, in the context of Drupal, this is an important point to highlight.

Drupal inherently has its own hooks and other functions that make it a powerful and highly customizable CMS. However, using those hooks and functions correctly requires a level of familiarity that a developer who has never touched a Drupal site most likely won't have.

This is where the low-code nature of Site Studio really shines. New developers do not need to learn hooks, template suggestions, or any of the other Drupalisms you will find on most sites. By layering Site Studio on top of Drupal, we now have a mechanism that takes care of the heavy lifting we used to handle with custom code. Site Studio may be low-code, but it is certainly not low on features.

Easily Extendable

Site Studio comes with a myriad of predefined components and styles thanks to the DX8 UI Kit. After initial setup, developers are immediately given access to over 50 different components consisting of sliders, cards, accordions, and more. While this is all great to have at the start of site-building, Site Studio takes it one step further.

Every single component that comes with Site Studio is extendable. Not only does this allow developers without previous Drupal experience to build rich editing experiences—but it also gives a noticeable jump start to most any component the developer is tasked to build.

Need to build a slider but the existing component is missing a field you need? No problem! Extend the existing slider and add your field. All of this takes minutes, with no custom code to write. Site Studio has a ton to offer out-of-the-box, but there is far more you can do with it.

Staying True to Drupal

There's an important synergy to highlight here. Site Studio comes with wonderful and easy-to-use features right out-of-the-box, which is one of the main draws to using it on any project. But at the end of the day, Site Studio is using Drupal as its backend—and we as developers need to be sure things are done correctly and the overall health of the application is kept at the forefront.

The beauty of Site Studio is how it takes the features of Drupal that developers love and simply extends them rather than rewriting them. By building on these strengths, Site Studio adds essentially no extra complexity to debugging, testing, or deploying sites. This translates into a development experience that all developers, experienced and novice alike, can share.

Starting With the Backbones of Site Studio

Composer, configuration management, and local environment: these are things a Site Studio implementation will not shake up too much from the stock Drupal setup we already know.

When getting a client resource up to speed, it's important to make sure that these fundamentals are understood and configured correctly from the beginning to ensure the developer is set up to succeed. Let's break down each of these points.

Site Studio - Composer

Composer is still behind the scenes managing all packages and modules just like a stock Drupal site. The only difference here is the inclusion of acquia/cohesion in the composer.json file.

However, for a developer working with Drupal and Composer for the first time, it can be a bit daunting to make a change to the project. Understanding Composer's inner workings and how it relates to a Site Studio and Drupal implementation is crucial for any developer contributing to a Site Studio project.
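
A few read-only Composer commands can help a newcomer get oriented before touching anything; this is a minimal sketch assuming the acquia/cohesion package mentioned above, and the actual update behavior depends on the version constraints in your project's composer.json.

# See which version of Site Studio is installed and what requires it.
composer show acquia/cohesion
composer why acquia/cohesion

# Update Site Studio within the allowed version constraints, along with its dependencies.
composer update acquia/cohesion --with-dependencies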

Site Studio - Local Environment

Again, no real changes here when developing with Site Studio. Something we have found that can greatly benefit any developer, whether internal or client-side, is to choose a single virtual environment scheme for all Site Studio implementations. Once a standardized system has been adopted, you can build custom tooling for that system.

This is a massive time saver and spares plenty of debugging headaches. When using Site Studio, there is a Drush command that updates all Site Studio templates and components. Manually running this one command may seem trivial on the surface—but when added to our automated tools for resetting a local environment, it proved to be one of those little things that was worth more than its weight in code.
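
As a sketch of what that tooling might look like, here's a minimal local reset script. The Site Studio command names are assumptions to verify against your installed version: the 6.x releases we used expose cohesion:import and cohesion:rebuild, while older Cohesion releases used a dx8: prefix.

#!/usr/bin/env bash
# Reset a local Site Studio environment in one step (a sketch, not official tooling).
set -e
drush cache:rebuild          # Clear Drupal caches
drush config:import -y       # Import exported configuration
drush cohesion:import        # Re-import Site Studio assets (command name assumed)
drush cohesion:rebuild       # Rebuild Site Studio templates and styles (command name assumed)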

Site Studio - Configuration Management

With Site Studio, this is probably the biggest change you will first notice. Just about every setting, component, and template in Site Studio is tracked in configuration. While this is a wonderful part of Drupal 8 that Site Studio utilizes well, it can get a bit confusing when trying to decipher exactly what these YAML files are doing when it comes to a pull request and working alongside other developers.

The best way to handle these config files is to pull down the branch of the pull request and import the config, confirming that everything shows up as expected and there are no errors on import.
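
A minimal sketch of that review workflow, assuming a standard config sync directory and a hypothetical pull-request branch name:

# Check out the branch under review (branch name is a placeholder).
git fetch origin
git checkout feature/site-studio-component-update

# Preview what would change, import it, and confirm a clean result.
drush config:status
drush config:import -y
drush config:status          # Should report no differences after a clean import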

What's New in Site Studio

As with any Drupal module, continuous development and improvement are essential to keeping your module relevant and in use. Since we first got our hands on Site Studio in early 2020, we've seen a ton of improvements. We started with version 5.7.9 and are now working with version 6.4.0. The following are some of the improvement highlights.

Component Field Repeaters

In version 6.3, Acquia added the Component field repeater feature to Site Studio. This is one of the most essential and important features added to date. Previously, repeater patterns were only available via the Site Studio Views Template builder. Think of this like a For Each loop, a control flow statement for traversing items in a collection.

Without the ability to repeat field items in components (a feature that is common in Drupal Site Building), we had to create a parent/child component relationship using dropzones. This method was okay and got the job done, but there were some limitations with the accordion components. Some of the issues with using this approach were that we could not limit which components could be added to a dropzone nor could we save a component as component content (more on that later).

The only other option was to create components with limited cardinality or item limits. That's okay in some cases, but when considering an accordion component or a carousel, users usually want unlimited. But now with field repeaters, that can be done directly on the component, making Site Studio even more powerful than before.

For example, to build a component using Site Studio's accordion elements, you no longer need to rely on the drop zone and separate components. Users can set a field repeater on the accordion item and repeat as needed. This simplifies the component building experience, making it even more intuitive.

Component Content Improvements

In the newest version of 6.4, Acquia adds two major improvements to component content. For those not familiar, component content is a saved component created on a page that allows users to reuse it on multiple pages but have a singular point of entry in order to update.

Previously, component content could only be created via a layout canvas field and saved from there. This is fine for instances where you create a component that you decide to reuse after the fact, but what about a component that you'd like to put through an approval process before it ever ends up on a page? Now, component content can be created directly from the component content list, which matches more standard entity behavior.

In addition to being able to save component content from the Component Content Manager, you can also now save components as component content when using a dropzone element. Previously, components using the dropzone element could not be saved as component content, thus limiting what you could actually save. This was challenging to deal with since, in order to create the effect of a repeating field, we had to rely on dropzones beyond their original intended scope. Now, any component can be saved as component content.

There are plenty of other improvements and fixes. It's clear that Acquia is committed to continuously improving Site Studio, and we would expect nothing less.

Why You Need It for Your Next Project

Creating Drupal websites with Site Studio has never been easier for the content editor or the developer. No matter their experience, anyone can contribute to a Site Studio project. With all the feature-rich aspects that are available today, as well as the ones to come in future updates, we can all see why Site Studio is an important shift into modern application and website development.

Gone are the days when there are hard lines in the sand between content editor and developer. Site Studio is a welcomed tool because it echoes the same mantra that Drupal and the community have had since its inception: let's build something together.

Want to learn more about Site Studio and what it can provide for your client solutions? Check out our article that takes a closer look at Site Studio as well as how we built a site in 10 days.

Feb 16 2024
Feb 16

High-performing websites require thought and intentionality behind their design and implementation. A single web page today is composed of many requests that happen over the network. These requests could include the markup for the page you're looking at, CSS instructions for how the page should be styled, fonts, images, interactions with analytics tools, and much more.

A common method to improve performance for all those requests is to use a Content Delivery Network (CDN), which is now available out-of-the-box on Acquia Cloud! But how do you set it up? More importantly, why do we even use a CDN? Let's explore these questions, articulate why a CDN matters, and equip you with guidelines for setting up Acquia Cloud Platform CDN on your own project.

Let's Start With How

Before we can get to the "why" of using a CDN, it would be helpful to have some vocabulary about what a CDN is and how it works.

Let's start with the concept of HTTP caching. The HTTP protocol has instructions that tell a browser it can cache a response for a period of time. There are a lot of configurations that vary in use across browsers and servers, but let's just focus on one of those instructions called the Cache-Control header. This header can tell a browser that it’s allowed to cache an HTTP response for a period of time.

Take an About page as an example. Say the server responds with a Cache-Control header with the value max-age=60,public. This tells the browser that it can cache the response for one minute. Here's a visual of what that looks like:

illustration showing how the HTTP Client interacts with the server

You can see that the second and third requests from that browser are cache hits; the requests never hit the server. Why? Because the browser was told it can cache the response for one minute.
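
If you'd like to see this header for yourself, you can inspect any page's response headers with curl; the URL and the exact header value shown here are just placeholders.

$ curl -sI "https://www.example.com/about" | grep -i cache-control
cache-control: max-age=60, public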

This is great for that user. But, what about all the other users coming to the site? They won't get a cache hit. Introducing...the HTTP proxy cache! An HTTP proxy sits in between your browser and the server that the HTTP request is going to. By default, an HTTP proxy cache just lets HTTP requests pass back and forth between the browser and the server. These HTTP proxies are allowed to respect the cache rules of the HTTP protocol, hence why we call them proxy caches. So, imagine many users going to the site. Each one will have their own browser cache, but the proxy will have its own cache. Here's what that would look like:

illustration showing how each HTTP Client receives their own browser cache and how the proxy has its own cache

In this instance, user one goes to the site and the request goes all the way to the server because neither the user's browser nor the proxy has a cached value yet. But when user two goes to the site, even though that user doesn't have a browser cache yet, the proxy cache does. So the request doesn't go all the way back to the server; it stops at the proxy cache, and the response is returned from there.

Now, imagine there are 1,000 distinct users going to the site all within that single minute; only the first request would go all the way back to the server. The rest of the requests would be served from the proxy cache.

Why are we talking about proxy caches? Because, in large part, that's what CDNs do; that's how they work, and when you're thinking about how Acquia Cloud Platform works, it's good to keep this in mind.

Why Use a CDN?

Why would you use a CDN with Acquia Cloud, especially knowing that it already comes with a proxy cache called Varnish? Doesn't that seem like it's just duplicating functionality? Not exactly, especially when you think about the geo problem.

The Geo Problem

You're not sitting in the same room as the server that rendered this blog post to you. You might be miles away from the server, and network latency can have a big impact on how quickly the site responds. With Acquia Cloud, you have some flexibility over which geographic region your servers are in. Let's say the About page from our earlier example was hosted on servers on the East Coast of the United States. If you also live on the East Coast of the United States, you're in luck. But what if you're viewing that page from Kenya?

illustration showing world map and the issues that can occur when viewing a site rendered from across the globe

Your browser is going to wait for each request to travel to the server (including the Varnish proxy cache) on the East Coast of the United States and back. The network latency in this case can have a critical impact on the site's perceived performance for the user.

Well, what if we could serve that content from a server closer to the user? That is to say, what if we had a network of servers that could serve this content and the user gets the content from the server closest to them? That would certainly help with the geo problem! Introducing...the Content Delivery Network! With a CDN like Acquia Cloud Platform CDN, your users will get content from a server closest to them (after caching rules are applied).

illustration showing how a CDN can create a global network of servers
Other Benefits of a CDN

There are other benefits to a CDN besides addressing the geo problem. It can reduce the overall requests that hit your Acquia subscription, which might help you target a lower subscription level. It can help improve your site's performance under peak load.

Consider the fact that any request served from the CDN is a request that does not consume resources like memory or Central Processing Unit (CPU) on the application or database servers. There are also security benefits for some CDNs, which are worth investigating on a case-by-case basis to see if they apply to you.

How to Set Up Acquia Cloud Platform CDN

Acquia Cloud Platform CDN comes with your Acquia Cloud Enterprise subscription; however, there are some steps to get it set up, which we'll discuss here.

On the tech side, there are a few interesting points:

  • It's supported by the Acquia Purge module, which means you can do active purging of expired content.
  • It doesn't support customizations: it's not compatible with a custom Varnish config (VCL) and it really only responds to the Cache-Control and X-Drupal-Cache-Tags headers from Drupal (it’s a little more complicated than that, but that’s the basics).
  • It's not compatible with other HTTP proxies in front or behind it.
  • It uses Fastly under the hood.
  • It's still in Beta as of the time of this writing, so your setup process may vary from what's laid out here.

Here are two useful documentation links if you'd like to read more:

And now, the moment you've been waiting for—the steps for setting up Platform CDN. These are my personal notes; since Platform CDN is in beta, the specifics may change, but here is what I recommend:

1. First, talk with the Acquia Account manager to confirm Platform CDN is available on the subscription.

2. Add all domains you want supported to all environments in Acquia.

3. Add SSL certificates to each environment, ensuring those certificates cover all domains on their respective environments.

4. Create a Support ticket to enable Platform CDN; be sure to clearly state the application as it is named in Acquia, along with the environments and domains you want supported.

5. At this point, expect some back-and-forth with Acquia support as you iron out details of the setup. For example, at this point, you may go through setup of the Purge module.

6. Once Acquia confirms it's set up on their side, verify the CDN is working (we'll talk through verification later).

7. Update your Domain Name System (DNS) records to a low Time to Live (TTL) so that if you switch over to Platform CDN and it doesn't work, you can quickly switch it back (optional).

8. Update your DNS records. This will make Platform CDN live.

9. Again, verify the CDN is working.

10. Last, update your DNS records to a higher TTL (optional).

The overall process may take some time. I would set expectations at 3-4 weeks to include time to do testing, roll out code changes, and coordinate the rollout with Acquia.

How Do You Verify It Works?

The last thing you want is to switch your DNS over to Platform CDN only to realize some configuration is wrong and your site is down. You can easily prevent this scenario by verifying it's set up correctly, and below we'll go through five things to check. You’ll want to wait to do these verification steps until after Acquia has confirmed the CDN is set up on their side.

1 - Verify SSL

First, verify SSL is set up correctly. Of the five verification checks I list here, this is the only one you can do prior to cutting over your DNS. To verify Secure Sockets Layer (SSL), start by confirming the SSL certificate on the server environment itself is correct. The way I do this is a bit roundabout, but it works.

Get the public IP of one of the load balancers using a tool like nslookup. The domain usually follows a pattern like sitenamestg.prod.acquia-sites.com, where sitename is the name of your subscription. Then, pick one of your custom domains and set it to that IP in your /etc/hosts file (this file may be located in a different place depending on your operating system). Here's an example walking through these steps:

First, get the IP of the load balancer:

$ nslookup examplestg.prod.acquia-sites.com
Server:        127.0.0.1
Address:    127.0.0.1#53

Non-authoritative answer:
Name:    examplestg.prod.acquia-sites.com
Address: 151.101.41.193

Now, we set your custom domain to this IP in your /etc/hosts:

$ vim /etc/hosts
...
151.101.41.193 stg.example.com

Finally, we can open up our browser and check that the SSL certificate is valid. Both Firefox and Chrome will show a padlock in the address bar.

where to check the SSL certificate is valid in your browser

If you're on Chrome, you can additionally check what IP address stg.example.com resolved to by looking at the headers of the request in the network tab:

where to check the IP address in Google Chrome

Now, repeat these steps for each domain on each environment you set up. If you're planning a DNS cutover for a new site launch, you can even test the live domain with this tactic. For example, if you set up the domain "www.example.com" on your PROD environment, but you don't want DNS to point there yet, you can still set up the SSL certificate and verify it works using this method.

Last, remember to remove those entries from your /etc/hosts file!

2 - Verify DNS is Pointing to Fastly

This is an easy check, but at this point, it requires that you have updated your DNS records according to what Acquia support has noted in the setup instructions. Take each domain and verify it's pointing to the correct location using a tool like nslookup.

$ nslookup stg.example.com
Server:        127.0.0.1
Address:    127.0.0.1#53

Non-authoritative answer:
stg.example.com    canonical name = acquia.map.fastly.net.
Name:    acquia.map.fastly.net
Address: 151.101.189.193
3 - Verify HTTP and HTTPS Ports Are Open

This is also an easy check. It may seem unnecessary, but doing it can give you assurance that at least the network path from your local computer to the destination ports is working. I love doing this because, if it works, I know any issues I run into are at least not firewall-related. There are a variety of port-checking tools you can use; here we'll use Netcat (nc).

$ nc -z -w 1 151.101.189.193 80
$ echo $?
0

You can see the exit code was 0 which means it succeeded. Now, we'll check the HTTPS port.

$ nc -z -w 1 151.101.189.193 443
$ echo $?
0

4 - Verify You Get Cache Hits From the CDN

Let's say you think the CDN is set up and working correctly and the site comes up. How do you know you're getting cache hits from the CDN and not Varnish? That is, how do you know the request is being returned from the CDN instead of going all the way to the server environment and back? We can inspect the HTTP response headers to tell us this. To do so, we'll use curl, though you can use any HTTP client that shows you the HTTP response headers.

$ curl  -ksD /dev/stdout -o /dev/null "https://stg.example.com"
...
cache-control: max-age=60, public
x-cache: MISS, MISS
x-cache-hits: 0
...

You'll see the x-cache header had MISS, MISS. This means the request was a miss on the CDN and a miss on Varnish. More importantly, note that the x-cache-hits value is 0. This means Varnish has had no cache hits for this request. So, let's make that request again.

$ curl  -ksD /dev/stdout -o /dev/null "https://stg.example.com"
...
cache-control: max-age=60, public
x-cache: MISS, HIT
x-cache-hits: 1
...

Great! We see a cache hit! But, that was a hit from Varnish. How do we know? Because the x-cache-hits header incremented by 1. The x-cache-hits header is controlled by Varnish, not the CDN. So, what we want to see is a request where that value does not increase. Let's make the request again.

$ curl  -ksD /dev/stdout -o /dev/null "https://stg.example.com"
x-cache: MISS, HIT
x-cache-hits: 1

Great! We see the x-cache-hits value stayed at 1. This means the result came back from the CDN; it didn't go to the server environment.

5 - Verify Browser Cache Is Working

If you've already passed the last four checks, you're in good shape. The CDN is working. However, you probably also want to check that your browser's cache is working. It's an easy check to do; here is an example of how to check it in Firefox:

where to check browser cache in FireFox

Here you see the network tab. The "Transferred" column will show "cached" if it was served from browser cache. Be sure to look at different asset types to make sure they are getting cached: HTML, JS, CSS, Fonts, Images.

What Are Good Cache Settings?

Now that you know how a CDN works, why you would use one, and how to set up Acquia Platform CDN, you might be wanting to dig deeper into tuning your cache settings. How do you know what good cache settings are?

First, it's important to understand that you don't simply cache a "page," you cache the resources that make up the page. A given page might comprise a variety of resources. Here's an example breakdown of the types of resources that make up a "page," by the size of each resource:

pie chart breaking down the types of resources that make up the "page" by the size of each resource

Resource: https://www.webpagetest.org/

You can see that over 95 percent of the page's size is JS, CSS, images, and fonts, which are all highly cacheable. By default with Drupal, those will be cached for 14 days! That's pretty good, and depending on your site, you may consider increasing or decreasing that value, which you can find in the .htaccess file.
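
If you want to confirm or change that default, a quick grep shows where it lives; the docroot path is an assumption, and the output shown reflects the mod_expires block shipped in Drupal core's default .htaccess at the time of writing.

$ grep -B 1 "ExpiresDefault" docroot/.htaccess
  # Cache all files for 2 weeks after access (A).
  ExpiresDefault A1209600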

The HTML, on the other hand, is far trickier. The HTML may be highly static content, like an About page that you don't expect to change very often. Maybe you're OK if it takes 24 hours for someone to see an updated version of the content; that's pretty great cacheability for HTML. But what if that HTML has pricing or inventory for a product? That's not very static, so if you do let it be cached, you don't want it cached for very long. A user seeing the wrong price might result in an unhappy customer.

The setting for HTML caching in Drupal is found at Configuration > Development > Performance; under "Caching" you'll see a setting called "Browser and proxy cache maximum age." If you change this value, keep in mind that any HTTP cache (like a user's browser or a CDN) will hold on to its existing copies until the max-age it read at the time expires.
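
The same setting can also be read and changed with Drush; the one-hour value below is purely illustrative.

# Inspect the current page cache max age (in seconds).
drush config:get system.performance cache.page.max_age

# Set it to one hour, then rebuild caches so the new header is served.
drush config:set system.performance cache.page.max_age 3600 -y
drush cache:rebuild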

Here are some reasons for a high cache maximum age:

  • You have the Purge module set up to actively purge expired content.
  • Your content is highly static.

Here are some reasons for a low cache maximum age:

  • You are not using the Purge module to actively purge expired content.
  • Your content is highly dynamic.

There are even reasons for disabling cache in certain scenarios. For example, some content may be sensitive and you want to ensure no one has a copy of it, including the browser's cache (read this drupal.org issue for an example).

If you read around, you'll find recommendations that vary widely. Some are conservative, around 1 to 60 minutes, while others suggest 6 to 12 hours. Unfortunately, it's difficult to make general recommendations about caching policies. The truth is, it depends. And for complex sites, you're not making a single policy for all content and all users; it may vary by user role or by type of content.

For example, you may want content pages to have a high cache maximum age but product pages to have a low one. The policies will also depend on what other caching headers you are using, a key one being the Vary header. Ultimately, you'll need to put some thought and rigor into deciding which policies best suit your needs.

It's worth repeating that high-performing websites require thought and intentionality behind their design and implementation, and cache settings are a fundamental aspect of high performance.

Feb 16 2024
Feb 16

PHPStorm is one of our favorite Integrated Development Environments (IDE) for building Drupal sites. In addition to its outstanding ability to help any PHP developer's productivity, it offers several Drupal-specific time-saving tools—like the ability to handle code completion for hook declarations and applying Drupal coding standards.

Among the many tabs that border the PHPStorm IDE window is one that offers access to one of the hardest-working components of the Drupal ecosystem...the database!

Many developers only interface with the database via Drush commands, performing database backups, or moving content from the server to their local machine. Given the Drupal database is where all content and active configuration are stored, developers should feel comfortable leveraging the database as a research and diagnostic tool when developing solutions or debugging problems. The database can provide insight into how your data is flowing throughout the system, which can help when debugging errors or when working with a new module that modifies data before saving.

In this guide, we'll show you how to connect to your local Lando environment’s Drupal database from within PHPStorm. If you’re using another local environment like DrupalVM or DDEV, you can use the following steps as a guide for how you can connect these other environments.

Obligatory warning: After you connect to the database, you'll have access to modify or delete data, tables, or the entire database. Be sure you’re not working directly with a live/production database! We suggest using the database tool on a local copy of the database that can be restored if needed.

Step 1 - Allow Lando to Receive Incoming Database Connections

By default, Lando does not allow anything but the Lando app to connect to the database server, so we need to tell Lando that it’s OK for PHPStorm to connect. In the Lando configuration file (either the project-wide .lando.yml, or in the local overrides .lando.local.yml), add the following lines:

services:
  database:
    portforward: 3307

When added to a basic .lando.yml recipe, the file will look like this:

code added to a basic .lando.yml recipe

This allows port forwarding on port 3307 to the host 'database', which is the default name of the database container in the drupal8 / drupal9 recipe in Lando. If your database hostname is different, update as needed.

Finally, for this step, rebuild the Lando environment with lando rebuild --yes.
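
Once the rebuild finishes, you can confirm the database port is actually exposed to your host machine; lando info lists every service and its connection details.

# Look for the database service and its external connection port in the output.
lando info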

Step 2 - Connect the Database in PHPStorm

Now that Lando has been rebuilt and is running, we can connect PHPStorm to the Drupal database. If it’s not already open, click the database tab to open the database pane.

In the database pane, click the + sign to add a new data source, select MySQL (or MariaDB).

click the + sign to add a new data source, select MySQL (or MariaDB)

In the new window that opens, enter the server information and credentials to connect:

data sources and drivers window with the server information and credentials shown

Since we’re using the default values that come with the Lando drupal8 recipe, we’ve entered:

  • Name - a name you want to call this in configuration
  • Host - defaults to localhost, you should be able to leave it as is
  • Port - 3307 (or the port you assigned in .lando.yml)
  • User - drupal8 (or the database username you assigned)
  • Password - drupal8 (or the password you assigned)
  • Database - drupal8 (or the database name you assigned)

When you click "Test Connection" you should see a green checkmark to verify that it connected successfully.

So, now what?

Step 3 - Use the Database!

The database pane should show you a tree of the database tables in your Drupal database. When you connect to the database, a console tab will open up in the main editor window. You can also browse the data in a table by double-clicking on the table name in the database tab. In the image below, we’ve opened the table block_content and have the data as a table in the main editor window.

database pane with tree of the database tables in your Drupal database

Why Use this Database Tool Over Others

The database tool within PHPStorm has most of the features of JetBrains’s DataGrip IDE. There are too many features to cover, but here are three of our favorites:

Feature 1 - Viewing the Data in a Table

This seems pretty mundane, but scanning through data tables can help you visually pick up patterns about your data. The column headers allow you to sort by one or more columns to help you review the data in the table. You can also drag to rearrange the columns to make viewing the data easier for your task.

Feature 2 - Finding Data in a Table

When you need to find a specific string in a table, you can write a query by hand or you could use some of the built-in tools to make finding the string much easier. When you’re viewing a table, you can search all rows and columns by simply pressing Cmd+F or Ctrl+F. A magical search form will appear:

search appearing to help find data in a table

As you start typing, data cells with your search string will be highlighted. You can also check the "Filter rows" box to only show the rows that have your search string in them:

data cells highlighted with the search query typed in

There are also options to search with case-sensitivity or with regular expressions, which can help you find all of the data that you’re looking for.

Feature 3 - Finding Data ANYWHERE!

This is a great tool to use when you know what you're looking for but aren't sure where to find it. You no longer have to dig through the huge haystack of an SQL export text file to look for your needle!

In the database tab, right-click on the drupal8 database and pick "Full-Text Search...":

options shown after you right-click on the drupal8 database

In the new window that opens, you can enter your search term and press Search:

window for full-text search

PHPStorm will open the "Find" tab and show you how many matches were found in the tables of your database:

number of matches found, shown in PHPStorm

Delivering Great Digital Platforms

PHPStorm is a true workhorse of Drupal development. It allows talented people to be more productive in their efforts to create amazing features for Drupal and awesome digital experiences for users. The built-in suite of tools for PHPStorm—especially the database tools—makes this IDE my favorite when it comes to delivering great digital platforms for our clients at Bounteous.

Feb 16 2024
Feb 16

Moving between hosting providers is never an easy task, but it can be done in a way that doesn’t have to be painful. One of our clients recently recognized the value of a hosting provider like Acquia. We were tasked with moving their site from custom AWS hosting to the Acquia Cloud Platform.

Acquia is the only Drupal hosting platform that's built for Drupal developers by Drupal developers. Acquia Cloud Platform is also the only web hosting solution for Drupal designed to scale to meet the demands of enterprise-class business challenges. With Drupal managed hosting from Acquia, you can create, scale, and manage your digital experiences knowing you’re leveraging the best that Drupal has to offer.

Acquia Cloud Platform provides secure and compliant web hosting for Drupal that delivers everything your teams need to build and manage Drupal-based digital experiences, including fully managed Drupal hosting, robust development tools, enterprise-grade security, and world-class support.

When migrating your site to a new platform, we want to ensure we're still following best practices. Many caveats can arise when moving websites between hosting providers. We will discuss a few common ones throughout this article; however, every situation is unique. This means that your migration should be well documented, predictable, and repeatable. You should expect to perform the steps multiple times as these issues are uncovered and resolved. If we follow best practices and develop iteratively, we can prevent problems from making it to our live site.

Codebase

Our first step is to evaluate the codebase and make sure it is following best practices for Drupal development. This includes things like ensuring we are properly using version control, dependency management, and the config system. Most Drupal 8 sites should already be using these basic concepts, but this is a great point to perform some basic checks.

Next, we want to prepare the codebase to take advantage of all of Acquia Cloud Platform's features. At the very least, we will want the Acquia Connector module, which allows our site to send metrics and other data to the Acquia subscription. This gives us access to tools like Insights and also helps Acquia maximize uptime. Another module we want to install is Acquia Purge, for clearing Varnish as well as the Cloud Platform CDN.
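
A minimal sketch of adding those modules with Composer and enabling them with Drush, using the module machine names as published on drupal.org; Acquia Purge builds on the Purge module, so we include it explicitly.

# Add the Acquia Connector, Purge, and Acquia Purge modules to the codebase.
composer require drupal/acquia_connector drupal/purge drupal/acquia_purge

# Enable them on the site.
drush pm:enable acquia_connector purge acquia_purge -y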

Once our code is ready, we need to get it into Acquia's repository so that we can deploy it to our new pre-production environment. This is a great opportunity to evaluate our CI/CD pipeline and make adjustments that align us with best practices. Fortunately, this project was already based on Acquia's Build and Launch Tool (BLT), which gave us a plethora of commands to easily plug into our CI system. Using BLT also meant pushing the code was as simple as changing the git.remotes configuration setting and running the artifact:deploy command.
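
With BLT in place, the deploy itself is a single command once the remote is configured; the repository URL below is a placeholder for your subscription's actual remote, and exact option names vary between BLT versions, so check your project's BLT documentation.

# Point BLT at the Acquia repository by adding the remote to blt/blt.yml, e.g.:
#   git:
#     remotes:
#       - example@svn-1234.prod.hosting.acquia.com:example.git

# Build the deployment artifact and push it to the configured remote.
blt artifact:deploy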

Database

With our codebase in place, it’s time to get the fun started and transfer the database to the new environment. Using our friendly neighborhood Drush CLI Tool, backing up and restoring the database is extremely easy. To use Drush, we need to download aliases that are conveniently provided under the credentials tab within our Acquia account settings. The aliases are simply dropped into the drush/sites directory within our codebase.
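
After dropping the alias file into drush/sites, you can confirm Drush sees the new targets; the alias names below mirror the examples used later in this article.

# List every alias Drush can find, then show details for the Acquia production alias.
drush site:alias
drush site:alias @client.prod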

To create a backup of the database we use the following command:

drush @client_legacy.prod sql:dump \
     --result-file /tmp/client.sql --gzip \
     --structure-tables-list="watchdog,cache_*,search_api_db_*,migrate_message_*"

The result-file option tells Drush to store the file in a consistent place. This helps us maintain the predictability and consistency that are so crucial to our success. We also want to make sure we're passing the gzip flag to compress the resulting backup file. It is important to note that this flag will add .gz to the end of your result-file path, so our resulting backup is actually located at /tmp/client.sql.gz.

The structure-tables-list option tells Drush to skip backing up the data for any tables matching the list. In the case of Drupal, we can safely ignore cache tables as well as any module-specific tables that are generated dynamically or do not need to be preserved. This is extremely helpful in cutting down on database backup sizes.

Once the database backup has been created, we need to transfer it to the Acquia server. There are many ways to accomplish this, and my preference is to use sftp or scp. This is also a good point to take some notes on how long the transfer takes!

sftp [email protected] 
sftp> get /tmp/client.sql.gz
    # /tmp/client.sql.gz. 100%  500MB   4.2MB/s   02:00

sftp [email protected]
sftp> put client.sql.gz
    # /tmp/client.sql.gz. 100%  500MB   2.1MB/s   04:00

Our last step with the database is to import it into the Acquia site. One significant problem with this migration in particular was that the client’s database backup was roughly 2GB when uncompressed. Importing a larger database can present many problems such as the server running out of resources or the ssh connection timing out. Our solution for these issues was to run the import process as a fork and monitor the server until the import finished. To minimize the problem surface, we ran each command in an atomic way—avoiding unix pipes and logic where possible. This made our lives easier as we debugged the issue we encountered.

The commands we ran to import the database were as follows:

drush @client.prod ssh --ssh-options="-o ServerAliveInterval=60" # SSH into the Acquia server
    cd ~ # Go to the default upload location
    rm -f client.sql # Remove any existing unzipped backups
    gunzip client.sql.gz # Unzip our backup
    cd /var/www/html/client.prod # Navigate back to our codebase
    drush sql-drop # Drop any existing tables
    drush sqlc < ~/client.sql & # Import the database as a background job
    free -h; ps -aux; top; # etc... Monitor the server until the import completes

At this point, we should be able to visit our temporary Acquia production URL and see a version of our site without any images.

Files

The next step in the migration is to sync the files, which is easily achievable via the Drush rsync command. However, in the spirit of optimization, we grabbed the rsync command executed by Drush and added a couple of options to make it more performant. This was especially helpful as the client had dozens of gigabytes worth of files.

The rsync command we used to sync the files was as follows:

rsync -e 'ssh ' -akzv --ignore-existing \
    --exclude "styles" --stats --progress \
    /efsmount/client.com/files/  \
[email protected]:/var/www/html/client.prod/docroot/sites/default/files

The ignore-existing flag tells rsync to skip copying files that already exist on the destination, which is helpful if your files tend not to change. We also exclude the styles directory since it can be regenerated dynamically (similar to cache tables).

Test and Launch!

Now that you have your complete site copied over, you can begin testing and validate that the site was properly copied. As issues are uncovered in QA and UAT, you will likely want to recopy the database and files to your Acquia Cloud Platform. Good thing we clearly documented our steps! Client data constantly changes and we want to do our best to ensure the success of our migration.

Once your site is stable and has been thoroughly tested on Acquia, it’s time to launch! Using the timings from our notes we can work with the client to schedule a maintenance period. It’s during this period that we will perform one final migration before cutting over our DNS. On launch day, we review our documentation with the entire team to ensure all members of the team (including the client) are on the same page. 

As the work begins, you should be able to copy and paste all commands that you need to run and easily notify your team as you progress through the steps. Once your migration is complete, all that's left is to flip the DNS and decommission our old servers. Congratulations on your new Acquia Cloud Platform site!

Feb 16 2024
Feb 16

Normally, being asked to build a component-rich website in 10 days might feel like a tall task that requires a superhero effort from all parties involved. But with Acquia’s Site Studio, formerly Cohesion, that’s exactly what we did.

There were no panic attacks and while we might look like superheroes, it didn’t require a superhero effort. Working alongside Marketing and Experience Design (XD), we took the requirements for a component-driven, single-page site and built it in a little over a week with ease.

What We Needed

Here at Bounteous, specifically within the Drupal Practice, we’ve spent a lot of time and effort learning about Site Studio. We completed the Early Adopter Program, we’ve earned certifications, and we’ve even written about it. It was time to put Site Studio to the test and our own Co-Innovation initiative was the perfect candidate for it.

We needed a single landing page site, one that would be a rich, component-driven page that was also elegant, bold, and looked great on any device. We needed a webform that would drive visitors to download our Co-Innovation Manifesto along with all the other behind-the-scenes elements involved with a build. And, it needed to be built in 10 days to coincide with a webinar that was being hosted by our CEO, Keith Schwartz.

Building a site in 10 days should not feel like a big deal, but to do it right, you need Marketing, XD, and Development to come together quickly to provide an actionable plan, provide design direction, and architect it. But, we are always up for a challenge.

How We Did It

So, how did we build a component-rich website in 10 days? The easy answer is that Bounteous is awesome and that’s just how we roll. We’re experts at what we do and there’s no challenge we can’t meet. But a more specific answer is, we used a combination of Drupal, Acquia Site Studio, and UI Kit to complete our project in such a short timeline.

We met with Marketing, where they outlined the requirements, which were to launch a landing page to coincide with a webinar. But how could we pull this off? We were all immediately on the same page: Site Studio. This gave Bounteous and the Drupal Team a great opportunity to finally put Site Studio’s promises to the test.

In addition to using Site Studio, we also suggested Acquia’s UI Kit. UI Kit was designed and built to accelerate the design and development process of a component-driven website. It provided us with the ability to build a Drupal site at scale, fast and efficiently.

Besides saving significant time on the build, another benefit of UI Kit was that marketing was able to view demos of each component, allowing them to quickly and easily select the elements they wanted us to use.

Not only that, but UI Kit provides templates using Sketch, an app that allows for rapid prototyping and collaboration. All we had to do was apply our color palette and typography to keep our brand consistent with our other digital properties. We even made a few structural and functional tweaks with ease to make the site shine. This made our conceive phase fast and efficient, and it set everyone's expectations about how the site would behave once it was assembled and in the browser.

For the build, we quickly spun up a new Drupal site on our Acquia Site Factory instance. We configured our site based on Drupal standards. We installed Site Studio, imported UI Kit, and started building. From there, all that was needed was for us to add our color palette and typography. Next, we took advantage of Site Studio's ability to easily update and adjust components to fit within the Bounteous style guide.

There was no backend coding needed. This led to faster deployment and put the site into the hands of our stakeholders faster than ever before. It was just that easy. Once everything was in our production environment, we added content and published it. All in ten days, with plenty of QA time to spare.

Easy Building & Theming with Acquia’s Site Studio

As we use it more and more, Acquia’s Site Studio continues to be an exciting product; one that lives up to the hype. Site Studio makes the process of building and theming sites from start to finish smooth and easy. I am personally excited to continue to push the boundaries of what can be accomplished with Site Studio and the projects that it will benefit. And as for the Co-Innovation site, we have plans to expand it even further.

Feb 16 2024
Feb 16

Many of the enterprise-grade Drupal sites we build at Bounteous rely on lots of data—much of which is often managed and stored in a third-party provider’s system. While conventional APIs—like those that rely on RESTful services—are common sources for pulling external data into a website, you may encounter some third-party providers who dispatch updates via webhooks. Here's how to work with those notifications in Drupal.

While Drupal 8/9 core provides all of the necessary tools for receiving and processing webhook notifications, the lack of an established API, dedicated plugins, or generic contrib modules can make building a custom solution a bit daunting. In this blog post, we’ll walk through a complete top to bottom implementation that’s able to create, update, and delete (CRUD) entities whenever webhook notifications are received.

Note: To try out our example code, you can skip over the lengthier explanations and simply follow the instructions in the blue boxes. You’ll want to begin by downloading the sample Drupal module we’ve assembled from the Bounteous GitHub account. Clone that repository to your Custom modules folder and enable it to follow along with our example.

What Are Webhooks, Anyway?

If you’ve worked with APIs in the past, you’re probably familiar with the general process: make a request to a third-party service to ask for some data and it responds to let you know what—if anything—is new.

In Drupal, we most commonly rely on a cron task to make periodic requests, tailoring the frequency of those calls to the timeliness of the data that’s being retrieved, the ebb and flow of traffic to the site, or both. Think of that process as being akin to calling a friend every evening to find out what’s happened over the past 24 hours. Some days will be slow and they won’t have any updates to share, while others are full of news; either way, you get a complete rundown of their day in one fell swoop.

Webhook notifications are more like that friend who texts throughout the day whenever something happens. While the frequency, urgency, and length of their messages may vary, you always receive their updates on a rolling basis—and only when there’s something that (they feel) you need to know. Webhooks provide a similarly timely heads-up whenever data is modified in an external system, saving you that daily (API) call.

Use Cases for Webhooks 

All that’s required in Drupal 8/9 is core—there are no contrib modules required! But, there are a few additional prerequisites to cover before we get started.

A Data Provider That Dispatches Webhook Notifications

This piece of the puzzle will be specific to your particular use case. Learning that your third-party service is capable of delivering webhook notifications likely led you to this post; if you’d like to work with that provider’s actual data, you’ll need to review their API documentation and adapt the code in our example module to accommodate the specific data structure in their notifications.

In order to bootstrap a working example, we'll be using the Postman app to post webhook notifications to our Drupal site in this tutorial. Whether you plan to follow along with our example or do local development against actual data, you'll want to download and install Postman on the computer you typically use to write code.

Sample (or Actual) Notification Data

In order to develop a custom solution that can act on a provider's data, you'll need, at a minimum, a sample notification that represents what will ultimately be posted to your Drupal site. Example data is useful for any project since it—in combination with Postman—allows you to trigger notifications without logging into your provider's system and/or modifying any actual data.

Fire up the Postman app on your computer, then follow their guide on importing Postman data to pull in the sample collection (Webhook Entities.postman_collection.json) found in the root directory of your downloaded copy of our example module.

If you’re already up and running with a particular provider and would like to use their data but can’t find an example in their documentation, several online tools can help. One of them, webhook.site, is particularly useful. Simply pull up that site and copy the temporary URL it generates, then log into your provider’s system and paste the temporary address into their webhook notification field. At that point, any valid events in the provider’s system should result in a notification being sent to the temporary webhook.site URL you’d copied—and that will allow you to see all of the data received from each new notification that’s generated.

A (Publicly) Accessible Drupal Site

Your site will ultimately need to be publicly accessible via the Internet in order to listen for any real notifications. While that’s a given for hosted environments (and you can skip the rest of this section if that’s you), most development with modern tools (Acquia Dev Desktop, a Docker container running the Lando D8 recipe, Drupal VM, etc.) is done locally—and therefore effectively offline. While offering solutions for all possible approaches to local development is beyond the scope of this post, two approaches have proven to be the most reliable and quickest to get up and running for us at Bounteous:

Exposing a Local Environment on the Internet via ngrok

ngrok is a tool that allows you to create a secure tunnel to a locally hosted site so that it’s accessible via the web. If your local development workflow requires working with actual notifications dispatched directly from your provider, then this tool might be the way to go.

Let Postman Stand in for Your Webhook Provider

If you can access your local or hosted dev environment from a browser on your computer, Postman can post sample notifications to it. We’ll be relying on this approach below since it’s much more tooling-agnostic and the Postman app is freely available for Windows, Mac, and Linux.

Universally Unique Identifiers

The last prerequisite is a Universally Unique Identifier (UUID) that will be used to permanently associate an individual data point in your third-party provider with a corresponding Drupal entity. This value will be distinct from Drupal’s internal entity IDs and is required in order to look up previously imported records whenever future updates are made. Consequently, every Drupal entity type that will be storing webhook data needs a custom field to store the identifier that accompanies each notification.

Log in to your site and navigate to /admin/structure/types, then Manage Fields for the Basic Page content type and add a new plain text field named Webhook UUID. Ensure the generated machine name is field_webhook_uuid before saving.

While many providers automatically include a unique string that represents a record in their system, others may rely on a specific field. In the rare instance that your notifications don’t contain a dedicated UUID that’s present across all events, you may need to do some additional legwork to concatenate one or more static values into a usable identifier. Check your specific provider’s documentation or use webhook.site to examine notifications and determine which value(s) might be good candidates.

Building the Webhook Entities Module

The sample code you’ve already downloaded has all of the necessary components that are required in order to listen for, receive, and process webhook notifications. It was built to serve as a reusable springboard that can get your own project up and running quickly (in other words, feel free to use our code!). Here’s the overall file and folder structure:

Webhook Entities file and folder structure

We’ll briefly review the key components below.

The Listener Endpoint

webhook-entities.routing.yml

All webhook dispatchers require an endpoint that can receive notifications, so the first step is to define a new route in Drupal.

webhook_entities.listener:
  path: '/webhook-entities/listener'
  defaults:
    _controller: '\Drupal\webhook_entities\Controller\WebhookEntitiesController::listener'
    _title: 'Webhook notification listener'
  requirements:
    _custom_access: '\Drupal\webhook_entities\Controller\WebhookEntitiesController::access'

You might have noticed that the last line in the code from our routing.yml file above looks a bit different—that’s because it enforces a custom access check on the listener endpoint.

Access Tokens

/src/Form/WebhookSettingsForm.php

Security is a critical consideration whenever data makes its way from any external source into Drupal; since webhooks fit that bill, our custom access check validates each incoming notification to ensure it was legitimately dispatched from the actual provider.

In order to facilitate that handshake, our custom module includes a simple form that allows you to specify a secret key that can be used to allow or deny access. The most common security mechanism implemented by webhooks is an Authorization header that’s included in each notification and corresponds to a secret value that only you and your provider know (like an API key).
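
If you’re building your own module rather than using the sample code, the settings form is only a few lines of boilerplate. Here’s a minimal sketch of what it might look like, assuming the config name (webhook_entities.settings) and key (token) referenced elsewhere in this post; the class in the sample module may differ slightly in its details.

<?php

namespace Drupal\webhook_entities\Form;

use Drupal\Core\Form\ConfigFormBase;
use Drupal\Core\Form\FormStateInterface;

/**
 * Stores the shared secret used to authorize incoming webhook notifications.
 */
class WebhookSettingsForm extends ConfigFormBase {

  public function getFormId() {
    return 'webhook_entities_settings_form';
  }

  protected function getEditableConfigNames() {
    return ['webhook_entities.settings'];
  }

  public function buildForm(array $form, FormStateInterface $form_state) {
    $form['token'] = [
      '#type' => 'textfield',
      '#title' => $this->t('Authorization token'),
      '#description' => $this->t('Secret value that must accompany each webhook notification.'),
      '#default_value' => $this->config('webhook_entities.settings')->get('token'),
    ];
    return parent::buildForm($form, $form_state);
  }

  public function submitForm(array &$form, FormStateInterface $form_state) {
    $this->config('webhook_entities.settings')
      ->set('token', $form_state->getValue('token'))
      ->save();
    parent::submitForm($form, $form_state);
  }

}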

Log in to your Drupal site and navigate to /admin/config/webhook_entities/settings. Enter the authorization key used by our sample Postman collection: 123456. Then save the form.

Authorizing Notifications

/src/Controller/WebhookEntitiesController.php

In this simple example, we retrieve the config value saved via the form and compare it to the notification header to validate that the notification is legitimate and should be captured in the Drupal database.

/**
 * Checks access for incoming webhook notifications.
 *
 * @return \Drupal\Core\Access\AccessResultInterface
 *   The access result.
 */
public function access() {
  // Get the access token from the headers.
  $incoming_token = $this->request->headers->get('Authorization');

  // Retrieve the token value stored in config.
  $stored_token = \Drupal::config('webhook_entities.settings')->get('token');

  // Compare the stored token value to the token in each notification.
  // If they match, allow access to the route.
  return AccessResult::allowedIf($incoming_token === $stored_token);
}

Be sure to check your specific provider’s documentation to confirm this is the correct authorization method, since some services implement more robust security measures. For example, one CDN that we worked with required combining a notification-specific signature with a timestamp, hashing that value, and then comparing it to another header value. Clearly they’re a bit more serious about not letting anyone spoof their notifications!
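
If your provider uses a signature scheme like that, the access check grows a bit. The sketch below is a hypothetical illustration rather than code from our sample module: it assumes the provider sends X-Signature and X-Timestamp headers, where the signature is an HMAC-SHA256 hash of the timestamp and raw request body computed with your shared secret. Consult your provider’s documentation for the actual header names and hashing recipe.

public function access() {
  // Raw payload plus the provider-supplied headers (names are hypothetical).
  $payload = $this->request->getContent();
  $timestamp = $this->request->headers->get('X-Timestamp');
  $signature = $this->request->headers->get('X-Signature');

  // Recompute the signature locally using the shared secret stored in config.
  $secret = \Drupal::config('webhook_entities.settings')->get('token');
  $expected = hash_hmac('sha256', $timestamp . '.' . $payload, $secret);

  // Reject stale notifications and compare hashes in constant time.
  $fresh = abs(time() - (int) $timestamp) < 300;
  return AccessResult::allowedIf($fresh && hash_equals($expected, (string) $signature));
}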

Updating the Notification URL

Now that we have a dedicated path that can be used to listen for incoming notifications, we’ll need to instruct the provider to send its webhook notifications to that URL.

Your mileage will vary here, since the actual method for accomplishing this differs from provider to provider—however, in most cases, it’s a simple change that can be made by logging into the control panel associated with your account. To facilitate local development, we’ll make this change in Postman.

With Postman running and our sample collection imported as described above, expand the collection folder named “Webhook Entities” to find three sample requests. One at a time, you’ll need to click on each one and update the POST value found at the top of the Params tab to point to your development environment.

For example, if you access your local development site via your browser at http://mysite.local, you’ll need to update the POST URL in all three of the requests to http://mysite.local/webhook-entities/listener.

Additional Data Concerns

Before completely moving away from the topic of security, it’s worth discussing some additional measures that are often overlooked when processing notifications. While it’s probably unlikely that your provider will intentionally deliver malicious code, it’s possible that a bad actor could gain access to their system and inject something nasty or get ahold of your authorization token and spoof legitimate notifications.

In order to safeguard against those risks, we’ll follow two golden rules of working with someone else’s data:

  • Only keep what you’re actually going to use;
  • Sanitize everything before using it.

Since Drupal typically sees us capturing all user input as it’s entered and sanitizing on output (and Twig’s autoescaping facilitates that to a large extent), we’ll focus primarily on working with only a limited subset of incoming data in the queue worker (below). However, the extra-cautious among us might also consider the addition of a generic service capable of sanitizing individual data points in each webhook notification or escaping HTML entities on markup-rich fields like body text.
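
As a starting point, such a service could lean on utility classes that Drupal core already ships with. The class below is our own suggestion rather than part of the sample module; it simply wraps Html::escape() and Xss::filterAdmin() so that queue workers have a single place to scrub incoming values.

<?php

namespace Drupal\webhook_entities;

use Drupal\Component\Utility\Html;
use Drupal\Component\Utility\Xss;

/**
 * Scrubs individual values pulled out of webhook payloads.
 */
class WebhookSanitizer {

  /**
   * Escapes a plain-text value such as a title or UUID.
   */
  public function plainText($value) {
    return Html::escape(trim((string) $value));
  }

  /**
   * Strips disallowed tags from markup-rich values such as body text.
   */
  public function richText($value) {
    return Xss::filterAdmin((string) $value);
  }

}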

Handling Notifications as They Arrive

/src/Controller/WebhookEntitiesController.php

The controller referenced in our routing.yml file (above) primarily serves as a gatekeeper that receives incoming notifications, determines whether or not to act on them (via the access method), and then shuttles them along to their final destination.

/**
 * Listens for webhook notifications and queues them for processing.
 *
 * @return \Symfony\Component\HttpFoundation\Response
 *   Webhook providers typically expect an HTTP 200 (OK) response.
 */
public function listener() {
  // Prepare the response.
  $response = new Response();
  $response->setContent('Notification received');

  // Capture the contents of the notification (payload).
  $payload = $this->request->getContent();

  // Get the queue implementation.
  $queue = $this->queueFactory->get('webhook_entities_processor');

  // Add the $payload to the queue.
  $queue->createItem($payload);

  // Respond with the success message.
  return $response;
}

For maximum efficiency, we’re not doing anything with the data as it rolls in—but instead handing everything off to Drupal’s queue API for actual processing.

Processing Notification Data

/src/Plugin/QueueWorker/WebhookEntitiesQueue.php

Relying on the queue to process notifications in batches helps prevent your site from becoming overloaded in the event that it’s inundated with an influx of webhook notifications (for example, a bulk update that’s triggered when you upload a CSV file to your third-party provider).

Our custom module tells Drupal to queue notification data for processing later alongside any number of other notifications that might have come before or after it; the queued notifications (or a portion thereof, depending on how full the queue is) are processed during each cron run.

While authorization has already occurred by the time a notification reaches the controller, we perform several additional verifications in the queue worker to ensure the data we’ve received is usable and to speed up processing. We start by checking to ensure the notification body actually contains data and isn’t empty, then further validate that it contains the necessary UUID identified during our preparatory steps above (for simplicity we assume the UUID is a simple value contained within each notification).

Assuming both of those checks pass, we then implement the previously mentioned security tactic of stripping out anything we won’t be using. This step has the added benefit of simplifying the data we’ll be working with later as well as potentially gaining some efficiency by not passing along unused values that might end up being processed unnecessarily.
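
Pared down to its essentials, the start of the queue worker’s processItem() method might look something like the sketch below. This is a simplified approximation rather than the exact code in the sample module: the title and body properties are placeholders for whatever your provider actually sends, the $this->logger and $this->uuidLookup properties are assumed to have been injected via the plugin’s constructor, and our Postman samples carry the UUID inside the JSON payload (if your provider sends it as a header instead, you would need to queue that header alongside the body).

public function processItem($data) {
  // Decode the raw JSON payload queued by the listener controller.
  $payload = json_decode($data);

  // Bail out early if the notification body was empty or unreadable.
  if (empty($payload)) {
    $this->logger->warning('Webhook notification received with an empty or invalid payload.');
    return;
  }

  // Keep only the values we actually intend to use.
  $entity_data = (object) [
    'uuid' => $payload->uuid ?? NULL,
    'event' => $payload->event ?? NULL,
    'title' => $payload->title ?? NULL,
    'body' => $payload->body ?? NULL,
  ];

  // Look up any existing Drupal entity for this UUID via the lookup service
  // described below.
  $existing_entity = $entity_data->uuid
    ? $this->uuidLookup->findEntity($entity_data->uuid)
    : FALSE;

  // ... event handling continues as shown further down.
}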

Remember that all-important UUID you’d identified in your incoming notifications? Here’s where it finally comes into play. Since your third-party provider probably doesn’t know anything about Drupal (most webhook notifications are purposely written to be generic), we’ll need a way to cross-reference the incoming data with any entities that Drupal already knows about.

Since two of our CRUD actions (updating and deleting) will require database queries to find existing nodes—and considering there’s a good chance some of your other custom code will also need to identify those entities—we’ve abstracted this functionality out into a service (/src/WebhookUuidLookup.php) that other components of our Drupal site can leverage in order to more easily work with the entities managed via webhooks.

public function findEntity($uuid) {
  $nodes = $this->entityTypeManager
    ->getStorage('node')
    ->loadByProperties(['field_webhook_uuid' => $uuid]);

  if ($node = reset($nodes)) {
    return $node;
  }

  return FALSE;
}
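
Like any custom service, the lookup class has to be registered in the module’s services file so it can be injected into the queue worker (and anywhere else it’s needed). A minimal registration might look like the following; the service name and the entity type manager argument are assumptions based on the method shown above.

# webhook_entities.services.yml
services:
  webhook_entities.uuid_lookup:
    class: Drupal\webhook_entities\WebhookUuidLookup
    arguments: ['@entity_type_manager']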

The last step is to shuttle each notification on to its final destination according to the action it represents. We’re managing create events a bit differently from the others since they’re the only occasion where we specifically don’t want to have an existing record in the Drupal database. 

  // Handle create events.
  if ($entity_data->event == 'create') {
    // Create a new entity if one doesn't already exist.
    if (!$existing_entity) {
      $this->entityCrud->createEntity($entity_data);
    }
    // Otherwise log a warning.
    else {
      $this->logger->warning('Webhook create notification received for UUID @uuid but corresponding entity @nid already exists', [
        '@uuid' => $entity_data->uuid,
        '@nid' => $existing_entity->id(),
      ]);
    }
  }
  // Handle other modification events.
  else {
    // Ensure a Drupal entity to modify exists.
    if ($existing_entity) {
      switch ($entity_data->event) {
        case 'update':
          // Update an entity by passing it and the changed values to our CRUD worker.
          $this->entityCrud->updateEntity($existing_entity, $entity_data);
          break;

        case 'delete':
          // Call the delete method in our CRUD worker on the entity.
          $this->entityCrud->deleteEntity($existing_entity);
          break;
      }
    }
    // Throw a warning when there is no existing entity to modify.
    else {
      $this->logger->warning('Webhook notification received for UUID @uuid but no corresponding Drupal entity exists', [
        '@uuid' => $entity_data->uuid,
      ]);
    }
  }
}
// Throw a warning if the payload doesn't contain a UUID.
else {
  $this->logger->warning('Webhook notification received but not processed because UUID was missing');
}

Ultimately this is yet another component that will be specific to your provider and data model; the sample notifications in our Postman collection contain an event key, the corresponding value of which indicates which action should be taken when that particular notification is posted to your Drupal site.

Managing Drupal Entities

/src/WebhookCrudManager.php

Now that we have a tool for recalling data that’s already been sent to Drupal, we can build out the logic required to handle each type of event that can be triggered by one of our notifications.

Since our sample Postman collection contains short and simple notification data, all of our example CRUD components have been defined as separate methods within a single service class; however, you might want to consider breaking yours out into separate services, since operations on actual data will almost certainly be more complex.

Rather than diving into the specifics of the CRUD manager service in our sample module, we’ll wrap up our code explanations by pointing out some general observations for best practices worth considering when you modify the examples to your own needs.

Our create() method offloads the handling of incoming notification data to a separate mapFieldData() function, which in turn constructs an array of values corresponding to Drupal field data that are required for creating a node. We’ve taken the approach of only mapping those values that might also be included in other events (such as updates) in order to prime the pump for future code reuse. We also ensure the notification payload contains a title value before creating a new node—since that’s the one value required for the basic page content type.
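
To make that concrete, a stripped-down version of those two methods might look like the sketch below. The bundle (page) and field_webhook_uuid names mirror the Basic Page setup from earlier, while the title and body properties (and the basic_html text format) are assumptions based on our Postman samples, so adapt them to your provider’s payload.

public function createEntity($entity_data) {
  // Only create a node when the one required value (title) is present.
  if (empty($entity_data->title)) {
    return;
  }

  $values = $this->mapFieldData($entity_data);
  $values += [
    'type' => 'page',
    'field_webhook_uuid' => $entity_data->uuid,
  ];

  $node = $this->entityTypeManager->getStorage('node')->create($values);
  $node->save();
}

protected function mapFieldData($entity_data) {
  // Map only the values that may also arrive in later update notifications.
  $values = [];
  if (isset($entity_data->title)) {
    $values['title'] = $entity_data->title;
  }
  if (isset($entity_data->body)) {
    $values['body'] = ['value' => $entity_data->body, 'format' => 'basic_html'];
  }
  return $values;
}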

The update() method implements a series of simple checks to determine which values exist in the notification data—since unlike API calls that often return complete records, webhook notifications typically only contain modified values. This allows us to only act on those fields that have actually changed, rather than updating every value for a given node.

And finally, the delete() method does exactly that. Like the update() method, it receives the complete node entity as an argument—so we’re able to call that entity’s built-in delete() method in order to remove it from Drupal.
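
Sketched out under the same assumptions, the update and delete handlers might look roughly like this:

public function updateEntity($node, $entity_data) {
  // Apply only the fields that were actually included in the notification.
  foreach ($this->mapFieldData($entity_data) as $field => $value) {
    $node->set($field, $value);
  }
  $node->save();
}

public function deleteEntity($node) {
  // Remove the node using the entity's own delete method.
  $node->delete();
}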

Seeing it All in Action

Go back to Postman and post the sample notifications in the same order as the actions listed above, returning to Drupal in-between each post:

Create: after posting the create notification and running Drupal cron, you should find a new node listed on your content overview page. View that node and you’ll see that all of its values correspond to those in the notification data (excluding the one we removed before handing the create notification off to the CRUD worker).

Update: post this update notification, run cron, and reload the Drupal node and you’ll find the title and body fields have been updated.

Delete: Finally, post the delete notification and run cron a third time to remove the sample node from Drupal.

Webhook Processing Done Simply

And there you have it—a simple yet functional example of processing webhook notifications. While this tutorial has touched on all of the key pieces that are required to manage one type of core-provided entity (nodes), you’ll find that your own specific application might warrant additional considerations such as:

  • Locking down (or hiding) any Drupal fields populated from webhook data. This helps to preemptively stave off frustration for content admins since they won’t be able to edit any values that might be programmatically updated via future notifications.
  • Creating additional CRUD managers: Each distinct entity type—particularly any custom ones you might create to store webhook data—will require its own set of field mappings. This is especially true if you aim to manage Media entities as we did for a recent project, since that task also requires parallel management of File entities. Be sure to leverage that UUID!
  • Handling duplicate entries: While our example module simply throws an error—and ignores create operations—whenever a representative node already exists, your use case might warrant a different approach to safeguard against data loss. For example, you might want to instead hand off the incoming notification data to your update method.

Finally, despite how powerful webhooks can be it’s important to give some consideration to what they can’t do:

  • Perhaps most critically, Drupal won’t receive any notices when your provider’s system goes offline and stops dispatching notifications—since your site is passively listening, the running assumption is that no news means nothing has changed. Unless the third-party service that’s broadcasting your notifications is capable of queuing and re-sending notifications, that gap will translate to missed updates that won’t be made to your Drupal entities.
  • In a similar vein, webhook notifications are a one-and-done setup—so if your custom code contains a bug that prevents the changes specified in a payload from being saved, that update is lost forever once the queue believes it’s been processed. Be sure to test your code thoroughly with sample data that are highly representative of the actual notifications you’ll be receiving!
  • Additionally, there’s always the possibility that your provider might modify the data structure of their notifications. Hopefully, they’ll be considerate enough to give you a heads-up if they do so; however, it’s not a bad idea to wrap any functions that parse that data in try/catch statements so that your Drupal logs give you some indication when notifications aren’t being processed.
Feb 16 2024
Feb 16

Hosting web applications presents a lot of challenges. Designing and building a valuable experience for your users is difficult enough; why increase the effort by managing a complicated technical stack? Acquia offers a purpose-built Drupal hosting solution that lets you focus on the most important part–your users.

Three Types of Service Models

Before we examine the benefits of Acquia as a Drupal host, we need to understand what hosting models are available for Drupal. Generally speaking, there are three categories of hosting service models, each offering a different level of sophistication and requiring the technical knowledge to match.¹ Selecting one of these models depends significantly on the project’s requirements. Let’s compare these models and examine the separation of responsibility for managing each aspect of the technical stack.

Management Responsibility Across Hosting Models

                     On-Premises   IaaS   PaaS
  Content                 x          x      x
  Web Application         x          x      x
  Data                    x          x      x
  CDN                     x          x
  Runtime                 x          x
  OS                      x          x
  Networking              x          x
  Virtualization          x
  Servers                 x
  Storage                 x

X = You Manage

On-Premises

On-Premises covers self-hosted and self-managed hardware–no cloud involved. In this model, you manage the entire stack from the bare hardware and networking through the web application and its data. Often this model requires a team of expert technicians, administrators, and developers to manage safely and securely at scale. It provides a high degree of flexibility and customization but requires significant resources to match.

Infrastructure-as-a-Service

Infrastructure-as-a-Service (IaaS) begins to remove some of this complexity by taking over management of the lower levels of the technology stack. In most cases, IaaS services will handle the management of physical hardware, allowing administrators and developers to focus on the software systems required to manage their web applications.

Creating and destroying machines can be done relatively easily, allowing hardware to scale based on traffic or change based on new requirements. However, this still requires a certain level of expertise to keep running services up-to-date with the latest security and bug patches. Usually, the team must have knowledge specific to the IaaS provider in addition to sysadmin level IT skills.

Examples

Google Cloud Platform, AWS, Azure

Platform-as-a-Service

Platform-as-a-Service (PaaS) further removes complexity, allowing developers and site administrators to focus directly on the web application. PaaS providers handle the management of the entire technical stack while the site owner is still managing the web application and its data. This provides an excellent balance of customization in the web application without requiring a large, knowledgeable team to manage infrastructure. 

PaaS providers such as Acquia provide purpose-built solutions for deploying custom, scalable web applications like Drupal. These are carefully tuned environments based on years of experience that may not exist in smaller IT teams.

Examples

Acquia, Google App Engine, AWS Elastic Beanstalk, Azure Marketplace

Selecting a Service Model

Platform-as-a-Service is the Default

For most web applications, a PaaS model will provide a strong value proposition. By providing cloud solutions maintained by experts, they can offer economies of scale that smaller IT teams cannot attain on their own. Expensive hardware no longer needs to be purchased.

In turn, this removes the maintenance overhead for cooling, power, and other support systems. In the PaaS hosting model, software maintenance is handled by the provider, allowing your internal support personnel to focus on other tasks. Any maintenance tasks that remain can usually be run directly by developers or other project team members.

Additionally, the costs associated with a PaaS model like Acquia’s are more spread out, increasing the business' agility in managing costs. By removing the need for hardware purchase and setup, the initial cost is reduced significantly and capital expenditures can be made elsewhere. This also makes the application team more Agile in how it responds to changes and new opportunities by providing additional flexibility in the hosting costs. As needs scale or new opportunities appear, it can be much easier to grow or alter hosting needs.

When to Choose IaaS or On-Premises

There are some circumstances where taking on additional maintenance responsibilities may be required, driving your application toward an IaaS or On-Premises model. Most importantly, legal concerns and other policies may prevent you from selecting a cloud solution. Special privacy concerns might require an on-premises model to maintain strict control over personally identifiable information or other sensitive data. It’s also possible that existing agreements and contracts require the use of a particular service. In these cases, it might be valuable to assess when a switch might be made or if a PaaS service can be worked into existing infrastructure while following applicable policies.

It’s also possible you have a strong technical reason to select another service model. If the application has very specific technical requirements, it may be necessary to host it in an IaaS solution or even On-Prem to allow customization of the stack in ways Acquia doesn’t allow. These would generally be exceptionally unique circumstances driven by heavily customized features or specific networking needs.

Why Acquia?

If you’ve decided that a PaaS solution is right for you, Acquia is a PaaS provider specializing in Drupal. Dries Buytaert, the creator of Drupal, is both the co-founder and CTO of Acquia. Buytaert, along with Jay Batson, founded Acquia to provide infrastructure, support, and services to enterprise organizations using Drupal. In addition, Acquia was created to help Drupal scale, make Drupal easier, and empower a thriving network of Drupalists around the world. Today, roughly one in every 40 websites runs on Drupal.

Acquia gives Drupal development teams access to targeted solutions offering features that smaller IT teams can’t reasonably support. This provides a compelling value proposition, often letting site owners run services that are more complex than their team would otherwise be able to maintain. The technical stack can be more robust, improving value, reducing time-to-market, and reducing costs.

Fully-tuned Stack

Acquia is able to apply an immense amount of time and resources towards carefully tuning its stack to provide optimal hosting for Drupal and related technologies. This lets them provide situation-specific efficiencies and support that are simply not reasonable to expect from self-managed solutions. Acquia has spent many years refining the hosting environments for hundreds of clients. This level of sophistication is not achievable for smaller IT teams.

Improved Access and Support

Sites hosted with Acquia are generally faster and more reliable than sites hosted internally. Acquia operates at a large, global scale and has the networking and storage locations to support such an operation. Some of these technologies that are required at scale are difficult to maintain.

Acquia provides these technologies for teams that would otherwise be unable to support them. For example, Content Delivery Networks and robust caching tools provide fast, global access to your site through local access nodes, reducing load times and improving the user's experience.

In addition to faster access, Acquia also offers additional support for your site. This reduces or removes the need for an on-call rotation of technicians to maintain the site. Acquia hosting comes with a defined Service-Level Agreement (SLA) setting contractual obligations for reliability of the site. In other words, Acquia takes on the burden of maintaining the servers 24 hours a day.

Additionally, Acquia provides added reliability features and tools such as New Relic, recording important diagnostic information for problems on the site. Features like these can drastically improve the user's experience with your brand without placing a heavy burden on support teams.

More Robust Security and Recovery

Along with the added support features, sites hosted with Acquia are more secure and better equipped to recover from incidents. Because the technology stack is managed by Acquia support professionals, security patches and bug fixes are applied regularly, and the stack supporting the application is constantly monitored.

Acquia also offers edge protection solutions, defending against Denial of Service and other HTTP attacks. Acquia will even support Drupal in some situations. For example, Acquia has provided additional protections on occasions when vulnerabilities in Drupal have been found. These sorts of specialized benefits offer great protection for your users and your business.

In the event an incident does occur and recovery is necessary, it is easy to restore the application to a prior state. Data backups are taken daily, and databases can be quickly and easily rolled back through a simple, drag-and-drop admin user interface (UI). The same UI can be used to roll back code to match. These features let development teams react very quickly to incidents and get the application running again.

Delivery Tooling

Because Acquia already manages the technical stack beneath the application, it also handles delivery and deployments, which makes deployments much easier. Acquia’s simple drag-and-drop interface makes it easy to move code across environments. Acquia’s Cloud Hooks and Pipelines features provide a complete Continuous Integration/Continuous Deployment solution out of the box. These pipelines are tailor-made for Acquia and Drupal and are generally much easier to set up, drastically reducing time-to-market for new features.

The Benefits of Acquia

Each application and team has unique needs that will naturally push a project towards a given hosting model. For exceptionally complex or custom applications, an IaaS or On-Premises solution may be required. However, these models lose the benefits Acquia provides. For most Drupal projects, the additional security, support, and tooling, as well as performance improvements, make Acquia the right choice.

¹ Commonly, a fourth category called Software-as-a-Service is included, but this model doesn’t fit Drupal’s customizability well and has been intentionally excluded from this article.

Feb 16 2024
Feb 16

One of the best parts about working for a digital experience agency is the number and variety of projects we get the opportunity to work on. And while the size and complexity of the digital experience platform projects we work on differ, they’ve offered us the opportunity to learn and discover best practices that others can use to help drive the success of their own projects.

Though the type of client work we take on can vary greatly, some frequent projects we’ve been tasked with are clients looking to switch content management systems (CMS) and clients looking to build multiple websites. From this, we’ve discovered the best way to ensure success involves two key factors: having the correct mindset and the correct approach. 

It’s a Replatform—Not a Lift and Shift

A key part of any digital experience platform (DXP) is a CMS. The CMS serves as a hub to centrally manage content. Over the past several years as clients have built out their DXP, we have seen more and more of them looking to move off of one CMS and onto another. 

Many times, the move is between two different CMS options (e.g. Sitecore to Drupal). Other times, it can be moving from one major version to the next major version of the same CMS (e.g. Drupal 8 to Drupal 9). In both these cases, it’s best to think about the project as a replatform or a rehost, and not as a lift and shift.

The term “lift and shift” can make the project seem very easy. We already have a website over here. We just need to move it over there. That shouldn’t be too difficult. Not only does the term obscure the project’s complexity, but it also misses one of the most important advantages to a project like this. When moving to a new CMS, it’s the perfect time to reassess the goals and requirements.

Discovery Phase

When a project like this comes up, an upfront Discovery phase is key for a successful replatform. The Discovery phase helps the team understand the requirements and learn what works in the current system and where improvements can be made in the new platform.

A key component of the Discovery phase is to perform stakeholder interviews to find out what is and is not working in the current system. If you just “move” everything as is to a new platform, you’re bound to repeat the mistakes and shortcomings of the current system.

We aren’t just concerned with the current system’s mistakes. If the current system has been in use for a number of years, the system’s goals may have changed since it was built. If you are investing in a new platform, you do not want to solve yesterday’s problems.

Platform Audit

In addition to the stakeholder interviews, a full audit of the current platform is also key. Even though the goal is not to recreate the current platform in the new CMS, the architect can learn a great deal from the current platform.

Part of the audit should focus on the custom code that has been written. Often, custom code will contain business logic that is needed in the new platform. Another important part of the audit is understanding how the current platform is used and any workflows that have been created for it. The better the architect understands the current system, the better they can plan for the building of the new system.

Understand the CMS Features and Functionality

One last key point is that the new CMS will have different features and functionality than the current CMS. When moving to the new CMS, you will want to change how the current system is built to take advantage of the strengths of the new CMS. Trying to make the new CMS work exactly like the old CMS will result in a lot of frustration and a poorly-built platform.

Build an Ecosystem, Not a Series of Websites

Whenever you need a system that will support multiple websites, it’s important to approach it as an ecosystem and not just a number of individual websites. Building an ecosystem can be, and often is, a challenge. But done correctly, building an ecosystem results in an easier-to-use and easier-to-maintain system that takes advantage of the CMS.

Building an ecosystem allows you to take advantage of the economies of scale. One way to realize that is to build all of the websites with the same code base. This lets you update the CMS and modules as needed in one place, saving time and resources.

But, you can extend this further. If your platform is built with a component-based approach and you build all the websites using a common set of components, the builds will take less time, as will future updates.

By building a custom theme for each website, but using the same components, you can create different looks to cater to your specific brands. Or, for even more scale, you can build out a common theme to use for all websites and just change colors, fonts, etc. By leveraging the same functionality and components across websites, you can make the platform much easier to maintain and use.

Component Consolidation

One of the main challenges with building an ecosystem versus a series of websites is that doing so requires compromises from the website owners. It is not uncommon for a client with 10 websites to have hundreds of components and dozens of page templates among the websites.

However, when building the new ecosystem, you should consolidate the components and page templates to reduce the number needed. Without consolidation, the build will cost more and take longer than needed and result in a harder-to-use and harder-to-maintain platform.

This consolidation will require the stakeholders to make compromises as it is not possible to rebuild all of the websites exactly the same with fewer components and templates.

A well-built ecosystem lends itself to being easier to build, use, and maintain. This reduces the total cost of ownership and makes it a better choice than building highly-customized, individual sites.

Flexibility is Key

A new DXP is a large undertaking. Today’s consumers expect a much more personalized and seamless experience across channels. The CMS is a critical piece to providing that flexibility.

One way to provide flexibility in the CMS is by using a component-based approach for the content editors to create content. A component-based approach allows content editors to build pages using a series of components within the CMS rather than having a structured format to the page.

This allows flexibility to build pages tuned to the exact message they are trying to send. When done correctly, it can also speed up the content building process by eliminating the need to have developers involved in the creating and publishing of content.

Component-Based Approach

Component-based approaches are much more common these days, but they’re not always executed well. Having someone experienced with this type of approach is vital to the success of the project.

From a design perspective, striking the correct balance between the number of components and the number of component settings is essential to creating an easy-to-use content editor experience.

From a technical perspective, there are usually a number of ways to execute a component-based approach and pros and cons to each. For example, in Drupal, we can use the Layout Builder module as the foundation for our component-based approach, and it works very well.

As an alternative, we can also use a site-building tool like Acquia Cohesion to execute the component-based approach. Both are solid options with pros and cons depending on the requirements.

A CMS that Provides Data

Another way to provide flexibility is by having the CMS be able to provide data to all of your platforms. Using your CMS as a centralized content source allows each channel to use the content as needed.

Drupal is an example of a CMS that excels in this area. Drupal was built with an API-first mentality, meaning that exposing content using APIs is baked into its fabric. Drupal has several modules that make exposing your content as REST APIs services very easy. Drupal also makes it easy to return that data in a variety of formats, such as JSON and GraphQL, as needed by the system consuming the data.

Mind Over Matter

No two projects are alike. However, your next project can benefit from what we have seen and learned from our projects here at Bounteous. The best way to be successful is to have the correct mindset (“This is a replatform, not a lift-and-shift”) and the correct approach (“Build an ecosystem, not a series of websites”) while focusing on creating a flexible system.

Feb 16 2024
Feb 16

Drupal is one of the leading Content Management Systems (CMS) and one of the largest open source communities in the world, with more than 1 million passionate developers, designers, trainers, strategists, coordinators, editors, and sponsors.

As an open source community, organizations and individuals have many ways they can support the community, contribute thought leadership, and advocate for the principles of open source technology. Bounteous and team members have been Drupal users, contributors, and creators for more than a decade and have found several ways to help contribute to Drupal’s continuous innovation. 

As advocates of open source software, we also strongly believe in the guiding principles of the Drupal community: collaboration, education, and innovation. Below, we’ve outlined some ways that companies and organizations can contribute to the health and success of Drupal—from hosting community events and volunteering to sponsoring Drupal conferences. We wouldn’t be where we are today without the support of this community and we’re honored to give back in as many ways as we can.

Conferences & Volunteering

One way to contribute to the Drupal community is through attending, volunteering, and speaking at Drupal conferences. With local conferences or “camps” around the world, it’s a great way to share knowledge and expertise with the Drupal Community.

Drupal team members at Bounteous have spoken around the world, at conferences such as DrupalCon North America, DrupalCon Europe, Design4Drupal, Florida DrupalCamp, and MidCamp. We’re also annual sponsors of the Drupal Association and DrupalCon North America, as well as MidCamp.  

Events help spread ideas and sharing ideas can benefit everyone. While most Drupal Camps are regional events that happen in-person, we feel strongly about enabling access to thought leadership and helping to educate all those in the community. We’re especially excited about the Drupal Recording Initiative that helps capture Drupal conference presentations and publicly share recordings with the larger Drupal community. If you haven’t taken advantage of these recorded sessions, check out Drupal.TV for many recorded events. 

This year’s conference schedule has seen radical shifts, with regional events like MidCamp as well as the national DrupalCon conference shifting to entirely virtual events. While this has certainly been disruptive and changes the experience for attendees, we’re optimistic about the ease of access to information that these virtual events enable, providing extensive opportunities for those who wouldn't be able to attend in person. We’re also excited about new ways to connect and share educational resources using digital formats to their fullest potential.

Conferences, whether online or in-person, always require volunteers and are a great way to connect with other Drupalists. In 2020, a few members of the Bounteous team helped organize MidCamp, making it the first conference in the space to go completely virtual and at the same time meet accessibility needs of all attendees, ensuring a quality experience for everyone.

Open Source Contributions and Insights

Drupal wouldn’t exist without its diverse group of passionate volunteers to move the project forward. With such an inclusive and supportive community, it’s no wonder how such a strong content management system has been sustained for so long and why people keep coming back.

In addition to the events and meet-ups that we host, we also encourage our team to contribute code to the Drupal project, building time into our roles to account for contributions of all kinds. Some team members provide code patches to Drupal core, while others have created and currently maintain Drupal contributed modules and themes.  

It’s not just developers who contribute to Drupal. There’s always a need for project managers, bug reporters, QA testers, people to help write documentation, and so much more. At Bounteous, this means getting more people involved than just the Drupal team and using mentors to introduce people to open source concepts and offer ways to make an impact. 

Interested in learning more about Drupal migration paths, embedding external JavaScript, or automating Drupal deployment? We, like many others, frequently post about Drupal on the Insights section of our site, featuring Drupal tutorials and resources, and covering the latest advancements in the industry. You’ll find a wealth of Drupal information online, created and shared freely. If you haven’t subscribed to the Weekly Drop, sign up to receive frequent updates and regular posts.

Drupal and the Drupal Association

Behind the scenes, the Drupal Association is responsible for being the caretaker of the Drupal open source project by managing drupal.org, coordinating the promotion of Drupal, facilitating the Drupal Security Team, and many other activities which ensure that the Drupal project remains strong.

Many may know of the Drupal Association through their annual DrupalCon conferences, which is a huge part of raising both awareness and funds to support Drupal and the Drupal Association. These international educational events are jam-packed with curated panels, sessions, keynotes, trainings, and contribution days. Both sponsor and community-supported, these events attract Drupalists from all over the world, with an emphasis on education and collaboration, with sessions and speakers selected from hundreds of submitted topics.

In addition to the yearly conferences, individuals and organizations can become Drupal Association members. The funds contributed go directly to supporting the efforts of the Drupal Association and promoting the open source Drupal community to the world.

Companies can choose from three partner programs: Supporting Partner, Hosting Supporter, or Technology Supporter. Whether you work at a Drupal shop or use Drupal for personal projects, consider joining the Drupal Individual Membership program which comes with a host of benefits.

Sustaining the Drupal Association | COVID-19 Impact

As a company, we are committed to supporting #DrupalCares, which is raising money for the Drupal Association in this time of uncertainty, and continuing to fund the Drupal community—from contributing to Drupal software and volunteering within the open source community to dedicating resources to ensure growth and financial success. 

In this time of need, Bounteous has pledged its 2020 sponsorship of DrupalCon North America to support DrupalCon Global (July 14 - July 17), and we look forward to working with the Drupal Association to make the virtual event a huge success.

We encourage everyone in the Drupal community to come together and find creative ways to help the Drupal Association and each other. 

  • Consider making a donation to the Drupal Association.
  • Other DrupalCon sponsors can consider this year's sponsorship as a contribution and pledge their support to DrupalCon Global.
  • Individuals can consider becoming a member, increasing their membership level, or submitting an additional donation.

Together, 28 individual @Bounteous employees are donating $2,575 to #DrupalCares. We bundled our donations to receive a match from the Bounteous Charitable Fund. Our donation, its match from Bounteous, and @Dries & Vanessa Buytaert brings an impact of $7,725 for @drupalassoc!

— Scott W (@NodeLoophole) April 17, 2020

Feb 16 2024
Feb 16

In a recent internal discussion, the topic of incorporating an external JavaScript dependency into a Drupal project came up. To the surprise of many, we found that this common task still inspired quite a bit of discussion. How could that possibly be? As is often the case in the world of Drupal, there are a number of valid approaches with subtle but relevant differences. The approach to add a JavaScript library can change if you view the task as front end versus back end.

Let's explore the different ways you can include a third-party JavaScript library into your next build — either as a dependency of your theme, custom module, or overall project.

Assumptions

Before we dive into specifics, let’s first talk about a few assumptions that we had as a team.

First off, all of these approaches assume that our preference is to have JavaScript as a local dependency rather than externally hosted. This allows greater control over items like aggregation and would also allow all of the assets in your project to be hosted on the same Content Delivery Network (CDN).

Secondly, our preference is to use package managers like Composer and NPM to easily import external dependencies. Given how common Composer is on Drupal projects and NPM is on front end projects, this isn’t a very controversial assumption, but it does dictate a number of things related to the approach.

JavaScript That is a Dependency of Your Theme

If your JavaScript is a dependency of your theme, the specifics will likely vary a bit based on your front-end tooling and workflow. But at the highest level, you’ll be adding your dependency using your preferred package manager. We typically use NPM on projects, but if you’re using Yarn or another tool the process will be similar. If for example, your dependency was the Inline SVG package, you’d run the following in the same location as your existing package.json file:

npm install inline-svg

In this case, Inline SVG is a production dependency, so be sure not to use the --save-dev option when installing in order to ensure that this dependency will be available for production builds.

Now that your dependency is available, how you incorporate it into your project will vary based on your workflow. We’re increasingly using Webpack as part of our front end workflow. If you’re doing the same, you’d most likely want to import this dependency within your JavaScript so that this can be part of your main bundle or the appropriate code split portion of your bundle. Those bundles are most likely already incorporated into libraries that are part of your theme. The import statement would look something like this:

import inlineSVG from "inline-svg";

If you’re not using a bundler and are instead using something like Gulp (or the less likely scenario of no task runner at all), you’ll likely want to create libraries for these dependencies. The tricky part here can be making the necessary files easily available to your theme. When faced with this problem in the past, I’ve found the vendor-copy utility helpful. Vendor copy allows you to copy client-side dependencies to the folder of your choosing.
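
If you go the vendor-copy route (installed with npm install --save-dev vendor-copy), the configuration lives in your theme’s package.json: a postinstall script runs the utility, and a vendorCopy array lists which files to copy where. The package and paths below are placeholders, and the configuration keys are based on the vendor-copy README, so double-check them against the version you install.

{
  "scripts": {
    "postinstall": "vendor-copy"
  },
  "vendorCopy": [
    {
      "from": "node_modules/some-package/dist/some-package.min.js",
      "to": "js/vendor/some-package.min.js"
    }
  ]
}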

Once the files are somewhere that your theme can access, you can follow the standard approach to including JavaScript assets in a library in your theme. 
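
If you need a refresher, the library definition in your theme’s *.libraries.yml file that points at the copied file might look something like this sketch (the library name and path are placeholders matching the example above):

some-package:
  js:
    js/vendor/some-package.min.js: { minified: true }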

JavaScript that is a Dependency of A Project

If your JavaScript is a dependency of your overall Drupal project, the approach will be a little bit different. Since you’re managing PHP dependencies with Composer, it would be ideal to manage your JavaScript dependencies with Composer as well. Thankfully, Asset Packagist allows you to do exactly that. If you’re using the new recommended project composer template available starting in Drupal Core 8.8, you’ll need to make a few adjustments to your composer configuration as outlined in Drupal’s Composer documentation.

Add the Composer Installers Extender PHP package to your project's root composer.json file, by running the following command:

composer require oomphinc/composer-installers-extender

Add Asset Packagist to the "repositories" section of your project's root composer.json.

(Note: the screenshots below illustrate the difference you would see after making these changes to the default composer.json created by drupal/recommended-project)

Screenshot of the default composer.json before the changes

Ensure that NPM and Bower assets are registered as new "installer-types" and, in addition to type:drupal-library, they are registered in "installer-paths" to be installed into Drupal's /libraries folder.

Screenshot of the default composer.json after the changes

You may now require libraries from NPM or Bower via Composer on the command line by running something like:

composer require npm-asset/slick-carousel

With the settings above, you’ll end up with a slick-carousel folder in your web/libraries directory containing the assets for the slick-carousel NPM package. Since you now can predict the location of these assets, you can create libraries in Drupal that can be used to load your JavaScript dependencies.
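
A corresponding library definition in a module or theme *.libraries.yml file could then reference those copied assets using paths relative to the Drupal web root. The file names inside the slick-carousel package may differ from this sketch, so verify them in web/libraries/slick-carousel before relying on it:

slick:
  js:
    /libraries/slick-carousel/slick/slick.min.js: { minified: true }
  css:
    component:
      /libraries/slick-carousel/slick/slick.css: {}
  dependencies:
    - core/jquery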

If your dependency doesn’t exist on Asset Packagist, there are still a few ways that you can configure Composer to manage your dependency. If the dependency has a repository that contains a composer.json file (or if you could fork the dependency and add one), then you should be able to load the package as a VCS repository using Composer. If the dependency can’t include a composer.json file, you may be able to adapt this recipe on managing CKEditor plugins with Composer to meet your needs.
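
For the VCS approach, that usually amounts to adding an extra entry to the repositories section of your root composer.json and then requiring the package by whatever name its composer.json declares. The repository URL and package name below are placeholders:

{
  "repositories": [
    {
      "type": "vcs",
      "url": "https://github.com/example/some-js-library"
    }
  ]
}

After that, a composer require example/some-js-library (using the name declared in the dependency’s composer.json) should pull it in like any other package.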

JavaScript that is a Dependency of A Custom Module

It is also possible to add your dependency to a composer.json file included within a custom module rather than the root level composer.json file. This has the advantage of allowing you to fully self-encapsulate your module, including the necessary JavaScript dependencies. If you’re publishing your module and using it across multiple projects, this is the ideal approach.

The story is a little different if the custom module is specific to a single project. Current best practices require manually adding each custom module to the repositories section of your project’s root composer.json file, which can be tedious if you have a number of custom modules. As a result, we’ve found that we are more likely to add these dependencies at the level of the Drupal project. With Drupal continuing to evolve its Composer support, it will hopefully become easier to inherit dependencies from custom modules and less necessary to individually add dependencies at the project root level.

If All Else Fails…

If none of the package manager based solutions are practical for your use case, you still have a few other options. Let's say there is an externally-hosted version of the dependency on a CDN. You can add this as an external library in your module or theme’s libraries.yml file like this:

angular.angularjs:
  js:
    https://ajax.googleapis.com/ajax/libs/angularjs/1.4.4/angular.min.js: { type: external, minified: true }

Finally, if an externally hosted version of the library isn’t available, you could also take the brute force method. Download the dependency, then package it with your module or theme. This makes it a little more difficult for the dependency to be quickly updated, but at least it gets the job done.

Common Gotchas

While it is great to have the option to add JavaScript dependencies via Composer with something like Asset Packagist, this approach does present some possible challenges. Since you’re now potentially updating JavaScript dependencies along with PHP dependencies, you’ll need to make sure that this is correctly coordinated across the team so there are no surprises on the front end. If, for example, your work depends on a specific version of Slick Slider, you’ll need to make sure to pin the specific version of this library in your composer.json. You may also need to adjust your workflow to ensure that your front-end developers have the opportunity to control or at least review any updates to these dependencies.
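
A pinned requirement in composer.json might look like the line below; substitute whichever version your front end has actually been tested against:

"npm-asset/slick-carousel": "1.8.1"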

For cases where you’re managing dependencies of your theme using NPM, it is also important to consider if these dependencies are specific to a single theme, or shared by multiple themes. If they are shared by multiple themes, you may want to consider making some of your JavaScript libraries a dependency of your base theme. This allows you to standardize on a particular version of a dependency and more easily update libraries across multiple themes. On the flip side, you can manage different versions at the individual theme level to give you more control. However, this will add complexity to your theme inheritance and require more effort to manage.

Happy JavaScripting!

The front-end landscape continues to grow and evolve faster than ever before. Every day there are more and more tools in the JavaScript ecosystem that can help us build amazing user experiences with less effort. Hopefully, better understanding the approaches to add external JavaScript dependencies to your project will make it easier to take advantage of these tools and build amazing front end experiences using Drupal. We can’t wait to see what you build.

Feb 16 2024
Feb 16

Welcome to the world of virtual conferences! While there have always been entirely-digital conferences, this year will mark a shift as many organizations are forced to consider converting their previously planned in-person events and conferences to a digital format. For both organizers and attendees, these will be new experiences with different challenges.

As conferences scramble to digitize their offerings, logistics will likely take a front seat in order to make sure the events run seamlessly and any technical challenges with registration and execution are addressed. This is not, however, a time to forget about accessibility and the conference experience. Instead, take this opportunity to consider all of the participants and the unique challenges that many may face as they log on and attend your virtual event.

Bounteous + MidCamp

For the past seven years, Bounteous has contributed to the Midwest Drupal Camp through sponsorship, volunteer organizers, speakers, trainers, contributors, and more. Drupal Camps are often regional events where members of the Drupal community convene to share knowledge of Drupal. With camps around the world, each is different and presents great ways to talk about new technologies, best practices, complementary technologies, self-development, and more.

Midwest Drupal Camp (MidCamp) 2020 was the seventh annual Chicago-area (turned virtual) event that brings together designers, developers, users, and evaluators of the open source Drupal content management software. Attendees come for three days of presentations, contribution sprints, and socials while brushing shoulders with Drupal service providers, hosting vendors, and other members of the broader web development community.

This year, Bounteous was excited for the best MidCamp yet, with several of our teammates on the organizing committees and a full roster of engaging speakers, including many traveling in from out of town. But due to the worldwide COVID-19 response and concerns for the health of our community, the organizers of MidCamp had to reconsider the implications of hosting the event. In one week, they agreed on the difficult decision to change MidCamp from an in-person event to a 100 percent virtual event.

What followed was a whirlwind week with a small, yet mighty, group helping to coordinate and communicate changes for one of the first fully virtual conferences in Spring 2020. As many tackled how to logistically pull off a virtual conference, the committee that I served on, the accessibility committee, tackled a different set of challenges. How could we ensure that everyone — everyone — would have a consistent and thoughtful experience during and after the conference? While we had been planning MidCamp’s accessibility goals for months, this opened up new questions and new opportunities for us to create an inclusive experience.

As other conferences consider “going virtual,” we’ve highlighted key areas that we focused on as a roadmap and checklist for your upcoming events. When possible, we’d encourage everyone to take advantage of all that digital has to offer, from new tools and features to distributed volunteers, to create amazing and authentic live experiences.

An Accessibility Case Study: 2020 MidCamp Goes Virtual

The accessibility committee needed to quickly evaluate how this change to virtual attendance would impact our participants. The following checklist was used to ensure the event was accessible for everyone before, during, and after the event.

Virtual Event Accessibility Checklist

  1. Ensure the registration website conforms to Web Content Accessibility Guidelines (WCAG).
  2. Ask participants upon registration to identify any additional accommodations they need to fully participate in the event.
  3. Review email communications and Twitter posts for accessibility.
  4. Share presentation slides before the event.
  5. Set expectations with speakers and participants before the event.
  6. Provide live captions during the event.
  7. Offer live support during the event.
  8. Provide transcripts on the website after the event.
  9. Provide captions for recordings after the event.

Before the Event

Registration Website 

MidCamp performed an accessibility audit to ensure users were able to sign up for the event and access crucial information on our site. For registration, we needed to make color contrast adjustments to our registration form. Our online ticketing vendor, Tito, was outstanding at supporting our accessibility needs.

There are many accessibility checker tools you can use to run an accessibility audit for your site. However, these tools can never replace human review to identify potential issues and elements that may be confusing or misleading. We suggest you use this WCAG checklist in addition to any accessibility checker tool. We used both manual checking and the SiteImprove browser extension.

In addition, on the MidCamp registration form, we asked participants to let us know if they needed any additional accommodations. The accessibility team reviewed each request and reached out with a one-on-one message. Note, however, that MidCamp did not require registration once we shifted the event from in-person to virtual, so some potential accommodation requests may not have reached us.

Each message to participants began with the following communication:

“Hello!

I'm on the accessibility team for MidCamp (along with my other team members cc'd on this email). You indicated on your registration that you have an accessibility request. Please let us know your needs and we will do our best to provide accommodations.

Be assured that we take your request seriously and strive to have the most inclusive environment for everyone. Our aim is to listen and advocate for you.

Thanks again for registering for MidCamp. We're excited to have you there!”

Email Communications and Twitter

We reviewed our email templates and any communication up to and during the event for potential accessibility issues. This included making our Tweets more accessible. Important information and essential calls to action needed to be clear and actionable for everyone who would attend MidCamp.

screenshot of Twitter Accessibility Settings

Presentation Templates

MidCamp provides slide templates in a wide variety of formats that have been evaluated for accessibility. The templates have been reviewed to ensure proper color contrast, font size, and slide transitions. These are provided on the Speaker Resources page.

image of the first slide of the MidCamp Presentation Template

Consider the differences between a live event and an online event. Instead of a separate captioning monitor, speakers may have captions overlaid on their slides depending on the platform and technology used. Communicate this information to speakers in advance, and review early submissions. Ensure that all attendees will receive a quality experience regardless of whether captioning is turned on or off.

Share Presentation Slides

Speakers were encouraged to upload and share their presentations ahead of the event on the MidCamp website. We let speakers know that they had the ability to upload, and we also provided the following explanation of why this is helpful. Sharing ahead of the event allows participants to:

  • Download the slides and increase the size of graphics that may appear small during the screen share;
  • Set up the use of a screen reader;
  • Prepare in advance of the session so they can follow along if they are not native English speakers;
  • Understand the information at their own pace.

Set Expectations for Speakers

MidCamp held three separate training sessions prior to the event to answer speakers’ questions and review the logistics so speakers felt comfortable and prepared. This allowed time both to make accessibility accommodations for speakers and to let speakers know our team’s expectations for putting on an accessible event.

After reviewing the technical logistics, we also provided speaking tips such as:

  • Speak clearly. Avoid speaking too fast so participants and live captioners can better understand you and keep up.
  • Use plain language. Make all information as clear as possible. New vocabulary/techy words are nice to have written on slides, especially for those in the audience who are not native English speakers.
  • Describe pertinent parts of graphics, videos, and other visuals. Describe them to the extent needed to understand the presentation.
  • All sessions professionally captioned. MidCamp was able to provide live captioning through the Zoom interface by working with event sponsors.
  • Transcripts available later. Transcripts were created from the live captioning and added to session pages on MidCamp.org after the event.
  • Take advantage of the digital tools. Presenting online offers a number of new features and tools to measure audience engagement and participation, like chat and non-verbal responses. Encouraging participation would help replace the in-person cues.

Set Expectations for Participants

MidCamp notified participants that we were going to use the application Zoom to host the event and use the application Slack for communication during the event. This allowed participants time prior to the event to download the necessary applications, learn more about each tool, create accounts, and join the event sessions as well as participate in discussions.

MidCamp modeled our Zoom set up on a physical conference environment. The Zoom invitation links were defined as rooms, with one link for each room. Participants were able to visit the site schedule, click on a Zoom invite link, and join a session at the selected time. All times were listed in Central Daylight Time to lessen confusion for participants across time zones.

MidCamp Slack was used for communications and discussions during the event. Speakers encouraged using the chat functionality in Zoom for questions and comments to the speaker during the actual session. A Zoom room monitor facilitated questions and answers on behalf of the speaker. The Zoom room monitor also notified participants of MidCamp’s Code of Conduct, which was modified to accommodate the virtual format of the event.

During the Event

Provide Live Captions

MidCamp provided live captioning during all sessions. With the financial support of sponsors, we were able to use Alternative Communication Services (ACS) for remote captioning. We used ACS's remote CART* service to provide an instant, real-time transcript accessible to all viewers during the event.

screen grab of slide from a midcamp presentation using live captioning

We had one captioner from ACS in each of our five virtual rooms at our event. Note that captioners are real people, not an automated speech-to-text captioning tool.

Zoom makes it easy to provide captions. We were able to assign the captioner permission to type Closed Captions for each session. Participants were able to turn captioning on or off by clicking the “CC Closed Captions” button in the Zoom interface.

*CART stands for Communication Access Real-time Translation

Offer Live Support

MidCamp ensured that organizers of the event were available to help participants. This included the following activities:

  • Directing participants to the correct Zoom link.
  • Checking into each Zoom virtual room prior to the session to ensure there were no technical issues.
  • Ensuring live captioning was functioning properly for all virtual rooms.
  • Monitoring for any Code of Conduct violations.

After the Event

Add Transcripts To Your Site

ACS provided transcripts for each session, which were then added to each individual session on the MidCamp website. View an example of a transcript on the MidCamp site by scrolling to the bottom of the Live Captioning presentation page. Adding the transcripts helps us reach an even wider audience by providing SEO (Search Engine Optimization) value for our site.

screen grab of MidCamp Transcript Example

Adding transcripts to session pages of the website made each page very long. We made an adjustment to the theme styling to contain the text for the transcript within an element that scrolls. Note that this update to the MidCamp site is keyboard accessible, which allows more users to navigate properly to read the transcript.

Provide Captions for Recordings

Thanks to Bounteous’ sponsorship, MidCamp recorded each session and made them publicly available on YouTube and Drupal.tv. While YouTube provides automatic captioning upon upload, the speech recognition technology is not always accurate. For example, “MidCamp” would be auto-captioned as “mid kant” in videos.

screen grab of midcamp welcome presentation on YouTube using captions in SRT format to provide the correct language

For that reason, we had ACS provide the transcripts in SRT format so we could upload them to YouTube and improve the accuracy of the captions in the videos. View this support article to learn how you can upload your own subtitles and closed captions. Note that this was an additional cost and service provided by ACS in addition to the remote CART service.

A True Virtual Success

Together, our team of technologists and accessibility experts helped pull off one heck of a turnaround, with the efforts of many succeeding in delivering a fantastic experience. Our volunteers, speakers, captioners, and attendees were engaging and responsive, helping to make our virtual event a true success.

“Thanks for bringing the community together despite the adversity you were dealt. It was a very positive action and outcome during these wild times. I know it wasn’t easy, so thank you for your hard work and dedication.”

Interested in more about how MidCamp went from an in-person event to 100 percent virtual? Check out the following podcasts:

Feb 16 2024
Feb 16

Previously we shared scripts for automating Drupal deployment and rollback. Those scripts serve as a starting point, but every project has unique constraints and challenges that merit making customizations. And, there are a myriad of problems you can run into in practice. Inevitably, every project will be deployed or rolled back in a unique way, especially for enterprise applications.

Since automated deployment and rollback are typically set up once with minor changes over time, you may find yourself never having a chance to see how others have solved similar problems. Wouldn’t it be encouraging if the solutions you’ve come up with were the same ones others are using too? Or, what if there was a new idea that could improve your process? You’ve come to the right place.

Having set up automated deployments for a variety of Drupal projects, I’ve noticed a few patterns that are worth sharing. This article is written in a "recipe" fashion, so you are encouraged to skip around to parts relevant for you.

Using Drush

A number of deployment tutorials talk about running deployment steps from the browser. Don’t do this! That isn’t automated. And the environment configuration when using the browser is different from the command line; a common example is the browser hitting an execution timeout or memory limit that the command line would not.

You should be running these steps in a scriptable way using tools like Drush, Drupal Console, Acquia BLT, or your hosting provider’s API if one is available.

You Don’t Have a Build Artifact

Another thing a number of deployment tutorials talk about is running composer install as part of the deployment steps. If you are doing this, you might be setting yourself up for a disaster.

Here’s an example why: remember in 2016 when a huge number of build processes started failing because a library was deleted from the NPM registry? Have you ever had a build fail with a connection timeout when composer was trying to download a patch from drupal.org? You do NOT want your deployment’s success to depend on the stability of the Internet or community libraries!

The wisdom that Michelle Krejci shared with me long ago comes into play: "Production is an artifact of development." (You can watch her talk from DrupalCon 2016). At minimum, you should be running composer install and committing those dependencies to the repository as part of a build process. This creates an "artifact" that you can then deploy.

Personally, I recommend using Acquia BLT, and there is a quick video you can watch to see what the tool does in action. BLT is compatible with Acquia and any hosting provider that uses git for managing code.

Pre-Production Deployments Are Failing When Scrub Runs

Please note the two different database scrub processes that may run on Acquia. They serve as a good reminder that you need to be aware of what processes are running on any hosting provider, so that after you deploy code changes the update process completes before anything else runs.

Acquia Cloud Site Factory (ACSF)

You’ll need to be careful of the 000-acquia_required_scrub.php hook in post-db-copy. You’ll notice that the scrub process rebuilds the router (see line 159) which will likely run before your update process on pre-production deployments.

A suggestion is to implement a post-db-copy hook named with the prefix "000-aaa-" to ensure it runs before the scrub. Keep in mind this hook does not run on production deploys, so you’ll need to implement the update process differently for production deployments.
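As a very rough sketch (the file name, argument handling, and drush invocation below are assumptions you will need to adapt to your own Site Factory setup), such a hook might look like this:

#!/bin/bash
# factory-hooks/post-db-copy/000-aaa-update.sh (hypothetical name)
# The "000-aaa-" prefix sorts this hook ahead of 000-acquia_required_scrub.php,
# so updates run against the copied database before the scrub rebuilds the router.
# Argument order and the drush invocation are illustrative; check the ACSF hook
# samples for the exact signature and how to target the copied site.
site="$1"
env="$2"

drush -l "$site" updb -y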

Acquia Cloud

Be careful of the db-scrub.sh hook in post-db-copy. You may only have this hook if you’re using BLT. But, if you do have it, you’ll notice that it runs a full cache rebuild (see line 161), which of course causes the problems we’ve discussed on pre-production deployments. You can use the same suggestion for addressing the ACSF scrub hook here as well.

Copying Caches From Production

When you are deploying to pre-production, you want to copy all application state, including caches, to the pre-production environment. If you aren’t storing caches in the database and are instead using a storage like memcache, you may not be able to copy caches on deploy or it simply may not be practical.

If you can’t copy the caches, then you have an awkward application state when the update process runs: the caches are representative of the pre-production state before the deployment, which is a complete unknown, but the database is from production. Shouldn’t that cause issues during the update process since caches haven’t been rebuilt yet?

I haven’t heard of any issues proven to be caused by this, but if you suspect this is causing issues for you, there is something you can consider: put important caches in the database. Then, when the database is copied the caches get copied as well. In concept, if you’re using memcache the code would look something like this:

// Bins that are safe to lose on a database copy can stay in memcache.
$settings['cache']['bins']['render'] = 'cache.backend.memcache';
$settings['cache']['bins']['dynamic_page_cache'] = 'cache.backend.memcache';
// Everything else defaults to the database, so it gets copied along with it.
$settings['cache']['default'] = 'cache.backend.database';

The idea would be to put all caches in the database and then choose which ones are OK to leave in memcache or other cache storage. Be warned, I haven’t proven this works in practice!

Improving Speed of Deployments

There are a myriad of reasons why you want your deployment to be as fast as possible; minimizing downtime on production is a key reason. There are a couple patterns I’ve seen used:

Don’t Take a Backup During the Deployment, Instead Trigger it Separately

This might work well if you have a highly-cached site with few or no changes to the application state and can inform anyone with access to the Drupal admin to not be in the system during the deployment window. Be cautious of trying to do this with commerce sites, sites that accept user-generated content, or sites with integrations that result in frequent changes to Drupal’s state.

Don’t Clear All Caches, Instead Selectively Clear What You Need

This is a good fit for large multi-sites, commerce sites, and sites with lots of traffic. To achieve this, you’ll need to be aware of any processes that might be clearing cache. Notably, you’ll want to use the --no-cache-clear option with updb to avoid it running drupal_flush_all_caches().

You should still clear some caches on deploy like entity definitions, but caches like dynamic page cache and HTTP proxy caches (e.g. Varnish) can now be cleared more cautiously.
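A minimal sketch of what this might look like, assuming the bins below are the ones your site actually needs cleared:

# Run updates without the full drupal_flush_all_caches()
drush updb --no-cache-clear -y
# Then clear only targeted bins, e.g. plugin/entity discovery and render caches
drush php:eval "\Drupal::cache('discovery')->deleteAll();"
drush php:eval "\Drupal::cache('render')->invalidateAll();"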

Improving Speed of Restore

Likewise, the speed of restoring your site in the event of an outage or degraded service due to a deployment can be critical to the business. A few ideas I’ve seen include:

Don’t Take a Backup Before Restore

This is a risky path to take. The amount of coordination and effort it takes to plan a release can be significant. You will want all the tools and data you can get to successfully diagnose the cause of a failed deployment so that you can resolve it for the next one. If you don’t take a backup before you restore, you may be missing out on the critical information you need to diagnose the failure. This approach can work, but use it cautiously.

Don’t Restore the Media

This is sometimes low risk. Drupal does a good job of keeping the original files for media entities intact, and you can easily regenerate image styles. However, be cautious with sites that have a large amount of media: you may be putting yourself in a worse spot if you have to regenerate lots of image styles due to a configuration change. Also be aware that if changes were made to media after your restore point was taken, you may end up with data that is inconsistent with the actual files on the filesystem.

Always Restore the Database

Always, always restore the database during a rollback. Skipping this step will cause you more harm than good. The only exception that comes to mind would be if you are rolling back a very minor code or config change that you know is idempotent. A rule to consider is this: you can roll back a deployment whose changeset is equal to or less than the number of years you’ve been managing Drupal deployments. If you’ve been doing it for two years, then you are allowed to evaluate whether to roll back two lines of code change.

How To Identify Restore Point on Rollback

Performing a rollback assumes you have a way to uniquely identify the restore point associated with the last good deployment. You can’t simply restore to the latest restore point because the rollback script might also be creating backups.

If your backup process is capable of associating a custom label with the backups (such as Acquia Site Factory), you could set the label to the tag that was checked out when the backup was taken and whether it was generated as part of a deployment or a rollback. Then, you know if you see a backup labeled "deployment-5.3.8" it could be used as a restore point when deploying 5.3.9. And, if you see a backup labeled "rollback-5.3.9" then you know that it was the backup taken during the rollback process from 5.3.9 to 5.3.8 which can be used for auditing. In the case of multiple backups with the same label, you would take the most recent.

Assuming you don’t have the ability to label your backups you would need to identify something else as a unique identifier and pass that as a parameter to your rollback script. This could be an actual identifier number or a timestamp that includes seconds.
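For illustration, a deployment script might build such a label like this (the backup command itself depends on your hosting provider’s API or CLI, so it is left as a placeholder, and $BITBUCKET_TAG is the same environment variable used in the checks further below):

# Label the restore point with the tag being deployed, e.g. "deployment-5.3.8"
BACKUP_LABEL="deployment-$BITBUCKET_TAG"
# ...create the backup via your provider's API or CLI, passing "$BACKUP_LABEL"...
# A rollback script would use a label like "rollback-$BITBUCKET_TAG" instead.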

Update Your Data, But Only After Config Changes Are Applied

Needing to update data after config changes have been applied is a common use case. For example, maybe you create a new field on a node and you want to move data from the old field to the new one. There is a Drupal ticket to create a hook_post_config_import_NAME which would allow you to achieve this very goal! What can you do in the meantime?

In that ticket, a comment explains that you can use hook_post_update_NAME and modify how you run the update process to only run it after config import. You would change your deployment steps to use the --no-post-updates flag when running updb and then run it an additional time after cim like so:

# Run non-post-update hooks only, deferring hook_post_update_NAME implementations
drush updb --no-post-updates
# Import configuration (retried once to work around missing-dependency cache issues)
drush cim sync -y || drush cim sync -y
drush cim sync -y
# Now run the deferred post-update hooks against the imported configuration
drush updb

You can see that the Massachusetts government website (mass.gov) has included this in their deployment steps. See their Deploy Commands class.

Alternate Workaround: Change Configs in the Update Hook

If you can’t use the hook_post_update_NAME workaround, there is an alternative that is well documented in the notice Support for automatic entity updates has been removed (entup). There are several code examples on how to modify configurations, but they can be pretty cumbersome, even duplicative if the configurations you need are already part of the configuration synchronization process.

There isn’t a very easy way to just import a configuration you need. That’s where the Config Importer and Tools module comes in handy. With this module you can now import configurations in just a couple lines:

$config_importer = \Drupal::service('config_import.importer');
$config_dir = \Drupal\Core\Site\Settings::get('config_sync_directory');
$config_importer->setDirectory($config_dir);
$config_importer->importConfigs(['mymodule.settings']);

The key parts are to set $config_dir to the directory containing the exported configuration files you want to import, and pass the list of configurations you want to import to importConfigs. The project page shows other examples, including how to import configurations from a module directory.

Need to Reindex Search

If your Search API configurations are managed in code, you’ll want to ensure that on deployment your search index is updated. The search index process has potential to take a lot longer than you would tolerate during a deployment window. But, there’s an easy compromise.

Add this line at the end of your deployment script:

drush search-api:reset-tracker

This marks your indexes to be reindexed, but it doesn’t delete the data, so your search should still be functional while cron takes over to do the reindexing. Be mindful of cases where not clearing the index will cause issues (e.g. template files expecting a new field that isn’t available yet in the index).

Ideally, you would be able to write code changes to accommodate stale data in the search index, but sometimes you can’t avoid it. In that case, you might be better served by extending the deployment window to include enough time to completely reindex the search content.

Also, be mindful of how long it takes to reindex your content. If it takes hours or days to reindex, you may want to increase the frequency of that cron task, increase the batch size for each cron run, or just plan for the reindex to be part of your deployment window.

Need to Preserve State in Pre-Production

Imagine you have a module that uses the OAuth protocol. You do not want the production and pre-production environments to use the same refresh tokens because one of the environments will use that token first and the other will receive unauthenticated errors. Some people will simply manually re-authenticate the environment after a deployment to pre-production, but is there an automated way?

Yes! In your pre-production deployment you can add something like the following as the first line:

DRUPAL_REFRESH_TOKEN=$(drush sget mymodule.refresh_token)

Then, after you run config import but before you disable maintenance mode, you can set those variables again:

drush sset "mymodule.refresh_token" "$DRUPAL_REFRESH_TOKEN"

When implementing this idea, you’ll want to expect a few failed deployments until you get the exact syntax right. Capturing output like this using Bash can cause some tricky issues. Also, be cautious of these commands failing. If either command fails, do you really want the deployment to fail?

Probably not, and here’s why. Imagine a scenario where you run a deployment and the deployment fails on the last step. You realize you have a small code change to make, which you do, and deploy again but it fails on the first step. In effect, you can’t deploy new code to pre-production because it can’t get past the first step! So, you may consider rewriting those lines to guarantee a zero exit code even if it fails.
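One hedged way to write those lines so they never fail the deployment, shown as a sketch that keeps the hypothetical mymodule.refresh_token state key from above:

# Capture the token, but never let a failure here halt the deployment
DRUPAL_REFRESH_TOKEN=$(drush sget mymodule.refresh_token 2>/dev/null || true)

# ...run updb, cim, and the rest of the deployment here...

# Only restore the token if we actually captured one
if [ -n "$DRUPAL_REFRESH_TOKEN" ]; then
  drush sset "mymodule.refresh_token" "$DRUPAL_REFRESH_TOKEN" || true
fi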

Pre-Deployment Checks

We won’t cover checks that you might have in your entire release pipeline here. But, there are a couple checks that are useful to do right before you trigger a deployment to production that are worthwhile to consider.

Overridden Configurations

If you’re not using the Config Read-Only module, then there will be that one time that a configuration gets overridden on production and a deployment wipes out that change. The problem of ensuring configurations on production are never overridden can be difficult when governance involves many people with varying needs and opinions about access. Also, you’re bound to run into that rogue module that is storing transient state in config entities instead of using State API or Cache.

An easy solution to keep good hygiene is to add this line as the first step in your deployment script:

drush config-status | grep -q 'No differences'

Now, the deployment will halt at the first line if there are differences. You can then decide whether to preserve those changes by either manually re-applying them after deployment, or exporting them to code and rescheduling a deployment that includes those changes.

Creating Tags in the Right Branch

Have you ever accidentally created a tag from the wrong branch? It can happen to anyone and the impact is significant: you’re deploying the wrong code! There are a lot of ways to prevent that, but one way would be to confirm the tag you’re deploying is on the intended branch programmatically like so:

git branch master --contains "tags/$BITBUCKET_TAG" | grep -q master

In this example, "BITBUCKET_TAG" would be an environment variable with the name of the tag you’re deploying. Doing this check as part of your deployment may not be possible or it may be too late in the deployment pipeline. Typically, this check would go in your build step that gets triggered on tag creation.

Tag Doesn’t Exist

If you have a process that builds your deploy artifact and then triggers an automated deployment, you may want to make sure the deployment doesn’t run unless the expected tag is available to your production environment. For example, it would really stink to trigger the deployment, have the deployment process put the site into maintenance mode and take backups, and only then be notified that the deployment failed because the tag doesn’t exist. Let’s avoid that.

Writing the check will be dependent on your hosting environment. If you’re using Acquia Cloud, there is an endpoint at "/applications/{applicationUuid}/code" that returns a list of references. If you’re doing a regular git checkout, you can use rev-parse like so:

git rev-parse "$BITBUCKET_TAG" > /dev/null 2>&1

Again, we assume "BITBUCKET_TAG" is a bash variable defined in the script’s runtime. This check is best to include as part of your deployment script.

Someone Committed a Git Submodule

I have to throw this one in here in case anyone is not able to exclude composer dependency directories (like vendor, docroot/modules/contrib, docroot/themes/contrib, etc.) from the repository you’re working out of for development. Have you or someone on your team cloned a module or repository instead of using composer and then committed that module? In effect, it creates a git submodule which can cause surprising and hard to diagnose issues.

Well, luckily there’s an easy check you can use:

git diff --quiet

This line runs a git diff to check whether there are any changes; if there are, it returns a non-zero exit code. If the commits you’re pulling (or cloning) include a submodule, the diff will show that as a change, which then causes the build or deployment to halt (assuming you aren’t running "git submodule update" beforehand).

This check is best to put in as early in your deployment pipeline as you can so that it is caught early. If you must include it as part of a deployment, you can put it immediately after the step that deploys code to the environment.

Review the Deployment Changes

Apply Linus’ Law wherever you can, which boils down to this: the more people who look at code, the more obvious issues become. It’s perfectly reasonable, and encouraged, to review your entire release before you trigger a deployment. If you are the one responsible when a deployment fails, you will want a survey of everything that has changed so that if and when there’s an issue, you already have a few ideas on where to look.
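A quick, hedged example of surveying a release, reusing the example tag numbers from earlier; substitute the tags you are actually deploying between:

# Summarize commits and changed files between the last deployed tag and the new one
git log --oneline 5.3.8..5.3.9
git diff --stat 5.3.8 5.3.9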

A Word of Caution

If you are responsible for the success of your deployment and rollback process, you have a difficult balance to keep. If you continually make changes to your deployment and rollback process, you risk introducing failures. Even a single deployment or rollback failure can shatter the trust and confidence in the technology and the people supporting it. But, if you never make changes and have no plan for handling changes to the deployment and rollback process, you’re planning to fail.

Test your deployment and rollback process with every change. This is a perfect use case for having a pre-production environment where you can use the exact same deployment and rollback scripts. Depending on the constraints of your hosting provider, you could consider other deployment models like blue-green and rolling deployments. Regardless of deployment model, ensure you are getting early feedback on any failures after a deployment so that you can fail-forward with a quick change to fix it or rollback.

Feb 16 2024
Feb 16

You might know how to deploy Drupal, but do you know how to automate Drupal deployments? Even more, do you know how to automate a rollback of a Drupal deployment? In this article, we’ll cover both topics using scripts I’ve used that have proven to work well for most use cases.

Why Should You Automate, Anyway?

I encourage you to take some time to read through Code Climate’s blog post, “State of DevOps 2019 Tells Engineering Organizations to ‘Excel or Die.'” The article discusses the DORA report and arrives at an interesting statement:

“Transitioning to some form of Continuous Delivery practices shouldn’t be a question of if or even when. Rather, it should be a question of where to start.”

Automating your deployment and rollback steps is a great place to start. Doing so can bring stability and consistency to your release management process, and it’s a critical step on the way to Continuous Delivery.

Goals and Assumptions

Now, before we dig in I have a few items to cover.

First, I should outline some assumptions:

You’re using Drupal 8 or higher.

You are using Drush 9 or higher to export and import configuration, deploying important changes to your website’s configuration entities. You modify the configurations in a lower environment, such as your local development environment, and export those changes to code that gets pushed to the canonical environment, where you import the changes.

You’re deploying a build artifact. In other words, you have a build step in your delivery pipeline that creates the changeset you are deploying.

You have a canonical environment, which means a single source of truth for state that is not deployed with the code repository, such as the database and media. Usually, this is the environment called production.

You have a pre-production environment, which is an environment that closely matches production. The names Staging or Test are also commonly used to refer to this environment.

Second, we should outline some goals for deployment automation:

  1. Minimize the likelihood of bugs or downtime.
  2. A deployment should be marked as failed if a failing step in the script cannot be addressed automatically.

Last, we should outline some goals for rollback automation:

  1. The state of the system at a failed deployment should be auditable at a future time, even after rollback is complete.
  2. The rollback restores to a defined restore point and is optimized for returning the entire system to that state.

Now, let’s look at the scripts. It’s important to keep in mind as you read through these scripts that they are intended to achieve the goals outlined above. There are cases where these scripts are unfit, so let’s consider these scripts a starting point from which you can make adjustments for those special cases.

Deploy to Production

Let’s first look at the deployment script for production and then we’ll discuss it line-by-line. As you read the script, keep in mind that if any step fails, the deployment process should halt unless otherwise specified. Here is the script:

drush sset system.maintenance_mode TRUE
# Create a restore point by taking backups of anything that is not in the code repository: database, media, cache
# Checkout the code you are deploying
drush updb
drush cim sync -y || drush cim sync -y
drush cim sync -y
drush sset system.maintenance_mode FALSE
drush cr

Line 1: We enable maintenance mode to ensure our restore point is consistent. If our restore point is not consistent we may run into bugs relating to mismatched state, e.g. an entity reference that doesn’t exist.

Line 2: Creating a restore point is highly coupled with your hosting provider. This step is expensive in terms of time, but it is critical to a successful rollback. At minimum, you’ll need to include the database. You may also consider including media (public and private files), any additional application state like caches, or your search index.

Line 3: Checking out the code you are deploying is also highly coupled with your hosting provider. You will want to make sure you are checking out a deployment artifact that has a unique pointer (e.g. a commit hash or tag) in your Version Control System (VCS). This will be useful during rollback unless you can do a complete restore of the filesystem containing the code (e.g. if using containers).

Line 4: Running Drupal’s update process should be the very first thing to run. Do not clear any caches before running updates! This may seem counter-intuitive, but it is requisite to certain types of updates like converting field definitions, see a commerce example here. Remember, the point of update hooks is to make changes to application state or configurations like container services, entity types, field definitions, block plugins, etc. If the application doesn’t have information about the previous state of things, it will try to load the new field definition from code which may not match the database schema (or a myriad of other issues) and throw an error.

If you observe that the update process is failing, double-check whether a full cache rebuild is being run by an update hook or whether a hook is trying to read and use things like field definitions. See this commit on Acquia Lightning for some commentary on what can happen in practice.

Directly related, take a look at Issue #3100553 on Drupal Commerce which explains the challenges that some updates might run into. Thank you Matt Glaman and Centarro Support for helping clarify this issue!

One last point here: did you notice we aren’t using drush entup? That’s because it is deprecated in 8.7.0, and for good reason. As discussed above, it is the responsibility of update hooks to ensure the proper state and configuration changes are applied. You can read more about this on Issue #2976035.

Line 5: Immediately after running the update process, you’ll want to import your configuration changes. You’ll notice the line has two pipe characters. There is an issue with the config import process where, if a module is installed and configuration entities are being created for that module, it may fail to create the config entities because of missing dependencies. This is a cache issue, and we don’t want to fail the entire deployment if that is why it is failing. To achieve this, we run config import again if it fails the first time — that is what the double pipe does. See the bash reference for how the double pipe works.

Line 6: Assuming the first config import completes successfully, we need to run the config import a second time. The first config import may include changes to the behavior of config import, such as config ignore or config split. We must run config import a second time to ensure those changes take effect.

Line 7: At this point, we are done with the critical parts of deployment. Now, we disable maintenance mode to allow regular use of the site again.

Line 8: Last, we clear caches to remove anything cached while maintenance mode was enabled. For example, if page cache is enabled, it may have cached the maintenance page, which we don’t want since the site is no longer under maintenance.

That’s it! We now have a universal starting point for a deployment script, with the caveat that lines 2 and 3 will vary per hosting provider and may not be a single line.

Deployment to Pre-Production

When we deploy to a lower environment like a pre-production environment (sometimes called staging environment), we need to adjust our deployment script slightly. Same as the production deployment script, a failure at any step should halt the deployment unless otherwise specified.
Here is the script:

drush sset system.maintenance_mode TRUE
# Create a restore point by taking backups of anything that is not in the code repository: database, media, cache
# Copy application state from the production environment to the target environment
# Checkout the code you are deploying
drush updb
drush cim sync -y || drush cim sync -y
drush cim sync -y
drush sset system.maintenance_mode FALSE
drush cr

You can see that the only difference is we have a new line added at Line 3. Recall that our canonical environment, production, has application state that is not tracked in code. Therefore, to ensure we are performing a deployment to pre-production in the exact same way we would deploy to production, we must copy all application state to the pre-production environment. At minimum, this should include the database. We need the database for the obvious reasons that it includes content and unmanaged configurations not tracked in code. Also having media (public and private files), caches, and the search index can allow us to rehearse the exact way these changes will deploy to production.

Deployment Rollback on Production

The most challenging part of rolling back a deployment is perfectly matching that point in time prior to the rollback. Since our deployment step captures all application state we have that point to roll back to! Here is what a scripted rollback looks like, and again if any step fails we halt the rollback unless specified otherwise:

drush sset system.maintenance_mode TRUE
# Create backups of anything not in the code repository for auditing later: database, media, cache (we’ll assume logs are sent to an aggregator)
# Checkout the code you are deploying (i.e. the code from the last successful deploy)
# Restore anything that is not in the code repository from the restore point: database, media, cache
drush sset system.maintenance_mode FALSE
drush cr

Line 1: Put the site into maintenance mode so that we can take backups for auditing later. If this step fails, don’t fail the rollback as it may be symptomatic of the very reason you are doing a rollback.

Line 2: Run the backups to capture all the application state for future auditing. By having the complete application state, we can potentially recreate the issue that required performing a rollback.

Line 3: Check out the code from the last successful deployment. If your backup process includes the code and filesystem together, then perhaps you can group this in with Line 4.

Line 4: Restore all application state. At minimum, this is the database. But, you may desire to include media (public and private files), caches, and the search index.

Line 5: Disable the maintenance mode to allow regular use of the site again.

Line 6: Same as a deployment, we clear caches to ensure everything is clear from the time period during which the maintenance mode was enabled.

Deployment Rollback on Pre-Production

You’ll want to be able to test your rollback process, and being able to do that on pre-production has many benefits. The steps here look exactly the same as production. Since our pre-production deployment process includes a step to back up all application state, we have a restore point we can use to do a rollback.

A Closing Thought

We mentioned these scripts should serve well as a starting point from which you can make adjustments according to the goals and needs of your project team. Every software project and every client has unique needs and constraints that challenge what we know as a “good” deployment or rollback process. As you go through the process of implementing automated deployment and rollback scripts, reflect on whether the process and tools being put into place are making reasonable use of our universally precious resource: time.

Feb 16 2024
Feb 16

Recognized Based on Product Expertise and Growth

CHICAGO – MARCH 4, 2020 – Bounteous today announced it has been selected as Acquia Partner of the Year for the Americas for 2019, given for its outstanding performance over the course of the past year. Bounteous is being honored as a top-performing partner for helping to power more effective digital experiences across the entire customer journey with Acquia.

Bounteous worked closely with Acquia throughout the year to surface opportunities for growth — from product advancements and Early Adopter Programs to joint prospecting, marketing, and deal partnering. Bounteous also excelled at creating joint offerings, such as AI-Driven Personalization, that pave the way in providing best-of-class digital experience solutions for clients.

Bounteous is an Acquia Gold Level Partner and maintains the most accreditations of any organization outside of Acquia with 13 Acquia Grand Master certifications and 84 Drupal developer certifications. As exemplified in award-winning client programs, Bounteous truly knows how to help their clients realize value through the Acquia Open Digital Experience Platform.

“We’ve thoroughly enjoyed partnering with Acquia this past year to provide unmatched capabilities to our clients,” said Seth Dobbs, CTO of Bounteous. “Being named as Acquia’s Partner of the Year is an incredible honor and helps solidify our standing as a top-tier partner in the industry. It’s been a very exciting time of growth for both of our companies, and we couldn’t ask to work with a more talented group of colleagues.”

Acquia recognized 15 partners across five global regions based on their overall revenue performance, overall growth with Acquia, and number of new customers secured last year.

“Bounteous is to be commended. 2019 was an incredible year for Acquia and our partners, with demand for our world-class digital experience solutions driving significant growth,” said Joe Wykes, SVP, global channels and sales, at Acquia. “2020 promises to be another amazing year, and together we’ll help our customers set the bar for delivering impactful customer experiences across channels.”

Bounteous has paved the way in the industry as the first Acquia Lift Solutions Partner and continues to power more effective digital experiences across the entire customer journey. As part of Acquia’s Partner Advisory Board, Bounteous invests heavily in beta testing to help advance products with Acquia and ensure their teams are at the forefront of industry trends and solution offerings. Most recently, Bounteous has been supporting the Acquia team developing playbooks and demos for Lift, Cohesion, and AgilOne.

Leaders in digital experience delivery, Acquia partners support the world’s leading brands in facilitating amazing customer experiences. A full list of Acquia Partner Award winners can be seen here.

About Bounteous

Founded in 2003 in Chicago, Bounteous creates big-picture digital solutions that help leading companies deliver transformational digital brand experiences. Our expertise includes Strategy, Experience Design, Technology, Analytics and Insight, and Marketing. At Bounteous, we form problem-solving partnerships with our clients to envision, design, and build digital futures. For more information, please visit www.bounteous.com.

For the most up-to-date news, follow Bounteous on Twitter, LinkedIn, Facebook, and Instagram.

About Acquia

Acquia is the open digital experience company. We provide the world’s most ambitious brands with technology that allows them to embrace innovation and create customer moments that matter. At Acquia, we believe in the power of community - giving our customers the freedom to build tomorrow on their terms. To learn more, visit acquia.com.

Feb 16 2024
Feb 16

Takeaway: Governments should adopt policies to govern web content since various kinds of web content are essential for community engagement. We need to guarantee accurate information reaches constituents.

Drupal provides versatile content oversight tools, including permission settings and review processes. With Provus®, organizations can develop a governance model to meet their requirements.

Here we explore Drupal’s capabilities that empower governments to govern site content, like flexible role management and built-in workflows. We also touch on how Promet extends Drupal functionalities relevant to content approvals and oversight through solutions like Provus®.

With Drupal, you can create and manage a modern, engaging municipal website.

Key aspects of a content governance framework

Content governance is an organization's management of digital content assets and documents. It includes creating, reviewing, approving, distributing, storing, securing, and managing the content lifecycle. Effective governance requires a clear strategy, policies, procedures, and tools.

Clearly defined roles and responsibilities for managing website content

A content governance policy designates clear-cut roles and responsibilities for governing website content. What roles do your content managers, editors, subject matter experts, and leadership stakeholders play in ensuring your content guidelines and standards are upheld?

For example:

  • The content manager acts as the central oversight hub. They are accountable for establishing policies, enabling CMS tools and automation, delivering training programs, aligning content standards, and reviewing published content.
  • The editors work with subject matter experts within city departments. They develop the departmental website content. They also verify accuracy and solicit ongoing guidance from their department.
  • The subject matter experts impart crucial knowledge. They provide relevant evaluations and feedback on draft content brought forward by the editors. This bridges department insights with public information.
  • High-level stakeholders, ranging from the Mayor to department heads, shape the overarching website strategy. They appoint capable editors and content managers responsible for managing each department’s web content.

Established review and approval workflows

Rigorous review and approval workflows that require compliance before any content is published live to the website are an important aspect of a content governance framework. Here are some examples:

  • All new or updated content must go through departmental review first via the tiered CMS permissions system. Writers submit to editors, and editors publish after approval.
  • Once the editor publishes approved content, the content manager is notified to give their review. With this layered system combining departmental responsibilities and city oversight, subject expertise and standards alignment can be achieved.
  • Delegating publishing capabilities based on the tiered writer, editor, and administrator roles allows for better content oversight. By withholding universal publishing permissions, government teams can maintain visibility over the information that reaches the public website.

Standards around content freshness and accuracy

Creating clear standards that ensure both content freshness and accuracy through multiple mechanisms ensures the public receives accurate information. This also maintains public trust for being proactive in keeping information fresh and factual. For example:

  • Establishing regular content reviews by both subject matter experts and editors allows them to evaluate ongoing relevance and update details that may have changed over time.
  • Setting notifications for content expiration dates provides proactive reliability. These notifications force re-evaluation to decide if time-sensitive data needs updates instead of assessing old information after the fact. This prevents outdated or incorrect content from lingering on the website.

Central oversight and support resources

Having a position that holds broad responsibilities spanning from policy setting to team training and standards alignment shows a commitment to centralizing governance oversight and resources. For example:

  • To ensure content excellence, it's best to have one dedicated content manager with defined duties acting as a consolidated hub.
  • This dedicated content manager has access to automation, notifications, and permissions management capabilities built into the existing CMS. This access facilitates smooth governance execution.
  • Proper governance requires investing in stewardship. The content manager has expansive duties, accessibility scope, and access to robust resources and support. This signifies the government’s commitment to human and technological investment.

Leadership stakeholder commitment

Adding high-level leadership stakeholders to the content governance policy ensures active governance participation. It also ensures ultimate accountability, public trust, and constituent engagement. Modern cities need executives to prioritize digital governance. For example:

  • Rather than relegating website oversight solely to writers, leadership is involved in upholding content standards and in selecting the qualified editors and managers themselves.
  • This approach holds department heads accountable for performance and monitors how well they translate strategic goals to the public.
  • The Mayor holds overall accountability for both website direction and actual published content. This highest-level ownership sets the tone across the organization that content excellence maintains an integral city priority.

How Drupal supports content governance

Drupal empowers content governance in numerous ways:

  • Flexible role management
  • Built-in workflows
  • Search capabilities
  • Calendar integration
  • Alert integration
  • Social media integration
  • Reduced IT dependence

Flexible role management

Drupal’s versatile role management allows you to configure access levels. It offers graduated permissions for authors, editors, and administrators, so we can dial in precise oversight scopes that fit organizational structures.

Here’s a sample blueprint we’ve worked on with a client:

Content roles and permissions
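As a rough illustration (the role names are made up, and the permission strings assume the core Basic page content type plus the default Editorial workflow), tiered roles like these can be created with Drush:

drush role:create content_author "Content Author"
drush role:perm:add content_author "create page content,edit own page content,use editorial transition create_new_draft"

drush role:create content_editor "Content Editor"
drush role:perm:add content_editor "edit any page content,use editorial transition publish"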

Built-in workflows

Additionally, Workbench modules enable routing content through multi-stage reviews, approvals, and revisions before public release. We construct governance pipelines aligned to internal protocols.

Search capabilities

The built-in Apache Solr search functionality simplifies content discovery through custom indexes, without external tools. Staff and stakeholders can pinpoint information on-site through user-friendly interfaces.

You can check Apache Solr capabilities in our case study with the South Florida Water Management District.

Calendar integration

Modules like the Fullcalendar View also readily support governance aims like events publishing and community participation. Embedding a calendar boosts involvement: visitors can check upcoming civic events while administrators maintain oversight.

Alert integration

Alert integration represents another useful capability for broadcasting important public updates. Websites like Orange County have customized alerts reaching select or all departments as needed.

Orange County alert headers

Social media integration

Some officials prefer having their social media feed show up on the website. Drupal can make that happen. Social media integration supports broader community engagement.

Reduced IT dependence

After the initial web development and design, Drupal's governance functionality reduces dependence on IT. It allows writers and editors across departments to self-manage changes following configured protocols. This empowers them to provide timely updates.

How Provus® streamlines content governance in Drupal

Provus® works within Drupal to simplify constructing content governance models. It accelerates building oversight workflows, permission roles, and review processes. It also speeds up other tasks by packaging common combinations into reusable use cases.

A content manager could, for example, apply a ready-made localized governance framework with corresponding content stages, map user roles to those steps, and automate routing through review, rather than manually rebuilding those capabilities.

Additionally, Provus® centralizes settings for brand governance requirements, accessibility standards, and other specs. It propagates these settings across components. Here at Promet Source, we test all themes we build for accessibility. This maintains consistency of branding and accessibility at scale.

The capabilities of Provus® allow us to quickly build governance foundations on Drupal. This streamlines creation, editing, approval, and oversight at scale.

BOOK A FREE PROVUS® DEMO

Content governance best practices

Andy Kucharski, our CEO, has great suggestions on the best practices for content governance in Drupal:

  • Develop a content governance strategy
  • Create a content style guide and template
  • Collaborate with other users for content reviews
  • Conduct accessibility and compliance reviews
  • Take advantage of the audit trails
  • Schedule publishing and archiving

Develop a content governance strategy

Andy notes that effective content governance starts with strategy: identifying and creating the teams or roles within the organization.

He elaborates that different governance models allocate roles like content creators, reviewers, and publishers/editors in various configurations from decentralized to centralized.

You may have essentially a top-down approach where you have editors who only have publishing rights and then content creators who push those for review and don't have the ability to publish. Then there are team approaches for an organization that is maybe heavily regulated or an organization that requires a number of steps before content is published.

So a key initial best practice per Andy is strategizing around the teams and roles needed to match the organization's governance requirements. This shapes the subsequent Drupal workflow aligned to that content strategy.

Create a content style guide and template

A content style guide is a document that outlines the writing style, tone, and voice to use when creating content. It should also include guidelines for formatting, punctuation, and grammar. This ensures that all content on your website is consistent and on-brand.

We also suggest creating content templates, made easier by Provus®. Content templates are pre-designed layouts that ensure all content on your website follows a consistent format and includes all necessary elements.

Collaborate with other users for content reviews

Andy emphasizes leveraging Drupal's built-in tools to facilitate content review and collaboration between identified roles. Specifically, he called out the Workbench module as a key means to administer review workflows:

The Workbench module came into existence out of the need for the community and they've been widely adopted, even in Drupal 7, because of the external need.

He recommends constructs like the Workbench module to assign content to users based on their role. The constructs also maintain audit records through each step of assessment or modification and ensure final approval before publication.

Conduct accessibility and compliance reviews

As strong advocates for accessibility, we recommend conducting accessibility checks on content. Following accessibility guidelines guarantees that your constituents can access the information and services they need with as few barriers as possible.

UNSURE HOW TO CONDUCT AN ACCESSIBILITY CHECK? CONSULT WITH OUR EXPERTS!

The great thing about having workflows and the ability to assign roles as well is that compliance reviews can be assigned to the right people. For example, if you are publishing a statement, you can assign the content to your legal team for review.

Take advantage of the audit trails

Andy recommends taking advantage of Drupal's built-in tracking functionality to:

  • Track content status across various stages like draft, under review, approved, published, etc.
  • Maintain a revision history of all changes applied to content over time.
  • Log all approvals/reviews along with the user, timing, and approval outcome at each step.

Drupal is very good because it provides functionality for audit trails, provides for managing that workflow, and those different roles have the ability to see the content that's assigned to them and what the tasks for that specific piece of content are.

Robust audit trails give organizations visibility into the lifecycle of content before public publishing. Andy advises configuring workflows in Drupal to capitalize on recording this key data to support content governance aims.
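
As a rough sketch of what that tracking data looks like under the hood, every node keeps a revision history that can be inspected programmatically. The node ID below is hypothetical, and the moderation_state field is only present when a moderation workflow is enabled:

use Drupal\node\Entity\Node;

$node = Node::load(42);                    // hypothetical node ID
$storage = \Drupal::entityTypeManager()->getStorage('node');

// Walk the node's revision history: what workflow state each revision
// was in, who made the change, when, and the revision log message.
foreach ($storage->revisionIds($node) as $vid) {
  $revision = $storage->loadRevision($vid);
  printf(
    "%s | user %d | %s | %s\n",
    $revision->hasField('moderation_state') ? $revision->get('moderation_state')->value : 'n/a',
    $revision->getRevisionUser() ? $revision->getRevisionUser()->id() : 0,
    date('c', (int) $revision->getRevisionCreationTime()),
    $revision->getRevisionLogMessage() ?: '(no log message)'
  );
}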

Schedule publishing and archiving

As mentioned earlier, Drupal allows for publication scheduling which is necessary to keep the site updated and the information secure.

Sometimes there's also cases where we need to keep content private before certain dates because the content has big implications. For example, when you have a piece of content you’ve been working on for months that has a major impact on the economy. Well, that number could be within expectations of the markets or not, and that swings the markets. And if somebody had that information prior to its release, they stand to gain millions if not billions of dollars.

Reviewing and archiving content should also be part of the protocol. Scheduling allows administrators to set specific future dates and times for content changes and updates to go live without burdening the team.
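
A minimal sketch of how this looks with the contributed Scheduler module, which adds publish_on and unpublish_on fields to content; the node ID and dates are illustrative:

use Drupal\node\Entity\Node;

// Assumes the Scheduler module is installed and enabled for this
// content type; its cron run performs the actual state changes.
$node = Node::load(42);                                    // hypothetical ID
$node->set('publish_on', strtotime('2024-03-01 09:00'));   // go-live date
$node->set('unpublish_on', strtotime('2024-06-01 17:00')); // sunset date
$node->save();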

You can also sunset the content so it's no longer there. Of course, once you release it to the internet, it's never really gone.

While indicating content truly never disappears from the internet, Andy recommends always having a sunset or archive plan to intentionally unpublish/delete outdated content.

These capabilities enable governance over the entire content lifecycle, not just creation and updates.

Transform your municipal website into a hub of local information and engagement

Implementing a content governance policy transforms city and county websites into dynamic hubs. It keeps communities informed and engaged.

With Drupal, you can manage large municipal platforms. It has features like permission controls, built-in workflows, and role-based access. Combining these features with solutions like Provus® helps speed up configuration, content creation, and management.

Don't settle for outdated content management systems. Connect with our team today to create a dynamic and engaging website that connects with your audience.

Feb 16 2024
Feb 16

We embark on a journey, guided by a Tugboat, through the evolving landscape of Drupal development. This episode of the Lullabot Podcast dives deep into the world of Tugboat's seamless integration on Drupal.org. It's a pivotal tool that's redefining the paradigms of building, testing, and deploying Drupal projects.

Our voyage is enriched by the insights of a distinguished Drupal Core Committer, who unveils the myriad development challenges Tugboat adeptly navigates and resolves. Joining the conversation is the Captain of Tugboat himself, offering a rare glimpse into the mechanics behind Tugboat's ability to streamline the development workflow and foster unprecedented levels of collaboration among developers.

And there's more on the horizon—this episode is also the launch pad for our new podcast host!

Feb 15 2024
Feb 15

Drupal added support for IIS in 2010, and we have supported that and WAMP (running Apache and PHP on Windows) since then. Unfortunately, we have never been able to provide automated testing for these environments. And since 2010, the use of Microsoft products for hosting websites has declined. Because of this, the Drupal core committers propose dropping support for Windows on production websites in Drupal 11.

Support for development on Windows will continue. Drupal will continue to accept bug reports for Windows used in development environments.

The following links show the usage statistics used when making this proposal.

Comment period

Community feedback is sought on the proposed process. Please use this issue to add your input. The feedback period will last until Friday March 8 2024.

Feb 15 2024
Feb 15

We need you!

If you've been looking for non-code opportunities to give back to the Drupal community, we have the perfect solution! Volunteer to help out at MidCamp 2024.  

We’re looking for amazing people to help with all kinds of tasks throughout the event including: 

Setup/Teardown

  • For setup, we need help making sure registration is ready to roll, getting hats ready to move, and getting the rooms and walkways prepped for our amazing sessions.

  • For teardown, we need to undo all the setup including packing up all the rooms, the registration desk, cleaning signage, and making it look like we were never there.

Registration and Ticketing

Session Monitors

  • Help us to count heads, introduce the speakers and make sure they have what they need to thrive, and help with the in-room A/V (by calling our Fearless Leader / A/V Genius)

Choose Your Own Adventure

  • We won't turn away any help, so if there's something you'd like to do to help, just let us know!

Every volunteer will receive credit on Drupal.org, so be sure to include your profile name when you sign up to volunteer.

If you’re interested in volunteering or would like to find out more, please reach out to the #volunteers channel on the MidCamp Slack.

There will be a brief, online orientation leading up to the event to go over the volunteer opportunities in more detail. 

Sign up to Volunteer!

Questions?

tweet: @midcamp
email: [email protected]

Do volunteers still need a ticket?

Yes. While we appreciate your help, we still ask volunteers to purchase a ticket. We have flexible ticket options for you to choose from. As a volunteer you'll also get dibs on the MidCamp Archives... our seemingly endless box of vintage swag.

Feb 14 2024
Feb 14

Authored by: Nadiia Nykolaichuk.

Today, you’ll find more and more simple and intuitive interfaces in Drupal for doing tasks that previously required strong technical skills. One of the great examples of that (and also one of the Drupal trends to watch out for in 2024) is a brand-new way of adding modules to your website provided by Project Browser. In this post, we’ll take a closer look at what Project Browser is all about and, of course, see the features of this amazing new tool in action.

The essence of Project Browser

Project Browser provides an interface inside the Drupal admin dashboard where users can browse for the needed modules and install them with one click of a button. There’s no need to leave the Drupal website for module discovery or use command-line tools like Composer for module installation.

Furthermore, Project Browser guides users through the browsing process, making it as comfortable, safe, and efficient as possible, and helping them find what suits their website best. This is thanks to sensible default filters and user-friendly search options, and we’ll show all that in more detail later in this article.

The main goals of the Project Browser Initiative are to make it easy for users to find and install modules, as well as help the community showcase great modules. The key target audience includes:

  • people who are new to Drupal
  • site builders

Thanks to all the hard work of the Project Browser Initiative team, the tool is expected to appear in the Drupal 10.3 core, which means the chances are very high that we will see it in the year 2024. However, all kinds of code and non-code contributions are more than encouraged to help the maintainers make it happen.

A glimpse at traditional ways of finding and installing modules

Most people who have ever tried extending their Drupal website with contributed modules must be familiar with how this process traditionally looks. You start by searching for modules on drupal.org or just use Google search to type what kind of Drupal module you are looking for. Chris Wells (chrisfromredfin), Project Browser co-lead, discussed this in the presentation he gave at DrupalCon Pittsburgh 2023 together with co-lead Leslie Glynn (leslieg).

When searching on drupal.org, users are presented with a bunch of module filters to narrow down the search. The filters are a little overwhelming to people who are new to Drupal and don’t know what “stability” or “security advisory coverage” means, said Chris. There is also some inconsistency in how the module descriptions are laid out because it’s up to the module maintainers to decide what’s important. Some of the module descriptions end up being highly technical and not necessarily written for the target audience, noted Chris Wells.  

The slide showing the traditional way of searching for Drupal modules.

But even when the module is found, it's just the beginning of the real technical challenge, which is its installation. Since Drupal 8, the recommended best practice for installing modules has been to use the command-line Composer tool. Composer is a package manager that finds and installs modules together with any dependencies they might have, such as third-party libraries and other modules. Unfortunately, using the command line is far from comfortable for everyone.

“We’re really looking at getting back to some of the Drupal routes of low-code/no-code, making it easy to expand your platform without having to be super technical, without having to know a ton of Composer incantations.”

- Chris Wells, Project Browser co-lead, 

at DrupalCon Pittsburgh 2023

Indeed, Composer commands (for example “composer require drupal/modulename” for installing modules) might sound a little like incantations. But all the magic instantly vanishes when command-line errors start to overwhelm a not-too-technical user (and sometimes even the most technical one).

As an alternative, many non-tech users traditionally rely on the “Add new module” button on the “Extend” tab of the Drupal admin dashboard. However, when using it, you first have to download the module file archive from drupal.org. Furthermore, it won’t install any dependencies the module might have, or provide consistency in versions like Composer does.

Luckily, Project Browser is here to successfully resolve all of the above-mentioned issues and more. To see how exactly it can do it, let’s go on a little test drive of the tool below.  

A way to test Project Browser

While Project Browser is not yet part of Drupal core, it is being developed as a contributed module. In addition to the possibility of installing the module via Composer, the Project Browser team has created a very convenient way to test it out and report issues if found. This way involves deploying a temporary Drupal website with Project Browser on the Gitpod cloud environment. Here are the steps to do it:

  • Go to the Project Browser module’s page on drupal.org and hit the “Try it now” button.
The “Try it now” button on the Project Browser module’s page.
  • Continue by logging in with your GitHub account.
  • You’ll be offered to create a workspace, and there will be 3 options, so click on the first one (ddev-gitpod-pd) and hit the “Continue” button.
Clicking to create a new workspace on Gitpod to test Project Browser.
  • The installation will start, and you’ll need to wait a couple of minutes for it to complete.
  • When you see a question in the right-bottom corner about whether you want to open a workspace, confirm this.
  • When a fragment of the Drupal admin dashboard appears in the top right corner, click the “Expand” icon to expand it to the entire tab. Alternatively, the Drupal installation can also be reached at the link at the bottom of the console output.
Expanding the new Drupal installation to test Project Browser.
  • Log in to the Drupal installation as admin/admin.
  • Explore Project Browser’s features.

The features of Project Browser in action

  • The new “Browse” tab and the Project Browser UI

The Project Browser journey starts with the new “Browse” tab on the “Extend” page of the Drupal admin dashboard. 

The new “Browse” tab on the “Extend” page of the Drupal admin UI.

By opening this tab, you can see the Project Browser interface. From the start, it shows you suggested modules with logos, brief descriptions, and the number of installs, even if you haven’t yet defined your search criteria. You can choose what number of modules to display per page, use the Previous/Next/Last navigation labels, and configure the UI to see modules as a grid or a list.

Project Browser’s main UI.
  • Default filters and sorts

By default, the modules are sorted by popularity, but you can change the sorting order by using the “Sort by” dropdown. For example, you might choose to sort the results alphabetically.

As for the filters, by default you see only the modules that meet the following criteria:

  • are maintained
  • are covered by security policy
  • are compatible with your website’s core version (this filter is not visible but it is there)

You might also click “Clear filters” and expand the range of modules that will be displayed to you. However, the default filters look sensible indeed, allowing you to only find the modules that are completely ready to go. 

Default filters and sorts in Project Browser.
  • Search by category

In the left part of the Project Browser UI, you can see the module categories as a select list. Using them will narrow the search down to specific categories. They are different from the categories on drupal.org, and reorganizing them in the most user-friendly way is one of the Project Browser team’s priorities.

Category search in Project Browser.
  • Using the search bar

The search bar above the suggested projects enables you to type in either module names or keywords associated with the purpose of the modules you want to find.

Search by module name or keyword in Project Browser.
  • Module descriptions

In addition to a brief summary, you can see a fuller module description page by clicking on a specific module. The page tells you the module is compatible with your Drupal installation, shows you the number of websites it is used on, and notifies you it has a stable release covered by Drupal security policy. The main part includes a logo, the maintainers’ names, and a user-friendly description of what the module does. Considering there are tens of thousands of modules available, providing descriptions and logos is an important contribution opportunity. 

A module page in Project Browser.
  • Module installation in one click

Every module has an “Add and install” button next to it.

The “Add and install” button in Project Browser.

You click it and see a series of messages replacing each other about the installation progress. It is a matter of seconds to finally see the “Installed” status next to the module. This looks almost unbelievable, making you want to go and check on the “List” tab whether the new module is truly installed — and it is.

A collage of the module installation progress steps in Project Browser.

This is even more incredible because module installation with a click of a button wasn’t part of the initial MVP (minimum viable product) for Project Browser. The tool was only supposed to provide Composer instructions to install a module chosen by the user.

What’s under the hood of Project Browser

  • Package Manager

Since Project Browser runs Composer behind the scenes, it installs modules together with all the dependencies. Chris Wells explained that the “Add and install” functionality works thanks to Package Manager from Automatic Updates — a kind of submodule that Project Browser relies on. It runs the Composer commands in the background. According to Chris, Package Manager is the “glue” between the Drupal site and Composer, so if you have Package Manager enabled and you meet some of the other requirements that allow it to be enabled, module installation with a click of a button will work for you.

  • Svelte JS framework

The frontend of Project Browser is written in the Svelte JavaScript framework, creating a decoupled setup. Svelte is an open-source framework that enables developers to create fast web applications.

Svelte JS framework.
  • A pluggable “source” system

Also, Chris Wells introduced a pluggable “source” system which allows you to fetch and display the list of modules on your Drupal site in a decoupled way. This means that you actually make requests to your Drupal website from your browser as the frontend, and then it goes and fetches data from somewhere else in order to figure out which data to display. 

The data fetching process is implemented as a plugin, and you can extend Project Browser by writing your own source backend. For example, if you're an educational institution with a specific set of approved modules, you can create a custom source plugin that will make sure only the approved list of modules is shown to your website’s users. There is an example code in Project Browser docs provided for that purpose.

Final thoughts

The sneak peek pics of Project Browser above might make you look forward to its actual release. Indeed, the tool looks impressive and is set to become a true game-changer in how users extend their Drupal websites with new features. Project Browser, just like all other innovations on the way from Drupal 10 to 11, can become part of your website if you stay up-to-date with the new Drupal core releases.

Feb 14 2024
Feb 14

With a career spanning over two decades, Mariano Crivello has established himself as an expert Solutions Architect in the realm of software development. His journey in the software industry began in 1997 in high school, where he built the school's first website. Since then, Mariano has dedicated himself to mastering the intricacies of technology, software, and now artificial intelligence. Since embracing Drupal in 2007, his career has taken him around the world and back, giving him the opportunity to work with brilliant minds on challenging projects along the way.

Beyond his professional pursuits, Mariano is a dedicated family man who balances his tech-driven career with his love for the environment and adventure. As an avid waterman, he travels the world in search of challenging waves to surf, while also committing to beach cleanups and working with various environmental organizations.

Mariano's unique blend of technical prowess, environmental consciousness, and family values makes him not just a skilled Solutions Architect, but also a well-rounded individual dedicated to making a positive impact in both the digital and natural worlds.

Feb 14 2024
Feb 14

We’re back with our monthly overview of the top Drupal blog posts! Here are our favorites from January.

Introducing: the bounty program

First up, we have an announcement from the Drupal Association’s Alex Moreno about the introduction of a credit bounty program intended to maximize the impact of contributions from members of the Drupal community.

He lists 4 critical issues that will be included in the pilot version of the program and which maintainers need the most help with. For these issues, the reward will be 5 times the regular amount.

As Alex states, if the initial phase of the bounty program is impactful enough, the plan is to tweak and expand it going forward; he invites everyone to reach out to him with any questions and/or suggestions.

Read more about the credit bounty program

How to set up a local development environment (LDE) for Drupal

Moving on with our selection for this month, we have a guide showing how to set up a local development environment (LDE) for Drupal from David Rodríguez Vicente on his blog The Russian Lullaby. He first explains what an LDE is and what readers will accomplish by going through his guide, then proceeds with the software requirements.

The first thing to do is to set up a lightweight local environment for PHP, and then a heavyweight local environment based on software containers with DDEV. Finally, you need to set up an IDE for Drupal development, which includes VSCode, XDebug and PHP Codesniffer.

Read more about setting up a LDE for Drupal

Spotlight on Symfony in Drupal

Next up, we have a four-part series of articles about using Symfony in Drupal by Blake Hall of Drupalize.Me. Part one focuses on the HttpKernel Component, a key Symfony component for coordinating the request/response cycle in Drupal. Part two is then dedicated to Symfony’s event dispatcher.

In part three, Blake takes a look at Symfony’s Routing component and how it can be enhanced with Drupal. In the fourth and final post, he focuses on utility components, i.e. other Symfony components which provide useful functionality, such as Console, Yaml, Polyfill, Serializer, etc.

Read part one about Symfony in Drupal

Planning Your Drupal 7 Migration: The to-do list you can't do without

Fourth on this month’s list, this article from Stella Power of Annertech provides 6 key tips to ensure a smooth and successful migration from Drupal 7 to Drupal 10. The first step is conducting a comprehensive content audit, followed by an assessment of your current Drupal 7 website and choosing the migration approach which is the best fit for you.

Step four demands evaluating the suitability of your existing modules and finding alternatives for unsuitable ones. The next crucial step is planning for your data migration. The sixth and final step then involves thorough testing and other post-migration activities such as SEO optimization.

Read more about planning a Drupal 7 migration

Why you need to start upgrading from Drupal 7 now

This next post comes from Oliver Davies who also wrote about upgrading from Drupal 7 to a more modern version of the platform. Even though the Drupal 7 end of life is almost a year away (the final extension lasts until January 2025), Oliver emphasizes the need to start planning the migration now rather than waiting.

More than 330,000 websites are still on Drupal 7, and upgrading requires a lot of time and effort, particularly due to the abundance of custom modules and themes; many of these may have been contributed by community members and are potentially outdated, since the priority has shifted to functionality for newer Drupal versions.

Read more about the need to upgrade from Drupal 7

Symfony Messenger

We continue with another series of articles related to Symfony and Drupal, this time coming from PreviousNext’s Daniel Phin and focusing on Symfony Messenger. Part one covers the features of Symfony Messenger; part two focuses on messages and message handlers, as well as how Messenger compares to Drupal’s @QueueWorker.

Part three of the series covers the Consume command and prioritized messages. Part four then covers automatic message scheduling and how Symfony Scheduler can replace Drupal’s hook_cron. In part five, Daniel shows how to add real-time processing to QueueWorker plugins. 

Part six shows how to integrate Messenger with Symfony Mailer. Part seven focuses on displaying notifications of processed Messages. Finally, part eight takes a look at the future of Symfony Messenger in Drupal.

Read part 1 of Symfony Messenger in Drupal

How Much Does it Cost to Migrate from Drupal 7 to 10

Nearing the end of our selection from January, we have another article about upgrading from Drupal 7, this one written by Promet Source’s Sonal Bendle and focusing specifically on the costs of migrating from Drupal 7 to the current latest version, Drupal 10.

Sonal’s article starts off with a breakdown of what a Drupal migration is and the different elements that it requires. As she points out, the factors concerning design and/or themes are particularly important to keep in mind when planning the migration.

A key section of the article is dedicated to an overview of the essential migration scope, complete with both a pre-migration assessment table as well as a D7 to D10 content migration cost estimation table.

Read more about the costs of upgrading from D7 to D10

A Selfish Exercise in Selfless Commitment: Conversation with Michael Anello

For the final piece on this month’s selection, we have an interview with Michael Anello / ultimike, by Alka Elizabeth of The DropTimes, in which he shares his journey of Drupal contribution and mentoring. As he states in this interview, teaching others has been a bit of a selfish exercise for him, since it made him learn things much more in depth.

Michael also speaks more generally about his introduction to and experience with Drupal, as well as some of the contributions that he is most proud of. He finishes by reflecting on the biggest challenges of the past year and looking ahead to what this year will bring.

Read more about Michael Anello and his Drupal journey

Single bird flying through cloudy sky

That’s it for this month’s selection of articles. Stay tuned for more Drupal-related news and interviews coming soon to our blog!

Feb 13 2024
Feb 13

Symfony Messenger and the SM integration save developers time and improve DX, making the pipeline and workers more flexible.

Real business value is added when messages can be processed in real time, providing potential infrastructure efficiencies by allowing resources to be rescaled from web to CLI. End-user performance is improved by offloading processing out of web threads.

Further integrations provide feedback to end users such as editors, informing them in real-time when relevant tasks have been processed.

Messenger messages are an opportunity to offload expensive business logic, particularly logic that would otherwise live in hooks. Instead of using the batch API, consider creating messages and deferring UI feedback to Toasty rather than blocking a user’s browser.
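
As a minimal sketch of that pattern, expensive work is described by a small message object and executed by a handler in a CLI worker instead of the web thread. The class names are illustrative, and the handler is discovered via Symfony's AsMessageHandler attribute:

use Symfony\Component\Messenger\Attribute\AsMessageHandler;

// A small, serialisable description of the work to do.
final class RegenerateReport {
  public function __construct(public readonly int $nodeId) {}
}

// The handler runs later in a consume worker, not in the web request.
#[AsMessageHandler]
final class RegenerateReportHandler {
  public function __invoke(RegenerateReport $message): void {
    // Expensive business logic that would otherwise sit in a hook,
    // e.g. rebuilding derived data for $message->nodeId.
  }
}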

Given these points, integration with Drupal core could prove quite valuable.

The design of the SM project focuses on core integration from the beginning. The highest priority has been to maintain the original Symfony Messenger services and configuration, while minimising the introduction of new concepts.

In the context of core integration, niceties like SM’s queue interception feature may initially be left on the factory floor, but could prove useful when deprecating @QueueWorker plugins.

Concepts like the consume command may be a tricky requirement for some, but workarounds can always be built, in the same vein as request-termination cron. Such workarounds, however, wouldn’t allow for the main advantage of Messenger: its real-time processing capabilities.

Next steps for the SM project and the Messenger ecosystem include

Read the other posts in this series:

  1. Introducing Symfony Messenger integrations with Drupal 
  2. Symfony Messenger’s messages and message handlers, and comparison with @QueueWorker
  3. Real-time: Symfony Messenger’s Consume command and prioritised messages
  4. Automatic message scheduling and replacing hook_cron
  5. Adding real-time processing to QueueWorker plugins
  6. Handling emails asynchronously: integrating Symfony Mailer and Messenger
  7. Displaying notifications when Symfony Messenger messages are processed
  8. Future of Symfony Messenger in Drupal
Feb 13 2024
Feb 13

Join us THURSDAY, February 15 at 1pm ET / 10am PT, for our regularly scheduled call to chat about all things Drupal and nonprofits. (Convert to your local time zone.)

This month we'll be giving an update on our plans for DrupalCon Portland, including the Nonprofit Summit and the recently announced discount for nonprofit attendees!

And we'll of course also have time to discuss anything else that's on our minds at the intersection of Drupal and nonprofits -- including our plans for NTC next month. Got something specific you want to talk about? Feel free to share ahead of time in our collaborative Google doc: https://nten.org/drupal/notes!

All nonprofit Drupal devs and users, regardless of experience level, are always welcome on this call.

This free call is sponsored by NTEN.org and open to everyone. 

  • Join the call: https://us02web.zoom.us/j/81817469653

    • Meeting ID: 818 1746 9653
      Passcode: 551681

    • One tap mobile:
      +16699006833,,81817469653# US (San Jose)
      +13462487799,,81817469653# US (Houston)

    • Dial by your location:
      +1 669 900 6833 US (San Jose)
      +1 346 248 7799 US (Houston)
      +1 253 215 8782 US (Tacoma)
      +1 929 205 6099 US (New York)
      +1 301 715 8592 US (Washington DC)
      +1 312 626 6799 US (Chicago)

    • Find your local number: https://us02web.zoom.us/u/kpV1o65N

  • Follow along on Google Docs: https://nten.org/drupal/notes

View notes of previous months' calls.

Feb 12 2024
Feb 12

A trio of modules provide a unique experience for sites utilising SM and Symfony Messenger messages: Common Stamps, Pusher Mini, and Toasty. This combination ultimately shows UI toasts in the browser when Symfony messages relevant to the user are processed.

This post is part 7 in a series about Symfony Messenger.

  1. Introducing Symfony Messenger integrations with Drupal
  2. Symfony Messenger’s messages and message handlers, and comparison with @QueueWorker
  3. Real-time: Symfony Messenger’s Consume command and prioritised messages
  4. Automatic message scheduling and replacing hook_cron
  5. Adding real-time processing to QueueWorker plugins
  6. Making Symfony Mailer asynchronous: integration with Symfony Messenger
  7. Displaying notifications when Symfony Messenger messages are processed
  8. Future of Symfony Messenger in Drupal

The full picture

From beginning to end: when a message is dispatched, additional metadata is applied to it. After the message has been successfully processed, it is captured, and its metadata is extracted and transformed. Recipients for the message are determined, usually based on the user who was logged in when the message was dispatched. The processed metadata is dispatched to a Pusher-compatible websocket server, which pushes it to the web browser sessions of the message recipients.

The message

A message is constructed and dispatched as usual, though to utilise the UI, the Description, Action, and Link stamps provided by the Common Stamps module are applied. These stamps provide additional user-friendly context describing what the message is doing, and subsequently display text, links, and buttons once the message has been successfully processed.

use Drupal\stamps\Messenger\Stamp\DescriptionStamp;
use Drupal\stamps\Messenger\Stamp\ActionsStamp;
use Drupal\Core\Link;
use Drupal\Core\Url;
use Symfony\Component\Messenger\Envelope;

// Wrap the message in an envelope carrying the extra stamps.
$envelope = new Envelope($testMessage, [
  DescriptionStamp::create(title: 'A custom message!', description: '...and a description.'),
  ActionsStamp::fromLinks(primary: [
    Link::fromTextAndUrl('Go home', Url::fromRoute('<front>')),
  ]),
]);
$bus->dispatch($envelope);

This code produces toasts like:

Toasty displaying notifications.

For example, a message may be scheduled for a future date (via \Symfony\Component\Messenger\Stamp\DelayStamp) to publish content. Metadata stamps could include information such as the content title, a description like The content was published, and a handy action link to the content itself.
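
A minimal sketch of that scheduling pattern, assuming a hypothetical PublishContentMessage class and a $bus obtained as in the example above; DelayStamp expresses its delay in milliseconds:

use Symfony\Component\Messenger\Envelope;
use Symfony\Component\Messenger\Stamp\DelayStamp;

// Delay processing until a target time; the stamp takes milliseconds
// relative to now.
$delayMs = (strtotime('next monday 09:00') - time()) * 1000;

$envelope = new Envelope(new PublishContentMessage(nodeId: 42), [
  new DelayStamp($delayMs),
]);
$bus->dispatch($envelope);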

Bus, middlewares, and processing

Once a message has been dispatched to the bus, middleware from Common Stamps and Toasty will, respectively, modify the message and send notifications. The current user middleware from Common Stamps will apply a user stamp tracking which user, if any, was authenticated while the message was dispatched.

Later, after the message handler has successfully processed the message, Toasty’s middleware intercepts it. Metadata like description, links, buttons, and recipients is compiled by reading the stamps applied to the message. This metadata is subsequently transmitted to the websocket server.

The websocket server

Toasty includes a Vue app to render the user interface and connects to the websocket server. It establishes a persistent connection and listens for notifications dispatched by Toasty to the websocket server.

The websocket server can be the official Pusher.com service, which provides 200,000 free messages per day. Alternatively, you can choose to self-host using Soketi, an open source app that is compatible with the Pusher API.
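
For a sense of what happens under the hood, here is a rough sketch using the Pusher PHP server library directly; the Pusher Mini and Toasty modules handle this wiring for you, and the credentials, host, channel, and payload below are placeholders:

use Pusher\Pusher;

// Connect to Pusher.com or a self-hosted Soketi server exposing the
// same API. All credentials and the host are placeholders.
$pusher = new Pusher('app-key', 'app-secret', 'app-id', [
  'host' => 'soketi.example.com',
  'port' => 6001,
  'useTLS' => TRUE,
]);

// Push a processed-message notification to the recipient's channel.
$pusher->trigger('private-user-42', 'message-processed', [
  'title' => 'A custom message!',
  'description' => '...and a description.',
]);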

Pusher Mini combines Key, which is used for storing secrets, with the Pusher PHP library to connect to Pusher.com or a Soketi server.

After installation, a key is set up with the app secrets. The Pusher Mini module configuration then sets up routing, non-secret settings, and connection details for both the client-to-websocket-server connection and the application-to-websocket-server connection.

Key config and Pusher Mini config.

Once websockets are configured, messages will be pushed to the browser as soon as the middleware of Messenger and Toasty has processed the message.

Received messages are transformed into small notification toasts for the user to take action on or ignore.


The next and final post in this Symfony Messenger series covers how we could bring the integration components introduced by SM into Drupal core.

Feb 12 2024
Feb 12

Today we are talking about sending email with Drupal, The Easy Email Module, and Drupal Mail Best Practices with guest Wayne Eaker. We’ll also cover Content Access by Path as our module of the week.

For show notes visit:
www.talkingDrupal.com/437

Topics

  • Current state of email
  • What happened to swiftmailer
  • Do you still need the mailsystem module
  • Why Symfony Mailer
  • New dependency in core
  • Difference between Symfony Mailer module and the Symfony Mailer Lite module
  • How does the Easy email module make it easier
  • What are the features of Easy Email
  • Why not use PHP mail
  • JMAP
  • Do you have a roadmap
  • How do we communicate the different module options
  • Are you looking for help

Resources

Guests

Wayne Eaker - drupaltutor.com zengenuity

Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Ivan Stegic - ten7.com ivanstegic

MOTW

Correspondent

Martin Anderson-Clutz - mandclu

  • Brief description:
    • Have you ever wanted to grant users access to edit content based on the path alias of the content? There’s a module for that.
  • Module name/project name:
  • Brief history
    • How old: created in the past month by Mark Conroy of Annertech, who is also a core subsystem maintainer for the Umami profile
    • Versions available: a stable 1.0.0, created in the past week, that works with Drupal 10
  • Maintainership
    • Actively maintained
    • Doesn’t have a user guide yet, but the module’s README does include some FAQs, and the project page includes a link to a YouTube video that demonstrates how to install and use the module
    • Number of open issues: 2, one of which is a bug
  • Usage stats:
  • Module features and usage
    • When installed, the module adds a new taxonomy vocabulary to your site. You can add terms to this vocabulary to define sections by path
    • Users on the site will have a new field, where you can reference one or more of the section terms, granting the user access to edit any content with a path that matches the section
    • The module also includes a submodule called Content Access by Path Admin Content. When installed, users who go to the admin/content listing will only see content listed that they can edit, based on either the sections they’ve been assigned, or their ownership of the content.
    • Granting edit permissions to a “section” of the website is a common ask for site owners, so I’m excited that this module makes it easy to set that up. There are solutions in the contrib ecosystem based on taxonomy for access control, and back in episode #414 we talked about Access Policy as a very flexible way to grant edit permissions, but in my mind those all require more set-up, and may require an extra step during content creation to make sure the right access is available. Content Access by Path, along with something like the near-ubiquitous Pathauto, can make it pretty painless to set up and use section-specific edit permissions
Feb 12 2024
Feb 12
Onwards to Drupal 11 - ways to get involved | Gábor Hojtsy on Drupal
