Sep 12 2023

Sub-theming has been an integral part of Drupal development over the years. It’s a process of inheriting a parent theme’s resources. 

The Starterkit theme for Drupal 10 adds a new method for developers to implement a starting point for a project’s theme. 

Prior to Starterkit, a developer would create a sub-theme either manually or, in some cases, with a drush command provided by a contrib theme to automate the process. Now that Starterkit is in core, theme creation is accessible to a wider audience. 

Starterkit architecture 

A default Starterkit generated theme inherits from a core theme called “starterkit_theme” which in turn uses the core theme called “Stable9” as its base theme. The end result for your Starterkit generated theme is similar markup and CSS previously provided by the Classy base theme. However, instead of inheriting the markup and CSS from a base theme, the Starterkit theme generator copies the defaults into your new theme. 
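Because the defaults are copied rather than inherited, a generated theme's .info.yml declares no base theme. An abbreviated sketch of what the generator produces (exact keys vary by Drupal version, so treat this as illustrative):

```yaml
# my_new_theme.info.yml (abbreviated sketch -- the generator copies
# defaults into the new theme instead of inheriting them).
name: My New Theme
type: theme
core_version_requirement: '>=10'
# No inheritance: markup and CSS were copied in at generation time.
base theme: false
libraries:
  - my_new_theme/base
```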

Why Starterkit

  1. Drupal core’s Classy theme is deprecated in Drupal 10 and is now a contrib theme for existing projects that use it as a dependency. The Classy project page states that for Drupal 10, Starterkit is the way forward if you want all the benefits of what used to be Classy in core.
  2. Starterkit gives a better onboarding experience for new developers by reducing the effort needed to build a project theme starting point.
  3. Maintainership moves from Drupal core to individual theme maintainers, which helps enable faster innovation and more frequent updates.

Prerequisites

    • A Drupal 10 installation
    • PHP 8.1 or later (Starterkit uses a PHP script to generate your new theme)

Theme generation using the command line interface

Run the command below from your project’s root (docroot, web). This will create a new theme with the specified name under “themes/my_new_theme”: 

php core/scripts/drupal generate-theme my_new_theme

You can also provide a specific custom theme path by using the “--path” option in the script command: 

php core/scripts/drupal generate-theme my_new_theme --path themes/custom

The output will look like this: 

(Screenshot: the generated theme’s folder structure)

You can see all available Starterkit configuration options by running the help command below: 

php core/scripts/drupal generate-theme --help

Custom starterkit theme generation

The previous example used the default generator theme, “starterkit_theme”, which is part of Drupal core.

To use a custom or contrib theme as a starterkit instead, that theme must include the line “starterkit: true” in its source_theme_name.info.yml file. 
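For example, a contrib or custom theme opts in by adding that flag to its .info.yml. The surrounding keys here are abbreviated for illustration; only the starterkit line is required for opting in:

```yaml
# source_theme_name.info.yml
name: Source Theme
type: theme
core_version_requirement: '>=10'
# Opt this theme in as a Starterkit source:
starterkit: true
```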

In the example below, we use Olivero as the starting point, though note that there is an open core issue to make Olivero Starterkit-ready. 

(Screenshot: Olivero used as a starterkit)

The command looks like this, using the “--starterkit” option with olivero as the generator theme: 

php core/scripts/drupal generate-theme customstarterkit_theme --path themes/custom --starterkit olivero

And the theme will look like this: 

(Screenshot: the theme generated from Olivero)

Conclusion

The arrival of Starterkit in Drupal 10 core brings welcome improvements to theming by reducing dependencies such as Classy. Starterkit provides a great starting point for themers and shows that Drupal is a continuously evolving CMS on the right path. It is yet another enhancement that makes a Drupal developer’s workflow easier. 

  1. Starterkit theme: https://www.drupal.org/docs/core-modules-and-themes/core-themes/starterkit-theme
  2. Allow starterkit theme generator tool to clone Olivero: https://www.drupal.org/project/drupal/issues/3301173
  3. New starterkit will change how you create themes in Drupal 10: https://www.drupal.org/about/core/blog/new-starterkit-will-change-how-you-create-themes-in-drupal-10
Jul 17 2023

Introduction

If you’ve ever used Drupal’s Migrate API, you know that failure is frequent. What’s more, debugging migrations is notoriously slow, requiring frequent rollbacks, resets, cache clears, and config imports. In this post I will highlight a tool that can help you get around all of these bottlenecks and fail faster: Migrate Sandbox.

Setup

If you want to follow along, you should spin up a new D9 site using a standard install with the following contrib projects included via composer:

  1. migrate_plus
  2. migrate_sandbox
  3. yaml_editor

(Note that as of this writing, yaml_editor does not have a D10-compatible release. Migrate Sandbox works in D10, but it’s not as user-friendly without yaml_editor.)

Enable migrate_example (which is part of migrate_plus), migrate_sandbox, and yaml_editor. This will automatically enable a few other migration modules as well (including migrate and migrate_plus). You should log in as an admin and navigate to the Migrate Sandbox UI (Admin > Configuration > Development > Migrate Sandbox).
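Assuming a Composer-managed site with Drush available, the setup described above looks something like this:

```shell
# Add the three contrib projects to the codebase.
composer require drupal/migrate_plus drupal/migrate_sandbox drupal/yaml_editor

# Enable the example migration plus the sandbox and editor modules;
# dependencies such as migrate and migrate_plus are enabled automatically.
drush en migrate_example migrate_sandbox yaml_editor -y
```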

What Happens in the Sandbox Stays in the Sandbox

Populate the Sandbox

Migrate Sandbox offers a friendly UI where you can debug and prototype migrations. In this post, we will use Migrate Sandbox as a tool to work with the beer_user migration provided by migrate_example. Once in the sandbox, we can populate the source and process pipeline from that migration. We just open the “Populate from a real migration” drawer, enter beer_user, and click “Populate”.

Opening the “Populate from a real migration” drawer allows us to populate the various sections of the Migrate Sandbox UI from any active migration.

Now we see what the first row of data looks like, and we also see that the process pipeline has been populated.

The editable process pipeline after populating Migrate Sandbox from the beer_user migration.

That process pipeline is an editable form. This post focuses on how we can edit that process pipeline directly within the Migrate Sandbox UI in order to save time.

Sandbox Escape Warnings

Now that the sandbox is populated, we can process the row to see the results. But first, if you scroll toward the bottom of the sandbox you’ll note that we have a sandbox escape warning.

The Sandbox Escape Warning should appear near the “Process Row” button in the Migrate Sandbox UI.

One of the goals of Migrate Sandbox is to produce no side-effects outside of the sandbox. If your migration includes a process plugin that is known to potentially cause side-effects, a sandbox escape warning appears. In this case we can simply scroll to the process section within Migrate Sandbox and edit the process pipeline at line 32.

field_migrate_example_favbeers:
  plugin: migration_lookup
  source: beers
  migration: beer_node
  no_stub: true

Now when we process the row by clicking the “Process Row” button near the bottom of the UI, there will be absolutely no effect outside the sandbox. That’s awesome because it means we won’t have to do any rollbacks as we’re playing in the sandbox.

Process the Sandbox Pipeline

After clicking “Process Row” we can view the results near the bottom of Migrate Sandbox, output either as YAML or an array.

The results appear near the bottom of the sandbox.

Where the Sandbox Shines

What About Migrate Devel?

Everything up to this point can be done in the terminal with Drush along with the indispensable Migrate Devel module. Sometimes that’s all you need when debugging a migration, and I use it frequently. But maybe the simple fact that Migrate Sandbox is in the browser rather than the terminal is appealing. Or maybe you, like me, find it easier to trigger Xdebug breakpoints when working in the browser. Regardless, we’re going to see that Migrate Sandbox has some features that set it apart.

Validation of Content Entities

We start to see the power of Migrate Sandbox when we change the destination to migrate into a temporary content entity. In this case we’re migrating into a user.

Choosing to migrate into a content entity requires a bit more configuration (i.e., specifying the entity type and possibly the bundle), but it gives us some extra validation.

This entity won’t be saved; it just exists temporarily for the purposes of validation. If we process the row by clicking “Process Row”, we notice an error message in the Migrate Sandbox UI:

(Migrate Message) [user]: roles.0.target_id=The referenced entity (user_role: 2) does not exist.

It turns out the process pipeline is a little broken! We need to change how roles get set. Let’s edit the process pipeline at line 7 within Migrate Sandbox to use authenticated as the default_value.

roles: 
  plugin: default_value 
  default_value: authenticated

Now when we process the row by clicking “Process Row”, our validation error is gone. Neat!

In-Your-Face Error Messages

Now let’s really start failing. I don’t like how created is being set using the callback process plugin. It seems a little fragile.

created: 
  plugin: callback 
  callable: strtotime 
  source: registered

I want to update that part of the process pipeline to use the core format_date process plugin. (This is one of my favorite process plugins to mess up!) First, we need to know the format the source date is in. The first source row has the value 2010-03-30 10:31:05. That’s not totally conclusive. Let’s scroll up to the “Populate from a real migration” drawer and fetch the next row. Be sure to uncheck the “Update Process Pipeline” box, since we’ve been editing the pipeline within the sandbox.

By using “Fetch next row” or directly specifying a source ID (or IDs), we can gain insight into the particulars of any row of the source.

We see that the second row of data has the time 2010-04-04 10:31:05. Between those two dates we can be fairly confident that the source format is Y-m-d H:i:s. Let’s go for it!

created:
  plugin: date_format
    from_format: Y-m-d H:i:s
    source: registered

We process the row… and I made a booboo.

(Yaml Validation) process: A colon cannot be used in an unquoted mapping value at line 17 (near " source: registered").

Ah, I should not have put that extra indentation on lines 16 and 17. (It felt correct in the moment!) Writing migrations is just about the only time I find myself writing yaml by hand. Migrate Sandbox saves me a lot of time by calling out my invalid yaml. That’s an easy fix.

created:
  plugin: date_format
  from_format: Y-m-d H:i:s
  source: registered

We process the row… another problem.

(Uncaught Throwable) Drupal\Component\Plugin\Exception\PluginNotFoundException: The "date_format" plugin does not exist. Valid plugin IDs for Drupal\migrate\Plugin\MigratePluginManager are: block_plugin_id,…

You better believe I make a lot of typos like this. Typically, we’d have to reset the status of the migration after triggering an exception like this. In the sandbox, however, we can forego that step. We can quickly edit date_format to read format_date within the sandbox pipeline.

created:
  plugin: format_date
  from_format: Y-m-d H:i:s
  source: registered

We process the row… Oops! I made yet another mistake.

(Migrate Message) migrate_sandbox:created:format_date: Format date plugin is missing to_format configuration.

I guess I figured Drupal would handle that by magic. This kind of error would normally be buried in a migrate message table, but Migrate Sandbox shows it to us front-and-center. Most dates in Drupal are in the format of Y-m-d\TH:i:s, so let’s try that.

created:
  plugin: format_date
  from_format: Y-m-d H:i:s
  to_format: Y-m-d\TH:i:s
  source: registered

We process the row… and we’re not quite there.

(Migrate Message) [user]: created.0.value=This value should be of the correct primitive type.

That’s a validation error, which is something Migrate Sandbox exposes to us very clearly. I forgot that created is a timestamp. We can change to_format to U easily enough.

created:
  plugin: format_date
  from_format: Y-m-d H:i:s
  to_format: U
  source: registered

We process the row… and it finally processes! We see in the results that created has the expected value of 1269945065. Success!
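Under the hood, the conversion format_date performs here is ordinary PHP date handling. As a sanity check (assuming UTC here; the real plugin also accepts timezone settings):

```php
<?php
// Parse the source value with from_format, then render it with to_format.
$date = \DateTime::createFromFormat(
  'Y-m-d H:i:s',
  '2010-03-30 10:31:05',
  new \DateTimeZone('UTC')
);
echo $date->format('U'); // 1269945065
```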

Don’t Forget to Save

Be aware that the updates you make within Migrate Sandbox don’t get saved anywhere. At this point, we could copy/paste the modified part of the pipeline from the sandbox into the appropriate yaml file and be on our way.
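For this example, the edited mappings we would copy back into the migration’s yaml are the ones we settled on above:

```yaml
field_migrate_example_favbeers:
  plugin: migration_lookup
  source: beers
  migration: beer_node
  no_stub: true
roles:
  plugin: default_value
  default_value: authenticated
created:
  plugin: format_date
  from_format: 'Y-m-d H:i:s'
  to_format: U
  source: registered
```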

Recap

Let’s recap how Migrate Sandbox helped us fail fast:

  1. We saw all error messages directly on the page instead of having to search through migrate_message tables or db logs.
  2. We never had to reset the status of a migration before we could run it again.
  3. We never had to sync configuration or clear cache.
  4. We never had to roll back a migration.

And if you think this example was contrived and that nobody really makes this many errors in a migration, then you’ve never done a migration! You’re going to fail, so you might as well fail fast.

Jul 30 2019

Testing integrations with external systems can sometimes prove tricky. Services like Acquia Lift & Content Hub need to make connections back to your server in order to pull content. Testing this requires that your environment be publicly accessible, which often precludes testing on your local development environment.

Enter ngrok

As mentioned in Acquia’s documentation, ngrok can be used to facilitate local development with Content Hub. Once you install ngrok on your development environment, you’ll be able to use the ngrok client to connect and create an instant, secure URL to your local development environment that will allow traffic to connect from the public internet. This can be used for integrations such as Content Hub for testing, or even for allowing someone remote to view in-progress work on your local environment from anywhere in the world without the need for a screen share. You can also send this URL to mobile devices like your phone or tablet and test your local development work easily on other devices.
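Starting the client is a single command. The port here is an example and should match whatever your local web server listens on:

```shell
# Tunnel public traffic to a local site listening on port 8080.
ngrok http 8080
```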

After starting the client, you’ll be provided the public URL you can plug into your integration for testing. You’ll also see a console where you can observe incoming connections.


Jun 20 2019

We’ve been starting many of our projects using Acquia’s Lightning distribution. This gives a good, consistent starting point and helps speed development through early adoption of features that are still in the works for Drupal 8. Like other distributions, Lightning bundles Drupal core with a set of contributed modules and pre-defined configuration.

While Lightning is a great base to start from, sometimes you want to deviate from the path it provides. Say for example you want to use a Paragraphs based system for page components, your client has a fairly complex custom publishing workflow, and you also have different constraints for managing roles. Out-of-the-box, Acquia Lightning has a number of features you may find yourself in conflict with. Things like Lightning Layout provide a landing page content type that may not fit the needs for the site. Lightning Roles has a fairly hard-coded set of assumptions for role generation. And while it is a good solution for many sites, Lightning Workflow may not always be the right fit.

You may find yourself tempted to uninstall these modules and delete the configuration they brought to the party, but things are not always that simple. Because of the inter-relationships and dependencies involved, simply uninstalling these modules may not be possible. Usually all looks fine until it comes time for a deployment, when things fall apart quickly.

This is where sub-profiles can save the day. By creating a sub-profile of Acquia Lightning you can tweak Lightning’s out-of-the-box behavior and include or exclude modules to fit your needs. Sub-profiles inherit all of the code and configuration from the base profile they extend. This gives the developer the ability to take an install profile like Acquia Lightning and tweak it to fit her project’s needs. Creating a sub-profile can be as easy as defining it via a *.info.yml file.

In our example above, you may create a sub-profile like this:

name: 'example_profile'
type: profile
description: 'Lightning sub-profile'
core: '8.x'
base profile: lightning
themes:
  - mytheme
  - seven
install:
  - paragraphs
  - lightning_media
  - lightning_media_audio
  - lightning_media_video
exclude:
  - lightning_roles
  - lightning_page
  - lightning_layout
  - lightning_landing_page

This profile includes dependencies we’re going to want, like Paragraphs – and excludes the things we want to manage ourselves. This helps ensure that when it comes time for deployment, you should get what you expect. You can create a sub-profile yourself by adding a directory and info.yml file in the “profiles” directory, or if you have Drupal Console and you’re using Acquia Lightning, you can follow Acquia’s instructions. This Drupal Console command in Lightning will walk you through a wizard to pick and choose modules you’d like to exclude.

Once you’ve created your new sub-profile, you can update your existing site to use it. First, edit your settings.php and update the ‘install_profile’ setting.

$settings['install_profile'] = 'example_profile';

Then, use Drush to make the profile active.

drush cset core.extension module.example_profile 0

Once your profile is active and in-use, you can export your configuration and continue development.

Jun 11 2019

Every once in a while you have those special pages that require a little extra something. Some special functionality, just for that page. It could be custom styling for a marketing landing page, or a third party form integration using JavaScript. Whatever the use case, you need to somehow sustainably manage JavaScript or CSS for those pages.

Our client has some of these special pages. These are pages that live outside of the standard workflow and component library and require their own JS and CSS to pull them together. Content authors want to be able to manage these bits of JavaScript and CSS on a page-by-page basis. Ideally, these pages would go through the standard development and QA workflow before code makes it out to production. Or perhaps you need to work in the opposite direction, giving the content team the ability to create in Production, then capturing those snippets and pulling them back into the development pipeline in future deployments.

This is where Drupal 8’s Configuration Entities become interesting. To tackle this problem, we created a custom config entity to capture these code “snippets”. This entity gives you the ability to enter JavaScript or CSS into a text area or to paste in the URL to an externally hosted resource. It then gives you a few choices on how to handle the resulting Snippet. Is this JavaScript, or CSS? Do you want to scope the JavaScript to the Footer or the Header? Should we wrap the JavaScript in a Drupal Behavior?

Once the developer makes her selections and hits submit, the system looks at the submitted configuration and if it’s not an external resource, it writes a file to the filesystem of the Drupal site.

Now that you’ve created your library of Snippets, you can make use of them on your content. From your Content Type, Paragraph, or other Content Entity, simply create a new reference field. Choose “Other”, then on the next page scroll through the entity type list until you get to the configuration section and select JSnippet. Your content creators will then have access to the Snippets when creating content.

By providing our own custom Field Formatter for Entity Reference fields, we’re then able to alter how that snippet is rendered on the final page. During the rendering process, when the Snippet reference field is rendered, the custom field formatter loads the referenced configuration entity and uses its data and our dynamically generated library info to attach the relevant JavaScript or CSS library to the render array. During final rendering, this will result in the JavaScript or CSS library being added to the page, within its proper scope.

Because these snippets are configuration entities, they can be captured and exported with the site’s configuration. This allows them to be versioned and deployed through your standard deployment process. When the deployed configuration is integrated, the library is built up and any JS or CSS is written to the file system.

Want to try it out? Head on over to Drupal.org and download the JSnippet module. If you have any questions or run into any issues just let us know in the issue queue.

Apr 05 2019

In this project we had built a collection of components using a combination of Paragraphs and referenced block entities. While the system we built was incredibly flexible, there were a number of variations we wanted to be able to apply to each component. We also wanted the system to be easily extensible by the client team going forward. To this end, we came up with a system of configuration entities that would allow us to provide references to classes and thematically name these styles. We built upon this by extending the EntityReferenceSelections plugin, allowing us to customize the list of styles available to a component by defining where those styles could be used.

The use of configuration entities allows the client team to develop and test new style variations in the standard development workflow and deploy them out to forward environments, giving an opportunity to test the new styles in QA prior to deployment to Production.

The Styles configuration entity

This configuration entity is at the heart of the system. It allows the client team to come in through the UI and create the new style. Each style is comprised of one or more classes that will later be applied to the container of the component the style is used on. The Style entity also contains configuration allowing the team to identify where this style can be used. This will be used later in the process to allow the team to limit the list of available styles to just those components that can actually make use of them.

The resulting configuration for the Style entity is then able to be exported to yml, versioned in the project repository and pushed forward through our development pipeline. Here’s an example of a Style entity after export to the configuration sync directory.

uuid: 7d112e4e-0c0f-486e-ae36-b608f55bf4e4
langcode: en
status: true
dependencies: {  }
id: featured_blue
label: 'Featured - Blue'
classes:
  - comp__featured-blue
uses:
  rte: rte
  cta: cta
  rail: '0'
  layout: '0'
  content: '0'
  oneboxlisting: '0'
  twoboxlisting: '0'
  table: '0'

Uses

For “Uses” we went with a simple configuration form. The result of this form is stored in Drupal 8’s key/value store. We can then access that configuration from our Styles entity and from our other plugins in order to retrieve and decode the values. Because the definition of each use was a simple key and label, we didn’t need anything more complex for storage.
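Reading and writing that storage takes only a couple of lines with the key/value factory. The collection name 'styles' and key 'uses' here are illustrative, not necessarily the module’s actual names:

```php
<?php
// Store the submitted uses (key => label pairs) from the config form.
\Drupal::keyValue('styles')->set('uses', $form_state->getValue('uses'));

// Later, from the Styles entity or another plugin, read them back.
$uses = \Drupal::keyValue('styles')->get('uses', []);
```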

Assigning context through a custom Selection Plugin

By extending the core EntityReferenceSelection plugin, we’re able to combine our list of Uses with the uses defined in each Style entity. To add Styles to a component, the developer first adds an entity reference field targeting the Styles config entity to the component in question. In that field’s configuration, we can choose our custom Selection plugin. This exposes our list of defined uses, and we can then select the appropriate uses for this component. The end result is that only the applicable styles will be presented to the content team when they create components of this type.

// The class name and key/value collection name below are illustrative.
class StylesSelection extends DefaultSelection {

  /**
   * {@inheritdoc}
   */
  public function buildConfigurationForm(array $form, FormStateInterface $form_state) {
    $form = parent::buildConfigurationForm($form, $form_state);
    // The defined uses (key => label pairs) saved by the "Uses" config form.
    $options = \Drupal::keyValue('styles')->get('uses', []);
    $uses = $this->getConfiguration()['uses'];

    if ($options) {
      $form['uses'] = [
        '#type' => 'checkboxes',
        '#title' => $this->t('Uses'),
        '#options' => $options,
        '#default_value' => $uses,
      ];
    }
    return $form;
  }

  /**
   * {@inheritdoc}
   */
  public function getReferenceableEntities($match = NULL, $match_operator = 'CONTAINS', $limit = 0) {
    $uses_config = $this->getConfiguration()['uses'];

    $uses = [];
    foreach ($uses_config as $key => $value) {
      if (!empty($value)) {
        $uses[] = $key;
      }
    }

    $styles = \Drupal::entityTypeManager()
      ->getStorage('styles')
      ->loadMultiple();

    $return = [];
    foreach ($styles as $style) {
      foreach ($style->get('uses') as $key => $value) {
        if (!empty($value)) {
          if (in_array($key, $uses)) {
            $return[$style->bundle()][$style->id()] = $style->label();
          }
        }
      }
    }
    return $return;
  }

}

In practice, this selection plugin presents a list of our defined uses in the configuration for the field. The person creating the component can then select the appropriate use definitions, limiting the scope of styles that will be made available to the component.

Components, with style.

The final piece of the puzzle is how we add the selected styles to the components during content creation. Once someone on the content team adds a component to the page and selects a style, we then need to apply the style to the component. This is handled by preprocess functions for each type of component we’re working with. In this case, Paragraphs and Blocks.

In both of the examples below we check to see if the entity being rendered has our ‘field_styles’. If the field exists, we load its values and the default class attributes already applied to the entity. We then iterate over any styles applied to the component and add any classes those styles define to an array. Those classes are merged with the default classes for the paragraph or block entity. This allows the classes defined to be applied to the container for the component without a need for modifying any templates.

/**
 * Implements hook_preprocess_HOOK().
 */
function bcbsmn_styles_preprocess_paragraph(&$variables) {
  /** @var Drupal\paragraphs\Entity\Paragraph $paragraph */
  $paragraph = $variables['paragraph'];
  if ($paragraph->hasField('field_styles')) {
    $styles = $paragraph->get('field_styles')->getValue();
    $classes = isset($variables['attributes']['class']) ? $variables['attributes']['class'] : [];
    foreach ($styles as $value) {
      /** @var \Drupal\bcbsmn_styles\Entity\Styles $style */
      $style = Styles::load($value['target_id']);
      if ($style instanceof Styles) {
        $style_classes = $style->get('classes');
        foreach ($style_classes as $class) {
          $classes[] = $class;
        }
      }
    }
    $variables['attributes']['class'] = $classes;
  }
}

/**
 * Implements hook_preprocess_HOOK().
 */
function bcbsmn_styles_preprocess_block(&$variables) {
  if ($variables['base_plugin_id'] == 'block_content') {
    $block = $variables['content']['#block_content'];
    if ($block->hasField('field_styles')) {
      $styles = $block->get('field_styles')->getValue();
      $classes = isset($variables['attributes']['class']) ? $variables['attributes']['class'] : [];
      foreach ($styles as $value) {
        /** @var \Drupal\bcbsmn_styles\Entity\Styles $style */
        $style = Styles::load($value['target_id']);
        if ($style instanceof Styles) {
          $style_classes = $style->get('classes');
          foreach ($style_classes as $class) {
            $classes[] = $class;
          }
        }
      }
      $variables['attributes']['class'] = $classes;
    }
  }
}

Try it out

We’ve contributed the initial version of this module to Drupal.org as the Style Entity project. We’ll continue to refine this as we use it on future projects and with the input of people like you. Download Style Entity and give it a spin, then let us know what you think in the issue queue.

Mar 11 2019
Photo by Bureau of Reclamation https://www.flickr.com/photos/usbr/12442269434

You’ve decided to use Acquia DAM for managing your digital assets, and now you need to get those assets into Drupal where they can be put to use. Acquia has you covered for most use cases with the Media: Acquia DAM module. This module provides a suite of tools that allow you to browse the DAM for assets and associate them with Media entities. It goes a step further by ensuring that those assets and their metadata stay in sync when updates are made in the DAM.

This handles the key use case of being able to reference assets to an existing entity in Drupal, but what if your digital assets are meant to live stand-alone in the Drupal instance? This was the outlying use case we ran into on a recent project.

The Challenge

The customer site had the requirement of building several filterable views of PDF resources. It didn’t make sense to associate each PDF to a node or other entity, as all of the metadata required to build the experience could be contained within the Media entity itself. The challenge now was to get all of those assets out of the DAM and into media entities on the Drupal site without manually referencing them from some other Drupal entity.

The Solution

By leveraging the API underlying the Media: Acquia DAM module we were able to create our own module to manage mass importing entire folders of assets from Acquia DAM into a specified Media bundle in Drupal. This takes advantage of the same configuration and access credentials used by Media: Acquia DAM and also leverages that module for maintaining updates to metadata for the assets post-import.

The Acquia DAM Asset Importer module allows the site administrator to specify one or more folders from Acquia DAM to import assets from. Once configured, the module runs as a scheduled task through Drupal’s cron. On each cron run, the module will first check to see if there are any remaining import tasks to complete. If not, it will use the Acquia DAM API to retrieve a list of asset IDs for the specified folders. It compares that to the list of already imported assets. If new assets exist in the folders in Acquia DAM, they’re then added to the module’s Queue implementation to be imported in the background.

The QueueWorker implementation that exists as part of the Acquia DAM Asset Importer will then process its queue on subsequent cron runs, generating a new Media entity of the specified bundle, adding the asset_id from Acquia DAM, and executing save() on the entity. At this point the code in Media: Acquia DAM takes over, pulling in metadata about the asset and syncing it and the associated file to Drupal. Once the asset has been imported into Drupal as a Media entity, the Media: Acquia DAM module keeps the metadata for that Media entity in sync with Acquia DAM, using its own QueueWorker and cron implementations to periodically pull data from the DAM and update the Media entity.
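A minimal sketch of such a QueueWorker, assuming the plugin ID, bundle name, and asset-ID field name (all three are illustrative, not the module’s exact code):

```php
<?php

use Drupal\Core\Queue\QueueWorkerBase;
use Drupal\media\Entity\Media;

/**
 * Creates Media entities for queued Acquia DAM asset IDs on cron.
 *
 * @QueueWorker(
 *   id = "acquiadam_asset_import",
 *   title = @Translation("Acquia DAM asset import"),
 *   cron = {"time" = 30}
 * )
 */
class AssetImportWorker extends QueueWorkerBase {

  public function processItem($data) {
    $media = Media::create([
      'bundle' => 'acquia_dam_asset',
      'field_acquiadam_asset_id' => $data['asset_id'],
    ]);
    // Saving hands off to Media: Acquia DAM, which pulls the asset's
    // metadata and file into Drupal.
    $media->save();
  }

}
```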

Try it out

Are you housing assets in Acquia DAM and need to import them into your Drupal site? We’ve contributed the Acquia DAM Asset Importer module on Drupal.org. Download it here and try it out.

Mar 06 2019

Using Paragraphs to define components in Drupal 8 is a common approach to providing a flexible page building experience for your content creators. With the addition of Acquia Lift and Content Hub, you can now not only build intricate pages – you can personalize the content experience for site visitors.

Personalization with Acquia Lift and Content Hub

Acquia Lift is a personalization tool optimized for use with Drupal. The combination of Acquia Lift and Content Hub allows for entities created in Drupal to be published out to Content Hub and be made available through Lift to create a personalized experience for site visitors. In many instances, the personalized content used in Lift is created by adding new Blocks containing the personalized content, but not all Drupal sites utilize Blocks for content creation and page layout.

Personalizing paragraph components

To personalize a Paragraph component on a page, we’ll need to create a new derivative of that component with the personalized content for export to Content Hub. That means creating duplicate content somewhere within the Drupal site. This could be on a different content type specifically meant for personalization.

To make this process easier for our content creators, we developed a different approach. We added an additional Paragraphs reference to the content types we wanted to enable personalization on. This “Personalized Components” field can be used to add derivatives of components for each segment in Acquia Lift. The field is hidden from display on the resulting page, but the personalized Paragraph entities are published to Content Hub and available for use in Lift. This allows the content team to create and edit these derivatives in the same context as the content they’re personalizing. In addition, because Paragraphs do not have a title of their own, we can derive a title for them from a combination of the title of their parent page and the type of component being added. This makes it easy for the personalization team to find the relevant content in Acquia Lift’s Experience Builder.

We also added a “Personalization” tab. If a page has personalized components, this tab appears for the content team, allowing them to review the personalized components for that page.

Keeping the personalized experience in the context of the original page makes it easier for the entire team to build and maintain personalized content.

The technical bits

There were a few hurdles in getting this all working. As mentioned above, Paragraph entities do not have a title property of their own. This means that when their data is exported to Content Hub, they all appear as “Untitled”. Clearly this doesn’t make for a very good user experience. To get around this limitation we leveraged one of the API hooks in the Acquia Content Hub module.

<?php

use Drupal\node\Entity\Node;

/**
 * Implements hook_acquia_contenthub_cdf_from_drupal_alter().
 *
 * Derives a title attribute for Paragraph entities exported to Content Hub.
 */
function mymodule_acquia_contenthub_cdf_from_drupal_alter($cdf) {
  if ($cdf->getType() !== 'paragraph') {
    return;
  }
  /** @var \Drupal\paragraphs\Entity\Paragraph $paragraph */
  $paragraph = \Drupal::service('entity.repository')
    ->loadEntityByUuid($cdf->getType(), $cdf->getUuid());

  /** @var \Drupal\node\Entity\Node $node */
  $node = _get_parent_node($paragraph);
  $node_title = $node->label();

  $paragraph_bundle = $paragraph->bundle();
  $paragraph_id = $paragraph->id();

  $personalization_title = $node_title . ' - ' . $paragraph_bundle . ':' . $paragraph_id;

  if ($cdf->getAttribute('title') == FALSE) {
    $cdf->setAttributeValue('title', $personalization_title, 'en');
  }
}

/**
 * Helper function for components to identify the current node/entity.
 */
function _get_parent_node($entity) {
  // Recursively look for a non-paragraph parent.
  $parent = $entity->getParentEntity();
  if ($parent instanceof Node) {
    return $parent;
  }
  else {
    return _get_parent_node($parent);
  }
}

This allows us to generate a title for use in Content Hub based on the title of the page we’re personalizing the component on and the type of Paragraph being created.
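With hypothetical values plugged in, the derived attribute looks like this (sketched in shell for brevity; the real concatenation happens in the PHP hook):

```shell
# All values are invented examples.
node_title="Our Services"        # label of the parent node
paragraph_bundle="hero_banner"   # Paragraph type (bundle)
paragraph_id=42                  # Paragraph entity ID
echo "${node_title} - ${paragraph_bundle}:${paragraph_id}"
# prints: Our Services - hero_banner:42
```

The parent-page label keeps the entries grouped and recognizable in Experience Builder, while the bundle and ID keep each derivative unique.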

In addition to this, we also added a local task and NodeViewController to allow for viewing the personalized components. The local task is created by adding a mymodule.links.task.yml and mymodule.routing.yml to your custom module.

*.links.task.yml:

personalization.content:
  route_name: personalization.content
  title: 'Personalization'
  base_route: entity.node.canonical
  weight: 100

*.routing.yml:

personalization.content:
  path: '/node/{node}/personalization'
  defaults:
    _controller: '\Drupal\mymodule\Controller\PersonalizationController::view'
    _title: 'Personalized components'
  requirements:
    _custom_access: '\Drupal\mymodule\Controller\PersonalizationController::access'
    node: \d+

The route is attached to our custom NodeViewController. This controller loads the latest revision of the current Node entity for the route and builds rendered output of a view mode which shows any personalized components.

<?php

namespace Drupal\mymodule\Controller;

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Entity\EntityInterface;
use Drupal\Core\Session\AccountInterface;
use Drupal\node\Controller\NodeViewController;

/**
 * Renders and access-checks the personalized components for a node.
 */
class PersonalizationController extends NodeViewController {

  /**
   * {@inheritdoc}
   */
  public function view(EntityInterface $node, $view_mode = 'personalization', $langcode = NULL) {
    $revision_ids = $this->entityManager->getStorage('node')
      ->revisionIds($node);
    $last_revision_id = end($revision_ids);
    if ($node->getLoadedRevisionId() != $last_revision_id) {
      $node = $this->entityManager->getStorage('node')
        ->loadRevision($last_revision_id);
    }
    $build = parent::view($node, $view_mode, $langcode);
    return $build;
  }

  /**
   * Custom access controller for personalized content.
   */
  public function access(AccountInterface $account, EntityInterface $node) {
    /** @var \Drupal\node\Entity\Node $node */
    $personalized = FALSE;
    if ($account->hasPermission('access content overview')) {
      if ($node->hasField('field_personalized_components')) {
        $revision_ids = $this->entityManager->getStorage('node')
          ->revisionIds($node);
        $last_revision_id = end($revision_ids);
        if ($node->getLoadedRevisionId() != $last_revision_id) {
          $node = $this->entityManager->getStorage('node')
            ->loadRevision($last_revision_id);
        }
        if (!empty($node->get('field_personalized_components')->getValue())) {
          $personalized = TRUE;
        }
      }
    }
    return AccessResult::allowedIf($personalized);
  }
}

The controller provides the rendered output of our “Personalization” view mode, and its access check ensures the page actually has personalized components. If no components have been added, the “Personalization” tab is not shown on the page.

Mar 04 2019


Bitbucket Pipelines is a CI/CD service built into Bitbucket that offers an easy solution for building and deploying to Acquia Cloud for projects whose repositories live in Bitbucket and that opt out of Acquia’s own Pipelines service. Configuration of Bitbucket Pipelines begins with creating a bitbucket-pipelines.yml file and adding it to the root of your repository. This configuration file details how Bitbucket Pipelines will construct the CI/CD environment and what tasks it will perform given a state change in your repository.

Let’s walk through an example of this configuration file built for one of our clients.

bitbucket-pipelines.yml

image: geerlingguy/drupal-vm:4.8.1
clone:
  depth: full
pipelines:
  branches:
    develop:
      - step:
         script:
           - scripts/ci/build.sh
           - scripts/ci/test.sh
           - scripts/ci/deploy.sh
         services:
           - docker
           - mysql
         caches:
           - docker
           - node
           - composer
    test/*:
      - step:
         script:
           - scripts/ci/build.sh
           - scripts/ci/test.sh
         services:
           - docker
           - mysql
         caches:
           - docker
           - node
           - composer
  tags:
    release-*:
      - step:
          name: "Release deployment"
          script:
            - scripts/ci/build.sh
            - scripts/ci/test.sh
            - scripts/ci/deploy.sh
          services:
            - docker
            - mysql
          caches:
            - docker
            - node
            - composer
definitions:
  services:
    mysql:
      image: mysql:5.7
      environment:
        MYSQL_DATABASE: 'drupal'
        MYSQL_USER: 'drupal'
        MYSQL_ROOT_PASSWORD: 'root'
        MYSQL_PASSWORD: 'drupal'

The top section of bitbucket-pipelines.yml outlines the basic configuration for the CI/CD environment. Bitbucket Pipelines uses Docker at its foundation, so each pipeline will be built up from a Docker image and then your defined scripts will be executed in order, in that container.

image: geerlingguy/drupal-vm:4.8.1
clone:
  depth: full

This defines the image we’ll use to build the container. Here we’re using the Docker version of Drupal VM. (We use the original Vagrant version of Drupal VM in Acquia BLT for local development.) Setting the clone depth to full ensures we pull the entire history of the repository, which we found necessary during the initial implementation.
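To see the difference the depth setting makes, here’s a small local experiment (requires only git; all paths are temporary and the commits are placeholders):

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/src"
git -C "$tmp/src" -c user.name=ci -c user.email=ci@example.com commit -q --allow-empty -m "first"
git -C "$tmp/src" -c user.name=ci -c user.email=ci@example.com commit -q --allow-empty -m "second"
# A shallow clone only has the most recent commit...
git clone -q --depth 1 "file://$tmp/src" "$tmp/shallow"
# ...while a full clone (what "depth: full" requests) has the whole history.
git clone -q "file://$tmp/src" "$tmp/full"
git -C "$tmp/shallow" rev-list --count HEAD   # 1
git -C "$tmp/full" rev-list --count HEAD      # 2
```

Anything in the build that walks history or inspects earlier commits only works reliably against the full clone.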

The “pipelines” section of the configuration defines all of the pipelines configured to run for your repository. Pipelines can be set to run on updates to branches, tags, or pull requests. For our purposes we’ve created three pipeline definitions.

pipelines:
  branches:
    develop:
      - step:
         script:
           - scripts/ci/build.sh
           - scripts/ci/test.sh
           - scripts/ci/deploy.sh
         services:
           - docker
           - mysql
         caches:
           - docker
           - node
           - composer
    test/*:
      - step:
         script:
           - scripts/ci/build.sh
           - scripts/ci/test.sh
         services:
           - docker
           - mysql
         caches:
           - docker
           - node
           - composer

Under branches we have two pipelines defined. The first, “develop”, defines the pipeline configuration for updates to the develop branch of the repository. This pipeline is executed whenever a pull-request is merged into the develop branch. At the end of execution, the deploy.sh script builds an artifact and deploys that to the Acquia Cloud repository. That artifact is automatically deployed and integrated into the Dev instance on Acquia Cloud.

The second definition, “test/*”, provides a pipeline definition that can be used for testing updates to the repository. This pipeline is run whenever a branch named ‘test/*’ is pushed to the repository. This allows you to create local feature branches prefixed with “test/” and push them forward to verify how they will build in the CI environment. The ‘test/*’ definition will only execute the build.sh and test.sh scripts and will not deploy code to Acquia Cloud. This just gives us a handy way of doing additional testing for larger updates to ensure that they will build cleanly.
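Bitbucket matches branch names against these glob patterns itself; the shell function below (a name of our own invention, for illustration only) shows which branch names would trigger which pipeline:

```shell
# Mirrors the two branch patterns from bitbucket-pipelines.yml above.
pipeline_for_branch() {
  case "$1" in
    develop) echo "build + test + deploy" ;;
    test/*)  echo "build + test" ;;
    *)       echo "no pipeline" ;;
  esac
}
pipeline_for_branch "develop"               # build + test + deploy
pipeline_for_branch "test/new-homepage"     # build + test
pipeline_for_branch "feature/new-homepage"  # no pipeline
```

Note the last case: ordinary feature branches trigger nothing, so routine pushes stay cheap.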

The next section of the pipelines definition is set to execute when commits in the repository are tagged.

tags:
  release-*:
    - step:
        name: "Release deployment"
        script:
          - scripts/ci/build.sh
          - scripts/ci/test.sh
          - scripts/ci/deploy.sh
        services:
          - docker
          - mysql
        caches:
          - docker
          - node
          - composer

This pipeline is configured to be executed whenever a commit is tagged with the name pattern of “release-*”. Tagging a commit for release will run the CI/CD process and push the tag out to the Acquia Cloud repository. That tag can then be selected for deployment to the Stage or Production environments.
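Cutting a release then looks something like this (repository path and version number are invented; pushing the tag to Bitbucket is what actually starts the pipeline):

```shell
set -e
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=ci -c user.email=ci@example.com commit -q --allow-empty -m "ready for release"
# The release-* name pattern is what the tags pipeline keys on.
git -C "$repo" -c user.name=ci -c user.email=ci@example.com tag -a release-1.4.0 -m "Release 1.4.0"
git -C "$repo" tag -l 'release-*'   # release-1.4.0
# In the real project you'd follow with: git push origin release-1.4.0
```

A tag named, say, v1.4.0 would not match and would deploy nothing, so the naming convention is worth documenting for the team.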

The final section of the pipelines configuration defines services built and added to the docker environment during execution.

definitions:
  services:
    mysql:
      image: mysql:5.7
      environment:
        MYSQL_DATABASE: 'drupal'
        MYSQL_USER: 'drupal'
        MYSQL_ROOT_PASSWORD: 'root'
        MYSQL_PASSWORD: 'drupal'

This section allows us to add a MySQL instance to the Docker environment, so our test scripts can do a complete build and installation of the Drupal environment as defined by the repository.

Scripts

The bitbucket-pipelines.yml file defines the pipelines that can be run, and in each definition it outlines scripts to run during the pipeline’s execution. In our implementation we’ve split these scripts up into three parts:

  1. build.sh – Sets up the environment and prepares us for the rest of the pipeline execution.
  2. test.sh – Runs processes to test the codebase.
  3. deploy.sh – Contains the code that builds the deployment artifact and pushes it to Acquia Cloud.

Let’s review each of these scripts in more detail.

build.sh

#!/bin/bash
apt-get update && apt-get install -o Dpkg::Options::="--force-confold" -y php7.1-bz2 curl && apt-get autoremove
curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
apt-get install -y nodejs
apt-get install -y npm
cd hive
npm install
npm install -g gulp
cd ..
composer install
mysql -u root -proot -h 127.0.0.1 -e "CREATE DATABASE IF NOT EXISTS drupal"
export PIPELINES_ENV=PIPELINES_ENV

This script takes our base container, built from our prescribed image, and starts to expand upon it. Here we make sure the container is up-to-date, install dependencies such as nodejs and npm, run npm in our frontend library to build our node_modules dependencies, and instantiate an empty database that will be used later when we perform a test install from our codebase.

test.sh

#!/bin/bash
vendor/acquia/blt/bin/blt validate:phpcs --no-interaction --ansi --define environment=ci
vendor/acquia/blt/bin/blt setup --yes  --define environment=ci --no-interaction --ansi -vvv

The test.sh file contains two simple commands. The first runs a PHP code sniffer to validate our custom code follows prescribed standards. This command also runs as a pre-commit hook during any code commit in our local environments, but we execute it again here as an additional safeguard. If code makes it into the repository that doesn’t follow the prescribed standards, a failure will be generated and the pipeline will halt execution. The second command takes our codebase and does a complete Drupal installation from it, instantiating a copy of Drupal 8 and importing the configuration contained in our repository. If invalid or conflicting configuration makes it into the repository, it will be picked up here and the pipeline will exit with a failure. This script is also where additional testing could be added, such as running Behat or other test suites to verify our evolving codebase doesn’t produce regressions.

deploy.sh

#!/bin/bash
set -x
set -e

if [ -n "${BITBUCKET_REPO_SLUG}" ] ; then

    git config user.email "[email protected]"
    git config user.name "Bitbucket Pipelines"

    git remote add deploy $DEPLOY_URL;

    # If the module is -dev, a .git file comes down.
    find docroot -name .git -print0 | xargs -0 rm -rf
    find vendor -name .git -print0 | xargs -0 rm -rf
    find vendor -name .gitignore -print0 | xargs -0 rm -rf

    SHA=$(git rev-parse HEAD)
    GIT_MESSAGE="Deploying ${SHA}: $(git log -1 --pretty=%B)"

    git add --force --all

    # Exclusions:
    git status
    git commit -qm "${GIT_MESSAGE}" --no-verify

    if [ $BITBUCKET_TAG ];
      then
        git tag --force -m "Deploying tag: ${BITBUCKET_TAG}" ${BITBUCKET_TAG}
        git push deploy refs/tags/${BITBUCKET_TAG}
    fi;

    if [ $BITBUCKET_BRANCH ];
      then
        git push deploy -v --force refs/heads/$BITBUCKET_BRANCH;
    fi;

    git reset --mixed $SHA;
fi;

The deploy.sh script takes the product of our repository and creates an artifact in the form of a separate, fully-merged Git commit. It then adds the Acquia Cloud repository as a “deploy” remote and pushes the artifact to the appropriate branch or tag in Acquia Cloud. The use of environment variables allows us to use this script both to deploy the develop branch to the Acquia Cloud repository and to deploy any tags created on the master branch, so that those tags appear in our Acquia Cloud console for use in the final deployment to our live environments. For those using BLT for local development, this script could be re-worked to use BLT’s internal artifact generation and deployment commands.
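The whole artifact dance can be reproduced locally, with a bare repository standing in for the Acquia Cloud remote (all paths and file names below are invented for the demo):

```shell
set -e
work=$(mktemp -d)
git init -q --bare "$work/cloud.git"     # stands in for $DEPLOY_URL
git init -q "$work/site"
cd "$work/site"
git config user.name ci && git config user.email ci@example.com
echo 'vendor/' > .gitignore              # build output is normally ignored
echo 'source code' > index.php
git add -A && git commit -qm "source commit"
SHA=$(git rev-parse HEAD)
mkdir vendor && echo 'built dependency' > vendor/lib.txt   # simulate a build step
git remote add deploy "$work/cloud.git"
git add --force --all                    # force-add the ignored build output
git commit -qm "Deploying ${SHA}" --no-verify
git push -q deploy --force HEAD:refs/heads/develop
git reset --mixed "$SHA"                 # local history back to the source-only commit
git -C "$work/cloud.git" ls-tree --name-only develop   # .gitignore, index.php, vendor
```

The key trick is the force-add: the deploy remote receives a tree that includes everything the build produced, while the source repository's history never records the artifact commit.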

Configuring the cloud environments

The final piece of the puzzle is ensuring that everything is in-place for the pipelines to process successfully and deploy code. This includes ensuring that environment variables used by the deploy.sh script exist in Bitbucket and that a user with appropriate permissions and SSH keys exists in your Acquia Cloud environment, allowing the pipelines process to deploy the code artifact to Acquia Cloud.

Bitbucket configuration

DEPLOY_URL environment variable

Configure the DEPLOY_URL environment variable. This is the URL to your Acquia Cloud repository.

  1. Log in to your Bitbucket repository.
  2. In the left-hand menu, locate and click on “Settings.”
  3. In your repository settings, locate the “Pipelines” section and click on “Repository variables.”
  4. Add a Repository variable:
    1. Name: DEPLOY_URL
    2. Value: The URL to your Acquia Cloud repository. You’ll find the correct value in your Acquia Cloud Dashboard.

SSH keys

Deploying to Acquia Cloud will also require giving your Bitbucket Pipelines processes access to your Acquia Cloud repository. This is done in the form of an SSH key. To configure an SSH key for the Pipelines process:

  1. In the “Pipelines” section of your repository settings we navigated to in steps 1-3 above, locate the “SSH keys” option and click through.
  2. On the SSH keys page click the “Generate keys” button.
  3. The generated “public key” will be used to provide access to Bitbucket in the next section.

Acquia Cloud configuration

For deployment to work, your Bitbucket Pipelines process will need to be able to push to your Acquia Cloud Git repository. This means creating a user account in Acquia Cloud and adding the key generated in Bitbucket above. You can create a new user or use an existing user. You can find more information on adding SSH keys to your Acquia Cloud accounts here: Adding a public key to an Acquia profile.

To finish the configuration, log back into your Bitbucket repository and retrieve the “Known hosts” fingerprint for your Acquia Cloud Git server from the same SSH keys page, so the Pipelines process will trust the host when deploying.
