Apr 12 2021

At Morpht, we have been busy experimenting and building proof-of-concept personalisation features on Drupal 8. We started off with content recommendations as one of the cogs in a personalisation machine. Recombee stood out as a great AI-powered content recommendations engine and we decided to develop some contributed modules to integrate it with Drupal.

We have implemented three modules which work together:

  • Search API Recombee indexes content and pushes it across to Recombee.
  • Recombee module tracks users and displays recommendations.
  • JSON Template module defines a plugin system for transformers and templates and controls how JSON can be transformed client side.

The video below is a demonstration of these three modules and how they combine to deliver a powerful recommendations system capable of providing content to anonymous and logged-in users alike.

[embedded content]

Download the slides from the presentation
(PDF 785KB)

Let's talk

Find out how personalisation can help you increase audience engagement and lift user experience.

Contact us

Apr 06 2021

It seems that with each passing year there is a new paradigm for how content can be arranged and organised in Drupal. Over the years a number of approaches have moved in and out of vogue: Panels, Display Suite, IPE, Bricks and Paragraphs, to name a few. Some change has been positive, providing leaps forward in flexibility or control. Other developments have not lived up to their promise.

In February 2021 I presented a new module, Layout Paragraphs, to the Sydney Meetup. The slides and video have been provided below. This presentation demonstrates Layout Paragraphs in action and how it offers some advanced layout options for Paragraphs. Conceptually it is similar to Layout Builder in many respects; however, it performs its magic on the Node Edit page, integrating with the natural content editing environment for site editors.

Layout Paragraphs offers a new way forward for the following reasons:

  • Editing happens on node edit, rather than the layouts page. Better for editors.
  • Paragraphs can be placed into Layout regions to bring more flexibility to Paragraphs. This is similar to what Bricks was doing.
  • Nicer UI for Paragraph selection.
  • Nicer UI for Paragraph display - no need for Preview view mode any more.

A bit of history

It is worth reviewing a little history to see where Layout Paragraphs fits in. The presentation takes a look at some of the popular combinations over the years and gives them overall scores, weighted by functionality and editor experience. Here is a spoiler of what is covered in the video:

Recipe | Year | Score
------ | ---- | -----
Pure template | 2010 | 65
Display Suite | 2010 | 58
Panelizer | 2012 | 69
Panelizer and Paragraphs | 2014 | 73
Panelizer and IPE | 2016 | 39
Panelizer, Bricks and Paragraphs | 2017 | 63
Layout Builder and Blocks | 2018 | 70
Layout Builder and Paragraphs | 2019 | 78
Layout Builder, Layout Paragraphs, Paragraphs | 2021 | 81

The scores were calculated from a weighted average of various aspects of the techniques: flexibility, control, editor experience, etc. Watch the video for the details.
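As a rough illustration of how such a weighted score could be computed, here is a minimal sketch in PHP. The aspect names, weights and ratings below are hypothetical and are not the actual figures from the presentation.

// Hypothetical weights (summing to 1.0) and ratings out of 100 for one recipe.
$weights = ['flexibility' => 0.4, 'control' => 0.3, 'editor_experience' => 0.3];
$ratings = ['flexibility' => 85, 'control' => 75, 'editor_experience' => 80];

// The overall score is the weighted average of the individual ratings.
$score = 0;
foreach ($weights as $aspect => $weight) {
  $score += $weight * $ratings[$aspect];
}
// $score is 80.5 in this example.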

Conclusion

You can see that Layout Paragraphs is the latest in the line of approaches and that it scores quite well. A recipe based around Layout Builder, Layout Paragraphs and Paragraphs seems to work quite well. Layout Builder remains the domain of the site builder, who uses it to define the basic layouts for the page. With Layout Paragraphs, a new set of simpler layouts can be used by the editor for their paragraphs.

I think that the approach holds a lot of promise moving forward and it is good enough for Morpht to be considering it as a standard part of our editor toolkit. All up we have found the module to be usable and a definite improvement on editor experience. We are adopting it into projects where we can.

Watch the video and let us know what you think in the comments below.

Watch the video

Apr 06 2021

How does it stack up?

Those of you who work with Drupal are probably familiar with the combination of Search API and a search backend such as MySQL or Solr. A pluggable architecture makes Search API a good choice for indexing content in Drupal.

For a long time MySQL and Solr were the popular choices. MySQL was an easy choice as performance was good and results were OK. For those working with large datasets and many concurrent facets, Solr made more sense. Most Drupal hosting companies provide it as a service for this reason. As the search market has matured, other backends have become available, including one for Sajari.

The table below compares these three options and highlights the strengths and weaknesses of each.

Feature | Database | Solr | Sajari
------- | -------- | ---- | ------
Separate service | No. Built into Drupal. | Yes. Drupal hosting companies provide Solr as a SaaS. | Yes. Sajari is available as a SaaS.
Full text search | Yes | Yes | Yes
Facets | Yes | Yes | Yes
More like this | No | Yes. A useful feature for providing item recommendations based on similarity. | No
Result quality | OK | Good | Very good
Performant | Partial. Slow with many filters over large datasets with facets. | Yes | Yes
Easy install | No. Requires a module such as Search API Database. | No. Requires a module such as Search API Solr to push data across to Solr. | Yes. Sajari can be configured in the Sajari UI to run from metadata on the page, and it provides an embeddable widget. We recommend the Search API Sajari module approach.
Search API Integration | Yes. Search API Database module. | Yes. Search API Solr module. | Yes. Search API Sajari module.
Federation | No | No | Yes. A site parameter can be passed into the index for easy filtering.
ReactJS components | No | No | Yes. The interface is faster than Search API as server round trips are not needed.
Result tracking | No | No | Yes. Built-in metrics identify page trends and poorly performing keywords, showing which searches led users to individual pages and which content visitors search for but cannot find.
Reporting | No. Reports can be set up in analytics software. | No. Reports can be set up in analytics software. | Yes. Sajari provides logs and charts of search requests.
Autocomplete (suggestions) | Yes. An extra module can be installed. | Yes. An extra module can be installed. | Yes
Synonyms | No | No | Yes. Libraries of synonyms can be uploaded via the Sajari UI.
Typos | No | No | Yes. Support for misspelled words.
Boosting | Limited | Limited | Yes. Advanced rules can be defined on certain plans.
Machine learning | No | No | Yes. Sajari learns which results are more or less relevant, promoting the best results to the top.
Pricing | Free. The database comes with Drupal hosting. | Included. A Solr server comes built in with typical Drupal hosting. | Free and up. Starts free for smaller sites and then increases. See https://www.sajari.com/pricing
Summary | An easy, low cost search solution. | A more scalable solution with handy features such as "more like this". | A fast system with smart results, helpful for those looking for synonyms, result boosting, tracking and reporting.

Sajari is a viable alternative for clients who are looking for more insight into how their audience uses the search on their site, and for more control over the delivery of the results. This applies to content driven sites as well as to ecommerce configurations where preferences play a big role.

Integrating Sajari with Drupal

The Sajari Widgets

It is possible to implement Sajari search on any website without adding modules or custom code in the backend. Sajari provides a set of widgets which allow search to operate without much technical knowledge.

Firstly, a Javascript tracking code will allow for “instant indexing”. When a user visits a page, the code fires up and tells Sajari about the page. Sajari can then visit and index the page to update its index. This approach is simple to set up but has its downsides - freshly updated or deleted content will not make it into the index immediately. If this is a concern, then using Search API Sajari, below, would be an alternative.

Secondly, Sajari offers a tool in the admin UI to define a search form and results. It covers things such as the search query, filters, tabs, result counts and result display, and it is very easy to configure. The result is a snippet that can be embedded onto your search page. A set of ReactJS components drives the search and returns results at lightning speed, leading to a good experience for users.

Drupal Module: Search API Sajari

For those looking for a tighter integration between their Drupal site and Sajari, it is possible to use the Sajari API to push updated content across. The Search API Sajari module, authored by the developers at Morpht, provides a backend for the venerable Search API module. It will update Sajari whenever content is updated on your Drupal site.

The main advantages of this approach are:

  • Content is indexed instantly, even when no one views it;
  • Deleted content is removed from the index immediately;
  • The tools within Search API allow for the fine tuning of the various fields;
  • There is support for sending a site name across in the result, allowing for federation of results.

Drupal Module: Sajari

The widgets provided by Sajari offer a quick way to get up and running with a search page. However, there are some limitations in the way they work. At the time of writing (early 2021) the widgets did not support the definition of facets.

In order to overcome this shortcoming, Morpht developed a ReactJS library which sits on top of the components provided by Sajari. It has quite a number of configuration options for queries, result counts, filters, tabs and facets. It even has the ability to customise the results through the provision of a callback function which converts the JSON result to HTML. This code is available as the Sajari Configurator.

The Sajari module makes use of the Sajari Configurator to power the way search is implemented. The module provides a block for defining how the search will operate. The configuration is then passed through to the Sajari Configurator, and the UI and results are then shown.

The Sajari module also makes use of the JSON Template module, which allows different Handlebars templates to be defined by the themer. These templates can then be selected by an editor during the block creation process. The selected template then forms the basis for the callback which is passed into the Sajari Configurator. The result is that editors can choose how to show results, with no need to alter the ReactJS templates in the library.

A recipe

If you are looking to get up and running with Sajari, we recommend this process:

  • Sign up for a free account at Sajari;
  • Set up an initial collection in Sajari, but add no fields;
  • Install JSON Template, Sajari and Search API Sajari;
  • Configure Search API Sajari with your collection details in a new Server;
  • Define your Node Index and assign it to the Sajari server you have just created. The schema will be updated automatically in Sajari with the changes you make in Drupal;
  • Confirm that content is being indexed properly;
  • Add a Sajari search block to your search page and configure it. Be sure to use the correct pipeline and get the field names right;
  • Test the search and confirm it is working.

Conclusion

Sajari is an up-and-coming search provider offering a new breed of search which can utilise human behaviour to improve the results it shows. It's useful for content heavy and ecommerce sites which have a strong need for good search results. There are now integration modules for Drupal to get you up and running with Sajari easily.

Is Sajari right for you?

If you currently have a Drupal site based on a different engine and are interested in what Sajari can offer you, please get in touch with us to discuss it further.

Apr 06 2021

A case study on how we configured the GovCMS8 SaaS platform to handle bulk uploads and the assignment of metadata.

The Attorney-General’s Department supports the technical and content management for several Royal Commissions. A common request is for the timely publishing and management of numerous documents tendered before or at public hearings - 200 documents or more in some instances. With a series of hearings taking place all around Australia, the pressure on the publishing team to manage these loads presents challenges.

The Bulk Upload solution addresses some key requirements:
    

  • Classify and sort documents in one bulk upload process,
  • Manage different file formats of the same document,
  • Run a publishing workflow to gain legal sign off before publishing,
  • Handle the versioning of files when a legal update is required,
  • Display those documents in different views on different pages: hearings, document library, and
  • Make the complex simple and work around limitations.
     
Apr 06 2021

Morpht is located on the traditional lands of the Gadigal people of the Eora Nation, the traditional custodians of this place we now call Sydney. We pay our respects to Elders both past and present and recognise Aboriginal and Torres Strait Islander people as the Traditional Custodians of the land.

Apr 06 2021

Drupal 8 is built on PHP, but using new architecture paradigms that can be difficult to grasp for developers coming from a Drupal 7 background. The Typed Data API lies at the core of Drupal 8, and provides building blocks used throughout the Drupal 8 architecture. In this presentation, Jay Friendly, Morpht's Technical Director, dives into the Typed Data API, what it is, how it works, and why it is so awesome!
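As a small taste of what the API looks like, here is a minimal sketch of creating and validating a piece of typed data; the constraint and values used are illustrative only.

use Drupal\Core\TypedData\DataDefinition;

// Define a string data type with a maximum length constraint.
$definition = DataDefinition::create('string')
  ->addConstraint('Length', ['max' => 10]);

// Wrap a value in the definition and validate it against the constraint.
$typed_data = \Drupal::typedDataManager()->create($definition, 'Hello Drupal 8');
$violations = $typed_data->validate();
// $violations reports one violation, as the value is longer than 10 characters.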

Apr 06 2021

Guzzle makes HTTP requests easy. When they work, it's like magic. However, as with all coding, getting something to work requires debugging, and this is where the Drupal implementation of Guzzle has a major usability problem - any returned messages are truncated, meaning that with the default settings, error messages that can help debug an issue are not accessible to the developer. This article will show developers how they can re-structure their Guzzle queries to log the full error to the Drupal log, instead of a truncated error that does not help fix the issue.

Standard Methodology

Generally, when making a Guzzle request, it is made using a try/catch paradigm, so that the site does not crash in the case of an error. When not using try/catch, a Guzzle error will result in a WSOD (white screen of death), which is as bad as it gets for usability. So let's take a look at an example of how Guzzle would request a page using a standard try/catch:

try {
  // Request the page using Drupal's built-in Guzzle HTTP client.
  $client = \Drupal::httpClient();
  $result = $client->request('GET', 'https://www.google.com');
}
catch (\Exception $error) {
  // Log the (truncated) exception message to the Drupal log.
  $logger = \Drupal::logger('HTTP Client error');
  $logger->error($error->getMessage());
}

This code will request the results of www.google.com, and place them in the $result variable. In the case that the request failed for some reason, the system logs the result of $error->getMessage() to the Drupal log.

The problem, as mentioned in the intro, is that the value returned from $error->getMessage() contains a truncated version of the response returned from the remote website. If the developer is lucky, the text shown will contain enough information to debug the problem, but rarely is that the case. Often the error message will look something along the lines of:

Client error: `POST https://example.com/3.0/users` resulted in a `400 Bad Request` response: {"type":"http://developer.example.com/documentation/guides/error-glossary/","title":"Invalid Resource","stat (truncated...)

As can be seen, the full response is not shown. The actual details of the problem, and any suggestions as to a solution are not able to be seen. What we want to happen is that the full response details are logged, so we can get some accurate information as to what happened with the request.

Debugging Guzzle Errors

In the code shown above, we used the catch statement to catch \Exception. Generally developers will create a class that extends \Exception, allowing users to catch specific errors, finally catching \Exception as a generic default fallback.

When Guzzle hits an error, it throws an exception that implements GuzzleHttp\Exception\GuzzleException. This allows us to catch this exception first to create our own log that contains the full response from the remote server.

We can do this because the thrown exception provides the response object from the original request, which we can use to get the actual response body the remote server sent with the error. We then log that response body to the Drupal log.

use Drupal\Component\Render\FormattableMarkup;
use GuzzleHttp\Exception\GuzzleException;

// $client is assumed to be \Drupal::httpClient(), with $method, $endpoint and
// $options defined earlier by the calling code.
try {
  $response = $client->request($method, $endpoint, $options);
}
// First try to catch the GuzzleException. This indicates a failed response from the remote API.
catch (GuzzleException $error) {
  // Get the original response.
  $response = $error->getResponse();
  // Get the info returned from the remote server.
  $response_info = $response->getBody()->getContents();
  // Using FormattableMarkup allows for the use of <pre> tags, giving a more readable log item.
  $message = new FormattableMarkup('Remote API connection error. Error details are as follows: <pre>@response</pre>', ['@response' => print_r(json_decode($response_info), TRUE)]);
  // Log the error.
  watchdog_exception('Remote API Connection', $error, $message);
}
// A non-Guzzle error occurred. The type of exception is unknown, so a generic log item is created.
catch (\Exception $error) {
  // Log the error.
  watchdog_exception('Remote API Connection', $error, t('An unknown error occurred while trying to connect to the remote API. This is not a Guzzle error, nor an error in the remote API, rather a generic local error occurred. The reported error was @error', ['@error' => $error->getMessage()]));
}

With this code, we have caught the Guzzle exception, and logged the actual content of the response from the remote server to the Drupal log. If the exception thrown was any other kind of exception than GuzzleException, we are catching the generic \Exception class, and logging the given error message.

By logging the response details, our log entry will now look something like this:

Remote API connection error. Error details are as follows:

stdClass Object (
  [title] => Invalid Resource
  [status] => 400
  [detail] => The resource submitted could not be validated. For field-specific details, see the 'errors' array.
  [errors] => Array (
    [0] => stdClass Object (
      [field] => some_field
      [message] => Data presented is not one of the accepted values: 'Something', 'something else', or 'another thing'
    )
  )
)

* Note that this is just an example, and that each API will give its own response structure.

This is a much more valuable debug message than the original truncated message, which left us understanding that there had been an error, but without the information required to fix it.

Summary

Drupal 8 ships with Guzzle, an excellent HTTP client for making requests to other servers. However, the standard debugging method doesn't provide a helpful log message from Guzzle. This article shows how to catch Guzzle errors, so that the full response can be logged, making debugging of connection to remote servers and APIs much easier.

Happy Drupaling!

Apr 06 2021

Background

We live in an age of Drupal complexity. In the early days of Drupal, many developers would have a single Drupal instance/environment (aka copy) that was their production site, where they would test out new modules and develop new functionality. Developing on the live website however sometimes met with disastrous consequences when things went wrong! Over time, technology on the web grew, and nowadays it's fairly standard to have a Drupal project running on multiple environments to allow site development to be run in parallel to a live website without causing disruptions. New functionality is developed first in isolated private copies of the website, put into a testing environment where it is approved by clients, and eventually merged into the live production site.

While multiple environments allow for site development without causing disruptions on the live production website, they introduce a new problem: how to ensure consistency between site copies so that they are all working with the correct code.

This series of articles will explore the Configuration API, how it enables functionality to be migrated between multiple environments (sites), and ways of using the Configuration API with contributed modules to effectively manage the configuration of a project. This series will consist of the following posts:

This article will focus specifically on how developers can manage, declare, and debug configuration in their custom modules.

Configuration Schema

Configuration schema describes the type of configuration a module introduces into the system. Schema definitions are used for things like translating configuration and its values, for typecasting configuration values into their correct data types, and for migrating configuration between systems. Having configuration in the system is not as helpful without metadata that describes what the configuration is. Configuration schemas define the configuration items.

Any module that introduces any configuration into the system MUST define the schema for the configuration the module introduces.

Configuration schema definitions are declared in [MODULE ROOT]/config/schema/[MODULE NAME].schema.yml, where [MODULE NAME] is the machine name of the module. Schema definitions may define one or multiple configuration objects. Let's look at the configuration schema for the Restrict IP module for an example. This module defines a single configuration object, restrict_ip.settings:

restrict_ip.settings:
  type: config_object
  label: 'Restrict IP settings'
  mapping:
    enable:
      type: boolean
      label: 'Enable module'
    mail_address:
      type: string
      label: 'Contact mail address to show to blocked users'
    dblog:
      type: boolean
      label: 'Log blocked access attempts'
    allow_role_bypass:
      type: boolean
      label: 'Allow IP blocking to be bypassed by roles'
    bypass_action:
      type: string
      label: 'Action to perform for blocked users when bypassing by role is enabled'
    white_black_list:
      type: integer
      label: 'Whether to use a path whitelist, blacklist, or check all pages'
    country_white_black_list:
      type: integer
      label: 'Whether to use a whitelist, blacklist, or neither for countries'
    country_list:
      type: string
      label: 'A colon separated list of countries that should be white/black listed'

The above schema defines the config object restrict_ip.settings which is of type config_object (defined in core.data_types.schema.yml).

When this module is enabled, and the configuration is exported, the filename of the configuration will be restrict_ip.settings.yml. This object has the keys enable, mail_address, dblog etc. The schema tells what type of value is to be stored for each of these keys, as well as the label of each key. Note that this label is automatically provided to Drupal for translation.

The values can be retrieved from the restrict_ip.settings object as follows:

$enable_module = \Drupal::config('restrict_ip.settings')->get('enable');
$mail_address = \Drupal::config('restrict_ip.settings')->get('mail_address');
$log = \Drupal::config('restrict_ip.settings')->get('dblog');
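Configuration values can also be updated programmatically through an editable configuration object. A minimal sketch, reusing the keys from the schema above (the values shown are just examples):

\Drupal::configFactory()->getEditable('restrict_ip.settings')
  ->set('enable', TRUE)
  ->set('mail_address', 'admin@example.com')
  ->save();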

Note that modules defining custom fields, widgets, and/or formatters must define the schema for those plugins. See this page to understand how the schema definitions for these various plugins should be defined.

Default configuration values

If configuration needs to have default values, the default values can be defined in [MODULE ROOT]/config/install/[CONFIG KEY].yml where [CONFIG KEY] is the configuration object name. Each item of configuration defined in the module schema requires its own YML file to set defaults. In the case of the Restrict IP module, there is only one config key, restrict_ip.settings, so there can only be one file to define the default configuration, restrict_ip/config/install/restrict_ip.settings.yml. This file will then list the keys of the configuration object, and the default values. In the case of the Restrict IP module, the default values look like this:

enable: false
mail_address: ''
dblog: false
allow_role_bypass: false
bypass_action: 'provide_link_login_page'
white_black_list: 0
country_white_black_list: 0
country_list: ''

 

As can be seen, each of the mapped keys of the restrict_ip.settings config_object in the schema definition are added to this file, with the default values provided for each key. If a key does not have a default value, it can be left out of this file. When the module is enabled, these are the values that will be imported into active configuration as defaults.

Debugging Configuration

When developing a module, it is important to ensure that the configuration schema accurately describes the configuration used in the module. Configuration can be inspected using the Configuration Inspector module. After enabling your custom module, visit the reports page for the Configuration Inspector at /admin/reports/config-inspector, and it will list any errors in configuration.

The Configuration Inspector module listing errors in configuration schema definitions

Clicking on 'List' for items with errors will give more details as to the error.

The 'enable' key has an error in schema. The stored value is a boolean, but the configuration definition defines a string

Using the Configuration Inspector module, you can find where you have errors in your configuration schema definitions. Cleaning up these errors will correctly integrate your module with the Configuration API. In the above screenshot, the type of data in the active schema is a boolean, yet the configuration schema defines it as a string. The solution is to change the schema definition to a boolean.

Summary

In this final article of this series on the Drupal 8 Configuration API, we looked at configuration schema, how developers can define this schema in their modules and provide defaults, as well as how to debug configuration schema errors. Hopefully this series will give you a fuller understanding of what the Configuration API is, how it can be managed, and how you can use it effectively in your Drupal projects. Happy Drupaling!

Apr 06 2021

Background

We live in an age of Drupal complexity. In the early days of Drupal, many developers would have a single Drupal instance/environment (aka copy) that was their production site, where they would test out new modules and develop new functionality. Developing on the live website however sometimes met with disastrous consequences when things went wrong! Over time, technology on the web grew, and nowadays it's fairly standard to have a Drupal project running on multiple environments to allow site development to be run in parallel to a live website without causing disruptions. New functionality is developed first in isolated private copies of the website, put into a testing environment where it is approved by clients, and eventually merged into the live production site.

While multiple environments allow for site development without causing disruptions on the live production website, they introduce a new problem: how to ensure consistency between site copies so that they are all working with the correct code.

This series of articles will explore the Configuration API, how it enables functionality to be migrated between multiple environments (sites), and ways of using the Configuration API with contributed modules to effectively manage the configuration of a project. This series will consist of the following posts:

Part 1 gives the background of the Configuration API and discusses some terminology used within this article, Part 2 describes how the API works, and Part 3 explains how to use the functionality provided by core, so they are worth a read before beginning this article.

Read-only configuration

In some situations, site builders may want to prevent any configuration changes from being made on the production environment, preventing changes that may cause unexpected issues. For example, clients with admin access could log into the production server, and make what they think is an innocent configuration change, that results in unexpected and drastic consequences. Some site builders consider it to be a best practice to prevent configuration changes on the production server altogether, under the idea that only content should be editable on the production server, and configuration changes should only be made in development and/or staging environments before being tested and pushed to production.

The Config Readonly module allows configuration changes through the UI to be disabled on a given environment. It does this by disabling the submit buttons on configuration pages. The module also disables configuration changes using Drush and Drupal Console.

A configuration form that has been disabled with the Configuration Readonly module

Note: some configuration forms may still be enabled when using this module. Module developers must build their forms by extending ConfigFormBase for the Config Readonly module to do its magic. If the developer has built the form using other means, the form will not be disabled, and the configuration for that form can be changed through the admin UI.

To set up an environment as read-only, add the following line to settings.php, then enable the module:

$settings['config_readonly'] = TRUE;

After an environment is set as read-only, changes to configuration can only be made on other environments, then migrated and imported into the active configuration on the read-only environment.

Complete split (blacklist) configuration

Sometimes configuration needs to exist on some environments, but not exist in other environments. For example, development modules, like the Devel module, or UI modules like Views UI (Drupal core) and Menu UI (Drupal core) should not be enabled on production environments, as they add overhead to the server while being unnecessary since the production server should not be used for development.

A problem arises when configuration is exported from one environment, and imported into the production environment. All the configuration from the source environment is now the active configuration on the production environment. So any development modules that were enabled on the source environment are now enabled on the production environment. In the case of development modules like Devel, this may only add some overhead to the server, but imagine a module like the Shield module, which sets up environments to require a username and password before even accessing the site. If this module is accidentally enabled upon import on production, it will block the site from public access - a disaster!

The solution to this situation is to blacklist configuration. Blacklisted configuration is removed from the configuration upon export. This functionality is provided by the Configuration Split module. This module allows for blacklisting configuration by module, by individual configuration key(s), and/or by wildcard.

Note that more detailed directions for creating blacklists can be found on the documentation page. The following is meant to give an overview of how black lists work.

Blacklists are created as part of a configuration profile. Configuration profiles allow for 'splitting' (a divergence in) configuration between environments. Profiles may be created for environment types such as development, staging and production, allowing for configuration specific to those types of environments. Or profiles could be set up for public non-production environments that would have the Shield module enabled and configured. While a development profile may apply to all development environments, not all development environments are on publicly accessible URLs, and therefore may not need the Shield module enabled.

When setting up a configuration profile, note that the folder name must be the same as the machine_name of the profile.

Configuration split profile settings

Note that you must manually create the folder specified above, and that folder can and should be tracked using Git, so it can be used on any environment that enables the profile.

Configuration can then be set up to be blacklisted either by module, by configuration key, or by wildcard:

Complete split (blacklist) can be set by module, configuration key, or by wildcard

Finally, environments need to be set up to use a given profile. This is handled by adding the following line to settings.php on the environment:

$config['config_split.config_split.PROFILEMACHINENAME']['status'] = TRUE;

Where PROFILEMACHINENAME is the machine_name from the profile you created.

Although blacklisted configuration does not become part of the exported archive, it is not ignored altogether. When an environment has the profile enabled, upon export, blacklisted configuration is extracted, then written to the folder specified in the profile. The remaining configuration is written to the default configuration directory. When importing configuration, environments with the profile enabled will first retrieve the configuration from the default configuration directory, then apply any configuration from the folder specified in the profile. Environments not set up to use the profile ignore the configuration in the blacklisted directory altogether on both import and export.

This means that a developer can enable the Devel module on their local environment, blacklist it, then export their configuration. The blacklisted configuration never becomes part of the default configuration, and therefore the module will not accidentally be installed on environments with the configuration profile enabled.

Conditional split (grey list) configuration

Grey lists, also provided by the Configuration Split module, allow for configuration to differ by environment. With a blacklist (previous section), the configuration only exists in the active database configuration for environments that are set up to use the configuration profile containing the blacklisted configuration. With a grey list, the configuration exists in the active configuration in all environments, but the configuration profiles can be set up to allow environments to use differing values for the configuration.

Imagine an integration with a remote API requiring a username, password, and endpoint URL. The production server needs to integrate with the remote API's production instance, while other environments will integrate with the remote API's sandbox instance. As such, the values to be used will differ by environment:

Production Environment:

remoteapi.username: ProductionUsername
remoteapi.password: ProductionPassword
remoteapi.endpoint: https://example.com/api

Other Environments:

remoteapi.username: SandboxUsername
remoteapi.password: SandboxPassword
remoteapi.endpoint: https://sandbox.example.com/api

A grey list allows for the setup of these values by configuration profile.

You may remember that Part 3 of this series discussed overriding configuration in settings.php, and be thinking that a grey list sounds like the same thing. After all, the default values for the sandbox instance of the API could be set up as the configuration values, and the production values could be overridden in settings.php on the production environment, with the same end result.

The difference is that with a grey list, the remote API values are saved to the configuration profile folder, which is tracked by Git, and therefore can be tracked and migrated between environments. When grey listed configuration is exported, the grey listed configuration is written to the configuration profile folder, in the same manner as blacklisted configuration. When configuration is imported, the default values are retrieved, and the grey list values are used to override the default values, after which the configuration is imported into active configuration.

With the configuration override method using settings.php, site builders need to store the various configuration values somewhere outside the project, communicating environment-specific configuration values to each other through some means, to be manually entered on the relevant environment(s). With a grey list, the configuration values are managed with Git, meaning site builders do not need to record them outside the project, nor communicate them to each other through some other means. Site builders simply need to enable the relevant configuration profile in settings.php, and the environment-specific values can then be imported into active configuration from the configuration profile directory. This means that the sandbox API values can be set up as the values used by default on all environments, and a production configuration profile can be enabled on the production environment using the values to connect to the production instance of the remote API.

Conditional split items can be selected either from a list, or by manually entering them into the configuration profile:

Conditional split (grey list) settings can be selected or manually entered

Finally, note that grey lists can actually be used in conjunction with configuration overrides in settings.php. Grey lists are applied during import and export of configuration from the database. Values in settings.php are used at runtime, overriding any active configuration. So a developer could choose to set up their local instance of the system to connect to an entirely different instance of the remote API altogether by overriding the values in settings.php.
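As a rough sketch of what such a runtime override could look like, assuming the values above live in a configuration object named remoteapi.settings (the object name and values here are hypothetical), a developer's settings.php might contain:

// settings.php on a local development environment.
// These overrides apply at runtime only and are never exported with configuration.
$config['remoteapi.settings']['username'] = 'LocalUsername';
$config['remoteapi.settings']['password'] = 'LocalPassword';
$config['remoteapi.settings']['endpoint'] = 'https://dev.example.com/api';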

Ignoring configuration (overwrite protection)

Sometimes developers will want to protect certain configuration items in the database from ever being overwritten. For example imagine a site named Awesome Site, with a module that supplies the core of the site, named awesome_core. Since this module provides the core functionality of the site, it should never be disabled under any circumstances, as that would disable the core functionality of the site. In this case, the configuration for this module can be set to be 'ignored'. Any attempts to import ignored configuration from the file system to the active configuration in database will be skipped, and not imported.

Configuration can be ignored using the Config Ignore module. The functionality this module provides is similar to the functionality provided by the Config Readonly module discussed earlier, however the Config Readonly module covers the entire configuration of an environment, while the Config Ignore module allows for choosing configuration that should be protected. This configuration is protected by ignoring it altogether on import.

Configuration can be ignored as follows:

  1. Enable Config Ignore module on all environments.
  2. Navigate to the config ignore UI page, and set the configuration item to be ignored. In the case of preventing the awesome_core module from being disabled, the following would be added:
    core.extension:module.awesome_core

    Configuration to be ignored is entered one item per line. Wildcards can be used.

This setting will ensure that any attempts to change or remove core.extension:module.awesome_core upon configuration import will be ignored. So if the module is enabled on production, and a developer pushes configuration changes that would uninstall this module, those changes will be ignored, and the module will still be set as enabled after import.

Summary

In this article, we looked at various modules that extend the Configuration API, the use cases behind these modules, and how the modules work. We looked at the Config Readonly module, the Configuration Split module, and the Config Ignore module, and how to use these modules to manage configuration differences between environments. In the fifth and final part of this series, we will look at configuration management for module developers, and how developers can define the schema for the configuration in modules they develop.

Apr 06 2021

Background

We live in an age of Drupal complexity. In the early days of Drupal, many developers would have a single Drupal instance/environment (aka copy) that was their production site, where they would test out new modules and develop new functionality. Developing on the live website however sometimes met with disastrous consequences when things went wrong! Over time, technology on the web grew, and nowadays it's fairly standard to have a Drupal project running on multiple environments to allow site development to be run in parallel to a live website without causing disruptions. New functionality is developed first in isolated private copies of the website, put into a testing environment where it is approved by clients, and eventually merged into the live production site.

While multiple environments allow for site development without causing disruptions on the live production website, they introduce a new problem: how to ensure consistency between site copies so that they are all working with the correct code.

This series of articles will explore the Configuration API, how it enables functionality to be migrated between multiple environments (sites), and ways of using the Configuration API with contributed modules to effectively manage the configuration of a project. This series will consist of the following posts:

Part 1 gives the background of the Configuration API, as well as discusses some terminology used within this article, so it's worth a read before beginning this article.

Active configuration is in the database

In Drupal 8, configuration used at runtime is stored in the database. The values in the database are known as active configuration. In Drupal 7, configuration was known as settings, and stored in the {variable} table. In Drupal 8, configuration is stored in the {config} table. The active configuration is used at runtime by Drupal when preparing responses.

Configuration is backed up to files

The Configuration API enables the ability to export the active database configuration into a series of YML files. These files can also be imported into the database. This means that a developer can create a new Field API field on their local development environment, export the configuration for the new field to files, push those files to the production environment, then import the configuration into the production environment's active configuration in the database.

The configuration values in the database are the live/active values, used by Drupal when responding to requests. The YML files that represent configuration are not required, and are not used at runtime. In fact, in a new system the configuration files don't even exist until/unless someone exports the active configuration from the database. The configuration files are a means to back up and/or migrate configuration between environments. Configuration files are never used at runtime on a site.
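To illustrate the distinction, runtime reads always come from the active configuration in the database, while the exported files live in the sync storage. A minimal sketch, using the core system.site configuration as an example:

// Active configuration, read from the database at runtime.
$site_name = \Drupal::config('system.site')->get('name');

// The same configuration object as last exported to the sync directory.
// This is only a backup; it may be stale, or absent if configuration has never been exported.
$exported = \Drupal::service('config.storage.sync')->read('system.site');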

Configuration architecture

Let's look at the Configuration API on a more technical level, using a real-world example. The Restrict IP module allows users to set a list of rules that whitelist or blacklist users based on their IP address. Upon visiting the module settings page, users are presented with a checkbox that allows them to enable/disable the module functionality.

From a data standpoint, checkboxes are booleans; they represent either a true or false value. When exporting the configuration of a site with the Restrict IP module enabled, the relevant configuration key will be saved with a value of either true or false to a .yml file. Modules are required to define the schema for any configuration the module creates. Developers can look at the configuration schema declarations to understand what file(s) will be created, and what values are accepted.

Modules declare the schema for their configuration in the [MODULE ROOT]/config/schema directory. In the case of the Restrict IP module, the schema file is restrict_ip/config/schema/restrict_ip.schema.yml. This file contains the following declaration:

restrict_ip.settings:
  type: config_object
  label: 'Restrict IP settings'
  mapping:
    enable:
      type: boolean
      label: 'Enable module'

Schema declarations tell the system what the configuration looks like. In this case, the base configuration object is restrict_ip.settings, from the first line. When this configuration is exported to file, the file name will be restrict_ip.settings.yml. In that file will be a declaration of either:

enable: true

Or:

enable: false

When the file restrict_ip.settings.yml is imported into the active configuration in another environment's database, the value for the enable key will be imported as defined in the file.

On top of this, enabled modules are listed in core.extension.yml, which is the configuration that tracks which modules are enabled in a given environment. When the Restrict IP module is enabled in one environment, and configuration files exported from that environment are imported into a different Drupal environment, the Restrict IP module will be enabled due to its existence in core.extension.yml, and the setting enable will have a value of either true or false, depending on what the value was exported from the original environment.

Note that if you were to try to import the configuration without having the Restrict IP module in the codebase, an error will be thrown and the configuration import will fail with an error about the Restrict IP module not existing.

Summary

In this article, we looked at how the Drupal 8 Configuration API works on a technical level. We looked at how active configuration lives in the database, and can be exported to files which can then be imported back into the database, or migrated and imported to other Drupal environments. In part 3 of the series, Using the API, we will look at how to actually use the Configuration API, as well as some contributed modules that extend the functionality of the Configuration API, allowing for more effective management of Drupal 8 projects.

Feb 26 2021

Bright vivid colours

While this trend might be counteracted by another - brutalism - you’ll find a glut of sites brandishing a set of bright colours emboldened further with soft gradients. This of course poses some accessibility challenges with text overlapping such backgrounds. Check out Stripe.

Animated background

The more courageous of the brands push their vivid colours even further with background gradient animations. Check out Qoals.

Glassmorphism

Peering through glass-like interface elements might hark back to Windows Vista times, and more recently to the latest builds of iOS. UI designers are taking it further with 'glassy' overlays to help text become a bit more accessible over gradients and bright backgrounds. Check out DesignCode.

Other trends to get you inspired

Stay tuned

We’ll keep you updated as we release some of these and more on Convivial.
