Jan 07 2024

Drupal has a quick and convenient way of creating ajax dialogs that gives users the ability to embed links in content that open up dialog boxes when clicked. This is a useful way of presenting a bit of content to a user without them navigating away from the page.

I have previously written in detail about creating ajax dialogs in Drupal, and I refer back to that article quite often when the need arises.

The simplest way of creating an ajax dialog is by adding the class "use-ajax" and the data-dialog-type attribute, which can be one of dialog, dialog.off_canvas, dialog.off_canvas_top, or modal. The "use-ajax" class tells Drupal that this is an ajax link and that the click should be intercepted to perform an ajax request.

<a href="/node/1" class="use-ajax" data-dialog-type="modal">node in dialog</a>

You can also inject options into the HTML to configure the dialog. For instance, if we wanted to set the width of the dialog to be 70% of the screen size then we would add the data-dialog-options attribute to the link.

<a href="/node/1" class="use-ajax" data-dialog-type="dialog" data-dialog-options='{"width":"70%"}'>node in dialog</a>

Both of these links will open the page at "/node/1" in a dialog window instead of taking the user to the new page.

I recently had a requirement on a site that needed a dialog to re-open if the page was bookmarked or shared with another user. The page in question had a number of dialogs that presented short form content to the user without them needing to reload the page. By default, there is no state set when opening a dialog so I needed to add extra functionality to the dialog system to provide this feature.

I found that the best way to add this state was by appending a hash value like "#/node/123" to the end of the URL. This meant that as the page loaded I could look for this hash value and load the ajax dialog for the user. Unlike query parameters, hash values are ignored by Drupal, so I wouldn't have to worry about filtering them out or causing unintended side effects.

The first step was to add a custom JavaScript library called "mymodule/node_modal" to every page load using the hook_page_attachments() hook.

/**
 * Implements hook_page_attachments().
 */
function mymodule_page_attachments(array &$attachments) {
  $attachments['#attached']['library'][] = 'mymodule/node_modal';
}

The node_modal library has a pretty simple structure. We just want to inject a bit of custom JavaScript code into the page and ensure that the jQuery and Drupal ajax libraries are present on the page as well.

node_modal:
  version: 1.0
  js:
    js/node_modal.js: {}
  dependencies:
    - core/jquery
    - core/drupal.ajax

The JavaScript library contains a bit of complexity, so I'll break this down bit by bit.

The first thing to do is to create a "Drupal.behaviors" area that will contain all of the custom JavaScript.

(function nodeModalControl($, Drupal) {
  'use strict';

  Drupal.behaviors.nodeModalControl = {
    attach(context, settings) {

      // All JavaScript goes here.

    }
  };
})(jQuery, Drupal);

When the page is first loaded we need to go through all the elements on the page that have the data-dialog-type attribute and add a click event to each. This click event will append the path of the link that was clicked to the end of the URL as a hash value.

// Find all modal links on the page and attach a click event to them.
const modalLinks = document.querySelectorAll('[data-dialog-type]');
for (let i = 0; i < modalLinks.length; i++) {
  modalLinks[i].addEventListener('click', function openDialogClick(event) {
    // When the link is clicked, add the path to the URL as a hash value.
    history.pushState('', '', `#${this.pathname}`);
    event.preventDefault();
  });
}

With this in place, when a user is on the URL "/node/2" and clicks on a link that looks like this.

<a href="/node/1" class="use-ajax" data-dialog-type="dialog" data-dialog-options='{"width":"70%"}'>node in dialog</a>

The URL will change to "/node/2#/node/1" before the dialog is opened. This gives us a convenient way of storing which dialog box is currently open, and the state can be shared with other users by simply sending them the URL.

Of course, we don't want to keep the hash in the URL so we need a mechanism to remove this once the dialog has been closed. Thankfully, the dialog comes with a number of events that we can use to trigger our own code. When a dialog closes the event "dialog:afterclose" is triggered, which we can then listen to and reset the URL back to its original state.

// When the dialog is closed then remove the hash from the URL.
$(window).on('dialog:afterclose', (e, dialog, $element) => {
  history.pushState("", document.title, window.location.pathname + window.location.search);
});

The "window.location.pathname" property contains the current path of the page and the "window.location.search" property contains any query parameters that might exist on the page. By using both we preserve any functionality that might depend on query strings being present in the page.

Finally, we need a way of detecting the presence of our hash value on page load and triggering our dialog to appear.

This block of code provides this functionality. Here, we detect the presence of the hash in the URL, extract it, and then use the Drupal.ajax() function to trigger the dialog. The settings we pass to Drupal.ajax() are essentially the same options we used for the original dialog link, just as a settings object.

// Run once the document has loaded.
once('init-once', context === document ? 'html' : context)
  .forEach(function initOnce(doc) {
    if (context.hasOwnProperty('location') === false) {
      // If the context has no location property then we are inside a modal
      // window, so there is nothing to do.
      return;
    }
    if (context.location.hash !== '') {
      // Extract the hash value from the URL.
      const hash = location.hash.substring(1);
      // Create the settings required for the ajax callback.
      const ajaxSettings = {
        'url': `${hash}`,
        'dialogType': 'dialog',
        'dialog': {
          'width': '70%'
        },
      };
      // Create the ajax callback object and execute it.
      const modalAjaxObject = Drupal.ajax(ajaxSettings);
      modalAjaxObject.execute();
    }
  });

Critically important here is checking whether the current context has a location property. This is needed because the behavior runs on every page load, including any ajax events that might be triggered. If the context does not have a location then we are looking at the HTML fragment being loaded inside the ajax dialog and can ignore it. Without this simple check in place the dialog would trigger recursively until the browser ran out of memory.

With this JavaScript in place, a user visiting the page with a hash link to another item of content will now be presented with an ajax dialog.

Note that this only works with node links. If you want to allow this for different types of content or custom modal links then you'll need to change the 'url' setting you pass to the Drupal.ajax() function and create some sort of controller that will react to the URLs being passed to it.

One limitation of this approach is that any path can be appended to the URL to force it to load onto the page. Drupal's permissions system is still enforced here, so it's not actually possible to load protected content in this way, but you might see an error message stating "Oops, something went wrong. Check your browser's developer console for more details." if there was an error when loading the page. Cross-Origin Resource Sharing (CORS) rules prevent arbitrary full URLs from being passed as the hash value as well, although the same error is produced.

In the project I created I solved these issues by using a custom controller to listen to the dialog callbacks, which simplifies the ajax settings a little since the dialog options are set in the controller.

// Run once the document has loaded.
once('init-once', context === document ? 'html' : context)
  .forEach(function initOnce(doc) {
    if (context.hasOwnProperty('location') === false) {
      // If the context has no location property then we are inside a modal
      // window, so there is nothing to do.
      return;
    }
    if (context.location.hash !== '') {
      // Extract the hash value from the URL.
      const hash = location.hash.substring(1);
      // Create the settings required for the ajax callback.
      const ajaxSettings = {
        'url': `/some/ajax/endpoint/${hash}`
      };
      // Create the ajax callback object and execute it.
      const modalAjaxObject = Drupal.ajax(ajaxSettings);
      modalAjaxObject.execute();
    }
  });

Using this mechanism I could control how the ajax request was responded to, and could therefore validate the input and control what sort of data was returned from the request. This did mean that the ajax dialog links needed to point to a different URL, and they no longer use the "data-dialog-type" attribute since the link always returns an ajax dialog. Instead, the links just have a class that can be easily added to generate the dialog link. To avoid any confusion, I abstracted the creation of the links away from the users so that they didn't need to worry about the implementation details.
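
As an illustration, a controller for an endpoint like this could return an ajax command that opens the dialog directly. This is a minimal sketch only; the module, class, and method names are assumptions, and a real implementation would include the input validation discussed above.

namespace Drupal\mymodule\Controller;

use Drupal\Core\Ajax\AjaxResponse;
use Drupal\Core\Ajax\OpenModalDialogCommand;
use Drupal\Core\Controller\ControllerBase;
use Drupal\node\NodeInterface;

/**
 * Hypothetical controller for the /some/ajax/endpoint/{node} route.
 */
class NodeModalController extends ControllerBase {

  /**
   * Returns an ajax response that opens the node in a dialog.
   */
  public function dialog(NodeInterface $node): AjaxResponse {
    // Render the node in a view mode suited to dialog display.
    $content = $this->entityTypeManager()->getViewBuilder('node')->view($node, 'teaser');
    $response = new AjaxResponse();
    // The dialog options mirror those used in the JavaScript above.
    $response->addCommand(new OpenModalDialogCommand($node->label(), $content, ['width' => '70%']));
    return $response;
  }

}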

Dec 24 2023

Routes in Drupal can be altered as they are created, or even changed on the fly as the page request is being processed.

In addition to a routing system, Drupal has a path alias system where internal routes like "/node/123" can be given SEO friendly paths like "/about-us". When the user visits the site at "/about-us" the path will be internally re-written to allow Drupal to serve the correct page. Modules like Pathauto will automatically generate the SEO friendly paths using information from the item of content, without the user having to remember to enter it themselves.

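As a quick illustration of this two-way translation, the path_alias.manager service can convert paths in both directions. This is a minimal sketch; the paths used are hypothetical examples.

// Look up the user-facing alias for an internal path.
$alias = \Drupal::service('path_alias.manager')->getAliasByPath('/node/123');
// $alias is now "/about-us" (assuming that alias exists).

// Reverse the process to find the internal path for an alias.
$path = \Drupal::service('path_alias.manager')->getPathByAlias('/about-us');
// $path is now "/node/123".
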
This mechanism is made possible by Drupal's internal path processing system. When Drupal receives a request it will pass the path through one or more path processors to allow them to change it to another path (which might be an internal route). When generating a link to a page the process is reversed, with the path processors turning internal routes back into user-facing paths.

It is possible to alter a route in Drupal using a route subscriber, but using path processors allows us to change or mask the route or path of a page in a Drupal site without actually changing the internal route itself.

In this article we will look at what types of path processor are available, how to create your own, what sort of uses they have in a Drupal site, and anything else you should look out for when creating path processors.

Types Of Path Processor

Path processors are managed by the Drupal class \Drupal\Core\PathProcessor\PathProcessorManager. When you add a path processor to a site this is the class that manages the processor order and calls each processor in turn.

There are two types of path processor available in Drupal:

  • Inbound - Processes an inbound path and allows it to be altered in some way before being processed by Drupal. This usually occurs when a user sends a request to the Drupal site to visit a page. Inbound path processors can also be triggered by certain internal processes, for example, when using a path validator. The path validator will pass the path to the inbound path processor in order to change it to ensure that it has been processed correctly.
  • Outbound - An outbound path is any path for which Drupal generates a URL. The outbound path processor will be called in order to change the path so that the URL can be generated correctly.

Basically, the inbound processor is used when responding to a path, and the outbound processor is called when rendering a path.

Let's go through a couple of examples of each to show how they work.

Creating An Inbound Processor

To register an inbound service with Drupal you need to create a service with a tag of path_processor_inbound, which can optionally include a priority. This lets Drupal know that this service must be used when processing inbound paths.

It is normal for path processor classes to be kept in the "PathProcessor" directory in your custom module's "src" directory.

services:
  mymodule.path_processor_inbound:
    class: Drupal\mymodule\PathProcessor\InboundPathProcessor
    tags:
      - { name: path_processor_inbound, priority: 20 }

The priority you assign to the path_processor_inbound tag will depend on your setup. The internal inbound processor that handles path aliases in Drupal has a priority of 100, and processors with a higher priority run first, so a priority of less than 100 means that your processing is performed after Drupal's alias handling has run.

The InboundPathProcessor class we create must implement the \Drupal\Core\PathProcessor\InboundPathProcessorInterface interface, which requires a single method called processInbound() to be added to the class. Here are the arguments for that method.

  • $path - This is a string for the path that is being processed, with a leading slash.
  • $request - In addition to the path, the request object is also passed to the method. This allows us to perform any additional checks on query strings on the URL or other parameters that may have been added to the request.

The processInbound() method must return the processed path as a string (with the leading slash). If we don't want to alter the path then we need to return the path that was passed to the method.

To create a simple example, let's make sure that when a user visits the path "/some-random-path" we translate this internally to "/node/1". Note that "/some-random-path" is not a stored path alias; the translation happens entirely inside the path processor. In this example, if the path passed into the method isn't our required path then we just return it unchanged, effectively ignoring every path but the one we are looking for.

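A minimal processInbound() implementation for this example might look something like the following sketch, assuming a class that implements the interface described above.

  public function processInbound($path, Request $request): string {
    if ($path === '/some-random-path') {
      // Serve the page at /node/1 for our custom path.
      return '/node/1';
    }
    // Every other path is returned unchanged.
    return $path;
  }
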
Now, when the user visits the path "/some-random-path" they will see the output of the page at "/node/1". It is still possible to view the page at "/node/1" and see the output, so we have just created a duplicate path for the same page.

This is a simple example to show how the processInbound() method works; we'll look at a more concrete example later.

Creating An Outbound Processor

The outbound processor is defined in a similar way to the inbound processor, but in this case we tag the service with the tag path_processor_outbound.

services:
  mymodule.path_processor_outbound:
    class: Drupal\mymodule\PathProcessor\OutboundPathProcessor
    tags:
      - { name: path_processor_outbound, priority: 250 }

The priority of the path_processor_outbound tag is more or less the opposite of the inbound processor, in that you'll generally want your outbound processing to happen later in the call stack. The internal alias handling for outbound paths in Drupal runs at a priority of 300, so setting our priority to 250 means that we process our outbound links after Drupal has created any aliases.

The OutboundPathProcessor class we create must implement the \Drupal\Core\PathProcessor\OutboundPathProcessorInterface interface, which requires a single method called processOutbound() to be added to the class. Here are the arguments for that method.

  • $path - This is a string for the path that is being processed, with a leading slash.
  • $options - An associative array of additional options, which includes things like "query", "fragment", "absolute", and "language". These are the same options that get sent to the URL class when generating URLs and allow us to update the outbound path based on the passed options.
  • $request - The current request object is also sent to the method, allowing us to make decisions based on the parameters of the current request.
  • $bubbleable_metadata - An optional object that collects the path processors' bubbleable metadata so that we can pass cache information upstream.

The processOutbound() method must return the new path, with a starting slash. If we don't want to change the path then we just return the path that was sent to us, otherwise we can make any change we require and return this string.

Taking the simple example from the inbound processor further, let's change the path "/node/1" to "/some-random-path". In this example we are looking for the internal path "/node/1", and if we see this path then we return our new path.

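A minimal processOutbound() implementation for this example might look something like this sketch, matching the interface arguments described above.

  public function processOutbound($path, &$options = [], Request $request = NULL, BubbleableMetadata $bubbleable_metadata = NULL): string {
    if ($path === '/node/1') {
      // Render links to /node/1 using our custom path.
      return '/some-random-path';
    }
    // Every other path is returned unchanged.
    return $path;
  }
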
With this in place, when Drupal prints out a link to "/node/1" it will render the path as "/some-random-path".

On its own this example doesn't do much; we are just rewriting a path for a single page. The real power is when we combine inbound processing and outbound processing together. Let's do just that.

Creating A Single Class For Path Processing

It is possible to combine the inbound and outbound processors into a single class by adding both tags to a single service definition in the module's services file.

services:
  mymodule.path_processor:
    class: Drupal\mymodule\PathProcessor\MyModulePathProcessor
    tags:
      - { name: path_processor_inbound, priority: 20 }
      - { name: path_processor_outbound, priority: 250 }

The class we create from this definition implements both the InboundPathProcessorInterface and the OutboundPathProcessorInterface, and as such it includes both the processInbound() and processOutbound() methods.

Now all you need to do is add in your path processing.

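As a sketch, the combined class simply brings the two earlier examples together in one place.

namespace Drupal\mymodule\PathProcessor;

use Drupal\Core\PathProcessor\InboundPathProcessorInterface;
use Drupal\Core\PathProcessor\OutboundPathProcessorInterface;
use Drupal\Core\Render\BubbleableMetadata;
use Symfony\Component\HttpFoundation\Request;

class MyModulePathProcessor implements InboundPathProcessorInterface, OutboundPathProcessorInterface {

  public function processInbound($path, Request $request): string {
    // Translate the incoming custom path to the internal route.
    if ($path === '/some-random-path') {
      return '/node/1';
    }
    return $path;
  }

  public function processOutbound($path, &$options = [], Request $request = NULL, BubbleableMetadata $bubbleable_metadata = NULL): string {
    // Translate the internal route back to the custom path when rendering links.
    if ($path === '/node/1') {
      return '/some-random-path';
    }
    return $path;
  }

}
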
It's a good idea to create a construct like this so that you translate the path going into and coming out of Drupal. This creates a consistent path model and prevents duplicate content issues where different pages have the same path.

The Redirect Module

If you are planning to use the inbound path processor system then you should be aware that the Redirect module will attempt to redirect your custom inbound paths to the canonical paths of the pages they resolve to. The Redirect module is a great module, and I install it on every Drupal site I run, but in order to prevent this redirect you'll need to do something extra, which we'll go through in this section.

To prevent the Redirect module from redirecting a path you need to add the attribute _disable_route_normalizer to the request before the kernel.request event is handled by the Redirect module's RouteNormalizerRequestSubscriber class. We do this by creating our own event subscriber and giving it a higher priority.

The first thing to do is add our event subscriber to our custom module services.yml file.

services:
  mymodule.prevent_redirect_subscriber:
    class: Drupal\mymodule\EventSubscriber\PreventRedirectSubscriber
    tags:
      - { name: event_subscriber }

The event subscriber itself just needs to listen to the kernel.request event, which is stored in the KernelEvents::REQUEST constant. We need our custom subscriber to trigger before the Redirect module's subscriber, so we set the priority of the event to 40. This is higher than the Redirect module event, which is set at 30.

All the event subscriber needs to do is listen for our path and then set the _disable_route_normalizer attribute to the route if it is detected.

class PreventRedirectSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents(): array {
    // Priority 40: run before the Redirect module subscriber (priority 30).
    return [KernelEvents::REQUEST => ['onKernelRequest', 40]];
  }

  public function onKernelRequest(RequestEvent $event): void {
    if ($event->getRequest()->getPathInfo() === '/en/some-random-path') {
      $event->getRequest()->attributes->set('_disable_route_normalizer', TRUE);
    }
  }

}

When the Redirect module event triggers it will see this attribute and ignore the redirect.

This extra step is only needed if you are changing the path of an entity using only the inbound path processor. Creating only the inbound processor creates an imbalance between the outer path and the translated inner path, which we then need to tell the Redirect module about in order to prevent the redirect. If we also translated the outbound path in the same (and opposite) way then the redirect wouldn't occur.

Doing Something Useful

We've looked at swapping paths and preventing redirects, but let's do something useful with this system.

I was recently tasked with creating a module that would allow any page to be rendered as RSS. It wasn't that we needed an RSS feed, but that each individual page should have an RSS version available.

This was required because there was an integration with an external system that pulled information out of the Drupal site for newsletters. Having RSS versions of pages made it much easier for the system to parse the content of the page and produce the newsletter. It also meant that if the theme changed the system wouldn't be affected, as it wasn't using the theme of the site.

Essentially, the requirement meant that we needed to add "/rss" after any page on the site and it would render the page accordingly.

The resulting module was dubbed "Node RSS" and made extensive use of path processors to produce the result.

The first step was to create a controller that would react to paths like "/node/123/rss" and render the page as an RSS feed. This required a simple route being set up to allow Drupal to listen to that path and also to inject the current node object into the controller. The route also contains a simple permission, which provided a convenient way of activating the system when it was ready.

node_rss.view:
  path: '/node/{node}/rss'
  defaults:
    _title: 'RSS'
    _controller: '\Drupal\node_rss\Controller\NodeRssController::rssView'
  requirements:
    _permission: 'node.view all rss feeds'
    node: \d+
  options:
    parameters:
      node:
        type: entity:node

The rssView action of the NodeRssController just needs to render the node and return it as part of an RSS document. Using this we can now go to a node page at "/node/123/rss" and see an RSS version of the page.

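As a rough sketch, the rssView() action could look something like this, with the RSS boilerplate heavily simplified; the method body is an assumption, and the released module mentioned below contains the real implementation.

  public function rssView(NodeInterface $node): Response {
    // Render the node so its content can be embedded in the feed.
    $build = $this->entityTypeManager()->getViewBuilder('node')->view($node, 'full');
    $content = (string) \Drupal::service('renderer')->renderRoot($build);

    // Wrap the rendered content in a minimal RSS document.
    $rss = '<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"><channel>'
      . '<title>' . htmlspecialchars($node->label()) . '</title>'
      . '<item><description>' . htmlspecialchars($content) . '</description></item>'
      . '</channel></rss>';

    $response = new Response($rss);
    $response->headers->set('Content-Type', 'application/rss+xml; charset=utf-8');
    return $response;
  }
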
I won't go into detail about producing the RSS version of the page here as it contains a lot of boilerplate code that goes beyond the scope of this article.

So far we only have half the functionality required. Seeing an RSS version of the page via the node ID is fine, but what we really want is to visit the full path of the page with "/rss" appended to the end.

The next step is to set up our path processor so that we can change the paths on the fly. In addition to the tags, we also pass two other services into the class: the path_alias.manager service for translating paths, and the language_manager service to ensure that we get the path in the correct language.

services:
  node_rss.path_processor:
    class: Drupal\node_rss\PathProcessor\NodeRssPathProcessor
    arguments:
      - '@path_alias.manager'
      - '@language_manager'
    tags:
      - { name: path_processor_inbound, priority: 20 }
      - { name: path_processor_outbound, priority: 220 }

The processInbound() method looks for the "/rss" string at the end of the passed path. If this is found then we remove it from the path and try to find the internal path of the page in the site. If we find one then it will be returned as "/node/123" instead of the full path alias, which means we can just append "/rss" to the end to point the path at our NodeRssController::rssView action.

  public function processInbound($path, Request $request): string {
    if (preg_match('/\/rss$/', $path) === 0) {
      // String is not an RSS feed string.
      return $path;
    }

    $nonRssPath = str_replace('/rss', '', $path);
    $internalPath = $this->pathAliasManager->getPathByAlias($nonRssPath, $this->languageManager->getCurrentLanguage()->getId());

    if ($internalPath === $nonRssPath && preg_match('/^\/node\//', $internalPath) === 0) {
      // No matching alias was found and this isn't already a node path.
      return $path;
    }

    return $internalPath . '/rss';
  }

The opposite process happens in the processOutbound() method. In this case we look for a path like "/node/123/rss" and convert it back into the full path alias of the page. If we find an alias for that path then we append "/rss" to the alias and return it.

  public function processOutbound($path, &$options = [], Request $request = NULL, BubbleableMetadata $bubbleable_metadata = NULL): string {
    if (preg_match('/^\/node\/.*?\/rss$/', $path) === 0) {
      // String is not an RSS feed string.
      return $path;
    }

    $nonRssPath = str_replace('/rss', '', $path);
    $alias = $this->pathAliasManager->getAliasByPath($nonRssPath, $this->languageManager->getCurrentLanguage()->getId());

    if ($nonRssPath === $alias) {
      // An internal alias was not found.
      return $path;
    }

    return $alias . '/rss';
  }

We now have an RSS feed for any content path on the website (as long as it is a node page of some kind).

If we attempt to visit the RSS output of any other kind of page (like a taxonomy term) then we receive a 404 error. This is thanks to the route we have in place, as the node parameter will only accept node paths.

As we have translated the path completely in both directions, we do not need the Redirect module overrides here; there is a coherent input/output mechanism for these paths. It's only when there is an imbalance in the paths that we need to override the Redirect module to prevent redirects.

If you are looking for the full source code for the above module then you are in luck, as I have recently released the Node RSS module on Drupal.org. It only has a dev release for the time being as I would like to add the ability to pick which content types are available for the feeds. I'm also testing it with different setups to make sure that the feed works in different situations. Let me know if it is useful for you, and please create a ticket if you have any issues.

If you want to see another module that makes use of this technique then there is the Dynamic Path Rewrites module. This allows the rewriting of any content path on the fly and is an alternative to using modules like Pathauto, without actually creating path aliases within your system. It also uses a nice caching system to speed up the responses.

Conclusion

The path processing system in Drupal is really quite powerful and can be used to build some interesting features that rewrite paths on the fly. We can take any incoming request and map it to any path we like.

Without this system in place we would need to generate additional aliases for every path we wanted and add them to the database before we could use the system. That is fine on smaller sites, but I manage sites with millions of nodes and that amount of data would bloat the database and probably not be used all that much.

Path processing does have some interactions with other modules (like Redirect) but these problems are easily overcome. Perhaps the most complex part is making sure that the priorities of your processors are correct, as getting them wrong will likely lead to unwanted behaviour.

Nov 12 2023

There are a number of different tools that allow you to validate and test a Drupal site. Inspecting your custom code allows you to adhere to coding standards and ensure that you stamp out common coding problems. Adding tests allows you to make certain that the functionality of your Drupal site works correctly.

If you have tests in your Drupal project then you ideally need to be running them at some point in your development workflow. Getting GitHub to run the tests when you push code or create a pull request gives you peace of mind that your test suite is being run as part of that workflow. You also want to allow your tests to be run locally with ease, without having to remember lots of command line arguments.

In this article I will show how to set up validation and tests against a Drupal site and how to get GitHub to run these steps when you create a pull request. This assumes you have a Drupal 10 project that is controlled via composer.

Let's start with creating a runner using Makefile.

Makefile

A Makefile is an automation tool that allows developers to create a dependency structure of tasks that is then run using the "make" command. This file format was originally developed to assist with compiling complex projects, but it can easily be used to perform any automation script you need.

For example, let's say that we want to run a command that has a number of different parameters. This might be a curl command or even an rsync command, where the order of the parameters is absolutely critical. To do this you would create a file called "Makefile" and add the following.

sync-files:
	rsync -avzh source/directory destination/directory

To run this you just need to type "make" followed by the name of the command.

make sync-files

You now have a repeatable task that will run a set bash script exactly the same way every time.

This is preferable to creating single shell scripts that run each action, as with a Makefile you can create dependencies for each of your tasks. So, in the above example we could say that before we run the rsync command we need to create the destination directory. All we have to do is create another task that performs this action and set it as a prerequisite of the sync-files task.

create-destination-directory:
	mkdir -p destination/directory

sync-files: create-destination-directory
	rsync -avzh source/directory destination/directory

One thing I use quite often is the "@" symbol at the start of commands. This tells make to run the command, but not to print the command being run to the command line. This cleans up the output a little, but it is down to personal preference really. Here's the same rsync command with this option added.

sync-files:
	@rsync -avzh source/directory destination/directory

There's a lot more to Makefiles than I can cover here, but this is essentially the basic setup. Whilst it is a little tricky to get used to the syntax of a Makefile, they can be useful for quickly running tasks that would otherwise mean looking up parameters or copying from a text file of useful commands.

If you want to know more about make then I can recommend reading https://makefiletutorial.com/ as this will take you through all of the syntax of a Makefile in simple to understand examples.

The idea behind using Makefiles here is to simplify the process of running commands on GitHub, but also to make it easier for developers to run the same commands locally. Makefiles make it easy to group everything under a single command using the prerequisites feature. Doing this will allow you to install Drupal and run the entire testing stack using just a single command.

Alternatively, you can use composer scripts or some other automated script to perform the tasks, although composer scripts don't support dependencies so you might need to create a series of bash scripts to perform the actions. It's also possible to use something like Robo to run tasks for you, and I have experimented with this in the past. Ultimately, you need some way of installing your PHP dependencies before you can run these tools, which means you need a Makefile or script somewhere in your workflow.

Whatever technology you select, the key to simplifying your GitHub workflows is weighting the commands towards the Makefile side, which means your GitHub actions can be nice and concise.

DDEV

In order to simplify the tasks being run (and the environment they are run on) I tend to use DDEV. Using this platform allows for a consistent and repeatable environment that you can easily configure to have different setups. The rest of the examples in this article will feature the "ddev" command (where appropriate) that will execute the command within the docker environment created by DDEV.

Using a docker environment also means that all of the paths for the system will be the same on every machine that runs the environment, which helps to simplify the setup process.

DDEV is useful when you want to perform updates and need to ensure that they function correctly. For example, if you want to see if your site will function on a new version of PHP then you just need to make that change in the configuration and create a pull request. The GitHub actions will find the new configuration and perform your tests with the new version in mind.

Install Drupal

The first task to perform with any setup is to install Drupal, first by installing the composer packages and any node packages we may require. When we start a DDEV environment it will automatically copy the Drupal settings.php file into the correct place, so we don't need to worry about that here.

Once the Drupal codebase is in place you can then install Drupal and compile any theme assets required. The following make command will install composer and node packages and then hand off the Drupal install and theme compile tasks to secondary make commands.

setup-drupal:  ## Install dependencies, install Drupal, and compile the theme.
	@ddev composer install --prefer-dist --no-progress
	@ddev exec --dir=/var/www/html/docroot/themes/custom/my_custom_theme npm install
	@ddev exec npm install
	$(MAKE) site-install
	$(MAKE) themebuild

The site install command will install Drupal using Drush. I have found from experience that dropping and re-installing the database entirely helps ensure that the environment is clean. For example, when running migration tests you might find that if you don't drop all tables first then some of the migration tables will still be present after you re-install the site. We also perform a cache clear and an additional configuration import to make sure that the site is up to date.

site-install: ## Install the Drupal site.
	@ddev drush sql-drop --yes
	@ddev drush si standard --existing-config --yes --account-name=admin --account-pass=admin
	@ddev drush cr
	@ddev drush cim -y

This does assume that you are using the standard install profile to install your site (not always the case) and that you have some configuration to import. If you are using multi-site setups then you'll need to change this to install one or more variants of the site for testing.

Once that task is complete the Drupal site will be running.

It's at this point that you might want to think about using Default Content Deploy to inject some testing content into your site. This isn't a requirement unless you are going to perform any behavioural or regression testing on the site. Having content present for those types of test is essential, and Default Content Deploy is the best way that I have found to do this.

The final step here is to build the theme assets, which will depend entirely on what package you use to manage your theme. I use grunt on a couple of projects, so here is an example of using grunt to compile the theme assets.

themebuild: ## Build the theme.
	@ddev exec --dir=/var/www/html/docroot/themes/custom/my_custom_theme npx grunt

I should note that there are no extra installation steps to be performed before we can run npm or npx, as these packages come pre-installed with DDEV.

Validation

Before we start testing the code we need to make sure that it is valid. I normally separate out the validation and the testing workflows as there is no point in wasting time on running a full test suite if some of the code in your codebase is invalid.

There are a number of things we can do to ensure that a Drupal codebase is valid, starting with validating the composer files.

Composer Validate

The simplest validation task we can run is to validate the main composer.json and composer.lock files, which is achieved with the command "composer validate".

composer-validate: ## Validate Drupal composer.json and composer.lock.
	@ddev composer validate

Having invalid composer files can often mean that something went wrong during the composer workflow, and can cause problems later down the line when you attempt to update composer packages again.

PHP Code Sniffer

PHP Code Sniffer allows you to check your custom code against the Drupal coding standards. There are lots of reasons to use coding standards in your project, not least ensuring that common bugs and security issues are corrected before they reach your production environment. PHP Code Sniffer will also check your Drupal YAML configuration files to ensure that no common issues are found.

To install PHP Code Sniffer on a Drupal codebase you can follow along with my article detailing how to install and run the tool.

Once installed you can run the phpcs command to inspect your Drupal codebase. As this requires a fair number of arguments, we create a make command to do this for us.

phpcs: ## Run phpcs analysis.
	@ddev exec vendor/bin/phpcs --standard=Drupal,DrupalPractice --exclude=SlevomatCodingStandard.Namespaces.AlphabeticallySortedUses --extensions=php,module,inc,install,test,profile,theme,info,txt,yml --ignore=node_modules,bower_components,vendor web/modules/custom web/themes/custom web/profiles

Remember that we are only interested in the PHP code we have written ourselves, which means we specifically point the phpcs command at our custom codebase. There's no point in inspecting the entire Drupal core and contributed codebase as this will have already been checked by the tools available on drupal.org.

PHP Code Sniffer also comes with the PHP Code Beautifier and Fixer tool, which can be run with the phpcbf command.

phpcbf: ## Run phpcbf.
	@ddev exec vendor/bin/phpcbf --standard=Drupal,DrupalPractice --extensions=php,module,inc,install,test,profile,theme,info,txt,yml web/modules/custom web/themes/custom web/profiles

The phpcbf tool can be used to fix a lot of coding standards errors quickly, so it's useful to add to your make file so that you can easily run it.

Note that all of the paths in the above command must exist in order for the tool to run correctly. You can remove "web/profiles" if you are not making use of install profiles on your site.

PHPStan

PHPStan is a tool that will statically analyse PHP code to look for common problems that might cause bugs. It needs a couple of helper packages to install the tool, but I have written all about how to install and use PHPStan in a Drupal codebase. You also need to create a phpstan.neon configuration file, which is automatically picked up by the tool when run.

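For reference, a minimal phpstan.neon for this kind of setup might look something like the following sketch; the level and paths are assumptions that you should adapt to your own project.

parameters:
  level: 2
  paths:
    - web/modules/custom
    - web/themes/custom
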
Once installed and configured, the tool can be run through make.

phpstan: ## Run PHPStan analysis.
	@ddev exec vendor/bin/phpstan

To make the best use of PHPStan you need to set it to the right level; this is all handled in the phpstan.neon file, so the make command just needs to run the tool. My advice is to start at level 0 and solve everything that is uncovered by the tool. Then agree with the rest of your team what level you want to reach so that everyone is on the same page.

Eslint

Eslint is a JavaScript static analysis tool that analyses your JavaScript for best practice and potential bugs. It can also be used to validate the syntax of your YAML files, which can catch issues that the PHP Code Sniffer inspection can miss.

Drupal comes with everything you need to get up and running with Eslint, and the setup-drupal command from the start of this article installed the tool as part of the "npm install" command.

You need to create an .eslintrc.json file in the root of your project (if this isn't already present) to configure the tool. The "rules" area of this file allows you to turn off certain inspection criteria, which is useful if you want to use things like "++" in your custom code.

Here is an .eslintrc.json file that I often use in projects. The auto-detection of the React version is also added to this file to correct a small warning that appears when the tool is run.

{
  "extends": "./web/core/.eslintrc.json",
  "rules": {
    "no-plusplus": "off"
  },
  "settings": {
    "react": {
      "version": "detect"
    }
  }
}

It's also a good idea to have an ignore file (.eslintignore) that you can use to skip over anything that you don't want to lint. This is useful if you have any vendor directories in your codebase that contain third-party packages.

docroot/modules/custom/my_custom_theme/js/vendor/

Once you have that in place you can run the eslint tool, passing in the configuration file with the -c flag and the ignore file with the --ignore-path flag.

eslint: ## Run eslint.
	@ddev exec npx eslint -c .eslintrc.json --ignore-path .eslintignore web/modules/custom
	@ddev exec npx eslint -c .eslintrc.json --ignore-path .eslintignore web/themes/custom

To assist your team locally you can add an "eslint-fix" task that will attempt to fix any coding standards issues that the tool finds.

eslint-fix: ## Run eslint with the --fix flag.
	@ddev exec npx eslint -c .eslintrc.json web/modules/custom --fix
	@ddev exec npx eslint -c .eslintrc.json web/themes/custom --fix

Running eslint-fix can often solve the majority of the issues detected, which means you can concentrate on fixing the issues that matter.

Again, note that the directories here must exist before they can be scanned. You can comment them out with a "#" at the start of the line.

Testing

Ideally, you should have a number of tests in your Drupal site. These can be split into unit tests and behavioural tests, but it is essential that we can run them on any platform required.

PHPUnit

Drupal's internal testing system is powered by PHPUnit and can be used to test individual functions, service classes, or user interaction.

To get PHPUnit running on your Drupal site you need to copy the phpunit.xml.dist file from the core directory of your Drupal install into the root of your project. There are a few settings in the file that need changing so that they point at the correct places, but once done you can commit this file to your project.

The tests themselves are easily run inside the DDEV environment, but we first need to ensure that the correct output directories are in place (with the correct permissions) before we can run the tests. The following make command handles this.

phpunit: ## Run the Drupal phpunit tests for custom code.
	@ddev exec mkdir -p /var/www/html/private/browsertest_output
	@ddev exec chmod -R 777 /var/www/html/private/browsertest_output
	@ddev exec mkdir -p web/sites/simpletest/browser_output
	@ddev exec chmod -R 777 web/sites/simpletest
	@ddev exec ./vendor/bin/phpunit web/modules/custom/

This will run the unit tests across all of the custom modules in the project.
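
To show the shape of a test that this command would pick up, here is a minimal unit test sketch. The module name and the PriceCalculator class it exercises are hypothetical and only here for illustration.

namespace Drupal\Tests\mymodule\Unit;

use Drupal\mymodule\PriceCalculator;
use Drupal\Tests\UnitTestCase;

/**
 * Tests the hypothetical PriceCalculator service.
 *
 * @group mymodule
 */
class PriceCalculatorTest extends UnitTestCase {

  public function testAddVat(): void {
    $calculator = new PriceCalculator();
    // 20% VAT on 100 should give 120.
    $this->assertSame(120.0, $calculator->addVat(100.0, 0.2));
  }

}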

Cypress

Cypress is a behavioural testing system that acts like a user would on your Drupal site, logging in and interacting with it. These types of tests are tricky as they need to be run on your local environment. Well, that's not strictly true, as they can run in a headless browser on any environment, but I've often found that the best results come from running them locally.

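To give a flavour of what these tests look like, here is a minimal Cypress spec that simply checks the front page loads; the selector used is an assumption for illustration.

describe('Front page', () => {
  it('loads and shows a page title', () => {
    // cy.visit() resolves against the baseUrl in the Cypress configuration.
    cy.visit('/');
    cy.get('h1').should('be.visible');
  });
});
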
I often install Cypress next to the Drupal web root in a directory called "tests/cypress", so the following examples take that into account.

cypress: ## Run Cypress tests
	cd tests/cypress && npx cypress run

Cypress tests don't have access to the Drupal database, so there's also the question of managing the environment for the tests themselves. I've found that re-installing Drupal can lead to timeout errors on some environments, so I tend to opt for a re-import of the content using Default Content Deploy. I have written about using Cypress and Default Content Deploy in a previous article.

One command that I also include is a shortcut to the Cypress GUI, which is a powerful development tool that shows the tests being run in real time.

cypress-gui: ## Open the Cypress GUI
	cd tests/cypress && npx cypress open

Makefile Meta Steps

To speed things up you should create meta steps in your Makefile so that you can run lots of actions at once using the prerequisites feature. We have just set up a load of tasks for validating and testing the codebase and it doesn't make sense to run them one by one. Using the prerequisites feature means that we can create simple make tasks that run the tasks we have already created.

To this end we need to make two tasks, one for validation and another for tests. The "validate" make command is perhaps the busiest:

validate: composer-validate phpcs phpstan eslint ## Validate the project.

The tests can then be run in one go with a "test" make command.

test: phpunit cypress ## Run all of the tests.

With these tasks in hand we can now create our GitHub workflows.

GitHub Workflows

At this point you should be able to run either all or part of your install and test process using the Makefile. I won't go into too much detail about the GitHub workflow file itself as there is already some pretty good documentation available on the file format. Instead, I will go through the creation of a workflow file that will run all of the validation and tests for our Drupal site.

The workflow YAML file needs to live in the directory ".github/workflows/". I tend to call my file "test.yml" since this perfectly describes what it is doing.

The start of the file contains the name and details about when the workflows will be run. It is possible to get GitHub to run your workflow on a variety of different events on the platform.

The following shows the start of a typical GitHub workflow file that details the name and a number of actions. In this case we will run the workflow when a commit is pushed to a branch starting with the name "feature/", or when a pull request is created against the branches "main" or "stage".

name: Run tests

on:
  push:
    branches:
      - 'feature/**'
  pull_request:
    branches:
      - main
      - stage

Next is the section that details the jobs that must be run for this workflow. The "test" workflow detailed below will run on the latest version of Ubuntu and has a number of steps.

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
       # Steps go here...

Let's define a few steps.

First, we need to check out the codebase we want to test, which is done using the "actions/checkout" package. There are a lot more options available in this package, but we only need the default options for our purposes.

- name: Check out repository code.
  uses: actions/checkout@v4

As we are using DDEV we also need to include a step to set up DDEV on GitHub. This is done using the ddev/github-action-setup-ddev package. Again, there are lots of options available for this system, but as the DDEV environment will be started automatically we don't need to do anything else here.

- name: Include ddev runner.
  uses: ddev/github-action-setup-ddev@v1

With the DDEV environment ready we can now start installing the site, which is done using the "make setup-drupal" command we created at the start. Once this task has finished the site will be fully running within the DDEV environment on GitHub.

- name: Setup drupal for testing.
  run: make setup-drupal

Before running the tests we need to run our validation tasks using the "make validate" command.

- name: Run validate handler.
  run: make validate

Here is where our workflow differs slightly from the local environment. The PHPUnit tests and the Cypress tests need to be run as separate tasks due to the way in which the Cypress tests are run (more on that in a minute). To run the PHPUnit tests we just call our "make phpunit" command.

- name: Run test handler.
  run: make phpunit

The best way I have found of running Cypress tests on GitHub is by using the cypress-io/github-action package. This prepares everything we need for our Cypress tests to run, and we only need to include the "working-directory" directive as the Cypress tests aren't in the root of our project.

- name: Run cypress tests.
  uses: cypress-io/github-action@v6
  with:
    working-directory: tests/cypress

This task will automatically trigger our Cypress tests and will return the correct failure state if one of them fails.

That's all we need to add to our GitHub workflow file. Here it is in full.

name: Run tests

on:
  push:
    branches:
      - 'feature/**'
  pull_request:
    branches:
      - main
      - stage

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Check out repository code.
        uses: actions/checkout@v4

      - name: Include ddev runner.
        uses: ddev/github-action-setup-ddev@v1

      - name: Setup drupal for testing.
        run: make setup-drupal

      - name: Run validate handler.
        run: make validate

      - name: Run test handler.
        run: make phpunit

      - name: Run cypress tests.
        uses: cypress-io/github-action@v6
        with:
          working-directory: tests/cypress

The file we create here is deliberately short because we added the complexity to the Makefile rather than to this file. It also means that the configuration for your system is part of your codebase, rather than part of the workflows.

Now that everything is in place you should check the actions permissions of your project in GitHub to make sure that you can actually run the workflow. These are the main permissions you should be looking at (shown here within the "hashbangcode" GitHub profile).

The GitHub workflow dialog in the project settings of GitHub.

This "allow all actions" is quite an open permission, but it allows us to use actions from different repositories to checkout the code, run DDEV, and perform Cypress tests.

With all this in place you can now perform validation and testing checks on your Drupal codebase by either pushing to a "feature/x" branch or by creating a pull request against the main or stage branches.

Conclusion

With this technique in hand you can now push code to GitHub and automatically run validation and testing steps on your code. This provides a reliable safety net so that you can be sure that everything is compliant and works correctly with every change that is added to your system.

I wanted to provide as much detail as possible to allow anyone to create their own workflows and actions in a few minutes and get started with continuous integration on GitHub. Even if you have no tests in your Drupal project you can make a start with code validation and then start writing tests using the details posted here. Let me know if you get stuck with any part of this; I would appreciate the feedback.

The addition of workflows also integrates nicely with the GitHub interface. All workflows that pass will receive a nice little green tick, showing that they cleared all of the validations and checks in your workflow.

It is possible to take the GitHub workflow file a lot further than I have shown here, but I've found that adding complexity to this file often makes the workflow harder to debug. If you are able to go from a blank slate to a fully validated and tested environment locally using one or two make commands then there's a good chance that the same will apply on GitHub.

The GitHub workflow can be taken in other directions as well. For example, you can also create a workflow that will trigger a deployment of your code to the platform of your choice. Again, I would suggest passing the actual build process off to another application like Ansible or Deployer, rather than add that complexity to the GitHub workflow file.

Deliberately adding the complexity of the project setup and validation/testing steps to the Makefile also allows us to port this technique to other systems with relative ease. For example, if we wanted to use GitLab then we could create a ".gitlab-ci.yml" file and add the needed make commands to that file in order to trigger the same actions on that platform. You would need to account for the presence of DDEV on that environment, but there are ways around using the DDEV wrapper and opting for pure docker commands if required.

Oct 29 2023

This year's DrupalCon Europe was hosted between the 17th and 20th of October in the French city of Lille. My DrupalCon adventure began early on Monday morning when Chris Maiden picked me up to drive to France via the Eurostar train. We arrived in Lille a little after 4pm, which was really good going for a nearly 400 mile trip.

The DrupalCon Lille Logo

DrupalCon Lille was a first for me as I was there representing a company, and so I spent some time on the conference floor talking about services. Code Enigma, who I work for full time, had sponsored the event and organised a booth (well, a table). The booth wasn't there so that we could sell anything; it was more because we wanted to support Drupal and the sponsorship came with a booth. Driving to Lille allowed us to fill the car with 200 coffee cups, which we gave out at the event.

Once we had found a parking spot we wandered through Lille, found our hotel, and met up with the rest of the Code Enigma group for a dinner of "Le Welsh", a local favourite dish.

Tuesday

The first session I managed to attend on Tuesday was the Driesnote, where Drupal founder Dries Buytaert talked us through some of the headline initiatives being worked on in Drupal at the moment and why they are important. The presentation started with a highly produced story, with some fantastic artwork depicting the "Drupal fairy" and the village of Drupal, as well as some surrounding villages like Rectopia and Contentville. This was an interesting way of showing, through analogy, how Drupal is different from other major players in the content management sphere.

The talk soon went on to show some of the main initiatives currently being worked on in Drupal, including the project browser, the new field UI, and the new administration menu. These initiatives are not only there to aid people used to working with Drupal, but also to help newcomers to the platform find their way around. This is especially the case with the project browser, since it is difficult to know which are the best modules available for your needs.

Dries's final message was about advancing the marketing surrounding Drupal and the creation of a Drupal marketing committee. He invited some representatives of the committee onto the stage for a quick question and answer session. In my opinion this is a sorely needed aspect of Drupal and something the community will really benefit from.

It can't be escaped that there's a bit of negative press surrounding Drupal, which I think mainly stems from the complexity of the platform in previous years. The marketing committee will help the Drupal community by generating assets that can be used to argue for the use of Drupal in projects, assets which I'm sure many companies are currently creating independently.

If you are interested in watching the whole Driesnote, Dries recently uploaded a video of his DrupalCon Lille keynote to YouTube.

The Driesnote from DrupalCon Lille 2023

Due to transporting the cups and performing various booth related activities I ended up missing the Women in Drupal Awards, which were held just before the Driesnote. Congratulations to the well deserved winners Tiffany Farriss, Marine Gandy, and Lenny Moskalyk. I will make sure to catch up with the session video later (if possible).

After the Driesnote came the first proper session: Oliver Davies with TDD - Test-Driven Drupal: an introduction to automated testing and test-driven development in Drupal. I have seen variations of Oliver's TDD talk over the years and I always seem to learn something new from it, and this time was no exception. I've been writing unit tests in Drupal for years, but Oliver gave me a new technique that I will certainly be employing going forwards.

Next up was Designing for Privacy: Balancing User Needs and Data Security with Tarkesh Deva. This was a look at some techniques and decisions to consider when creating privacy focused sites. Tarkesh then looked at https://iq.laaha.org/, a UNICEF site created by women for women. Due to the nature of the information being shared, the site makes privacy its central tenet. This includes doing things like creating random usernames and not requiring any personal information to make an account, so that women don't need to share data to register.

Sven Van Uytfanghe with Building Engaging Communities with Gamification, Commerce, and Integrations: A Success Story was next. This was a case study of a (not yet launched) Drupal site where users could earn "coins" by commenting, posting, liking, or otherwise interacting with the site. These coins could then be used to get discounts on products in the shop. An interesting case study that got me thinking about how to create this sort of system in Drupal.

The final talk on Tuesday was Translation management strategy for editors in Drupal 10 with Jeremy Chinquist. Jeremy focused on some strategies that can be used to aid the translation of a Drupal 10 site, including asymmetric and symmetric translation. Perhaps most important was the tactic of hiding anything that couldn't be translated, so that users wouldn't get confused trying to translate something that wasn't needed. Really important lessons that were clearly gained from experience.

A picture of the TDD talk from DrupalCon Lille, showing a full audience.

Wednesday

Wednesday's first session for me was a Technical Writing Workshop with AmyJune Hineline. This was a 2 hour workshop that went over how to write articles, what to write about, who to submit articles to and even a quick look at markdown. The 2 hours went very quickly and I came away with lots of ideas and thoughts about the future of this site. I think I wrote continuously for the entire workshop and need some time to go over my notes and make sense of everything. It was also great to meet AmyJune in person; she has encouraged me and supported the site in the past, so I was chuffed I got the chance to thank her.

After this I attended Akhil Bhandari's talk called Design in Figma and deploy a pixel perfect Drupal website in days, not weeks! The talk looked at the tools that he and his team had put together to simplify the process of taking a Figma design and converting it to Drupal templates, using the Civic theme. A lot of work had clearly gone into this process, and it showed in how they were able to take work from conception to implementation.

Thursday

Thursday started with Single Directory Components in Core with Mike Herchel. It's always good to see Mike talk, and I'm glad I got the opportunity at Lille. Single directory components are intended to abstract certain parts of the site theme into separate directories, in order to make it easier to find everything connected to a component in one place. They are currently in Drupal 10.1.0 as an experimental module and should be stable in 10.3.0, although Mike doesn't think the APIs will change much (or at all) before then. Unfortunately, Mike's presentation was interrupted twice by his computer hard locking when attempting to play a video. This was heartbreaking to watch, but Mike rallied and handled it really well. I don't think he managed to say everything he wanted to say, but I'll be looking into single directory components soon.

Next up was Hosted login: The future of the login with Raul Jimenez Morales. This talk looked at an unnamed IDaaS (IDentity as a Service) provider and how it could benefit a Drupal site. The essential premise was that the Drupal site (or sites) would only need to know the essential information to perform its duties, and that if it was hacked then the identity and security of the user wouldn't be affected (at least not much). An interesting talk, although somewhat light on the details.

After lunch I attended the Drupal Core Initiative Leads update, hosted by Gábor Hojtsy. There's some amazing work being done in Drupal at the moment and this was just a quick showcase of some of the initiatives being worked on. There was config validation, the update of localize.drupal.org (which started just last year and is going well), the UI improvements, the administration toolbar, the GitLab improvements, the project browser work, as well as the Promote Drupal initiative. Sascha Eggenberger took a closer look at some of the improvements to the Drupal administration UI that were hinted at in the Driesnote, which look really nice. The changes they are proposing look like obviously good ideas, which is the hallmark of a good design change.

The final session of the conference for me was Björn Brala talking about How JSON:API 1.1 will make Drupal better. The JSON:API specification exists outside of Drupal, but Drupal will soon take advantage of the new version of the specification to leverage some useful features like extensions, error objects, and link objects that follow RFC 8288. Ultimately, these improvements will lead to better JSON:API clients for Drupal and allow some neat things, like exploring the API through links, to be built into the service layer.

Thursday night was the Drupal trivia night. I formed a team with the rest of the Code Enigma people and although we didn't come last (we think) we also didn't come anywhere near first. Really good questions with an awesome host who made the evening flow really quickly. Well done to the winners, I really don't know how you scored that many points!

Conclusion

Overall, DrupalCon Lille was a great conference. This year I didn't go to as many talks and sessions as I would normally go to and instead spent quite a lot of time chatting with people in the main conference hallway. It was good seeing old friends and making some new ones. I have lots of notes from the sessions I attended and will be sure to look into incorporating some aspects of them into my day-to-day tasks in the near future.

The food at the venue wasn't particularly amazing, but it was warm and nutritious so I was happy to eat it. The food in Lille itself was really nice. As I mentioned earlier, we went out for a local dish called "Le Welsh", which is more or less a take on Welsh Rarebit. In fact, looking into the dish after the conference I found that it was imported to the region in 1544 by a Welshman, so it clearly has some Welsh influence and is a popular dish nearly 500 years later.

It was nice to hand out cups and umbrellas and see people making good use of them at the event. The rain came to Lille in earnest on the Wednesday, so the umbrellas suddenly became really popular! That was the main reason we had bought them, and as long as people make good use of them Code Enigma are happy to have supplied them.

See you next year at DrupalCon Europe Barcelona!

A collage of images from my time at DrupalCon Lille 2023
Sep 03 2023
Sep 03

Drupal's modular system allows for all kinds of additions to be added to your site through plugins, entities and configuration.

Thanks to Drupal's modular configuration system you can also inject custom configuration into existing configuration entities. This allows you to extend the functionality of an existing entity and ensure that any customisations are saved in the system configuration. Any configuration entity can be augmented without changing its existing configuration schema, which might otherwise cause knock-on effects in other modules.

If you have spent time with Drupal configuration then you might have seen the addition of a third_party_settings block to your configuration. A good example of this is with the core Shortcut module and the third party settings it adds to the installed themes when the site is installed using the standard install profile. The Claro theme, for example, will have the following configuration out of the box.

third_party_settings:
  shortcut:
    module_link: true

This allows the Shortcut module to detect if it should show a link on the page when this theme is present on the page. The configuration for this setting then doesn't have to live in a separate configuration entity (which would be exported into a separate file); it can just live with the configuration for the theme and be loaded as part of the theme configuration.
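
As a hedged sketch (assuming the setting shown above lives in the claro.settings configuration object, which is where theme settings are stored), a module could read this value through the config API like this:

// A sketch only: read the Shortcut module's third party setting from the
// Claro theme's settings configuration object.
$moduleLink = \Drupal::config('claro.settings')
  ->get('third_party_settings.shortcut.module_link');

if ($moduleLink) {
  // The theme has opted in to showing the shortcut link.
}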

In this article I will look at how to use additional settings to add custom configuration items to an existing configuration entity. I'll also cover a couple of use cases for this technique.

Creating A Configuration Schema File

The first step is to create a configuration schema file, which is perhaps the most difficult part of the process. This is essentially a "mini" configuration schema that lives under a "third_party" namespace, extending the existing configuration namespace that you want to augment.

The schema name should take the form of the following formula, where the parts in angle brackets are placeholders:

<existing schema name>.third_party.<module name>:
  type: config_entity
  label: "Third party settings"
  mapping:
    <your custom settings>

For the purposes of this example I will add a third party setting to a content type (i.e. a "node type" configuration entity), specifically the "Article" content type. As this content type is available in the standard install profile in Drupal it should be present on most sites. The node type configuration entity has a schema definition of "node.type.*", which is where we start our third party settings from.

The following schema definition is added to a custom module called third_party_settings_example, in the directory config/schema. The file is called third_party_settings_example.schema.yml and is where we store any schema settings for the module.

node.type.*.third_party.third_party_settings_example:
  type: config_entity
  label: "Example settings"
  mapping:
    text_field:
      type: text
      label: "Text Field"

This adds the third party settings for a single text field to the content configuration entity.

With that in place we can now start using the settings to augment content type configuration entities.

Injecting Settings

Since we are adding schema settings to the node type entity we need to alter the node_type form using a hook_form_FORM_ID_alter() hook. This is the form we see when we visit the path "/admin/structure/types/manage/article" and is used to configure things like preview settings, publishing options, display settings, and menu options.

Here is the code for this hook, which would live in the module's .module file.

/**
 * Implements hook_form_FORM_ID_alter().
 */
function third_party_settings_example_form_node_type_edit_form_alter(&$form, \Drupal\Core\Form\FormStateInterface $form_state, $form_id) {
  $entity = $form_state->getFormObject()->getEntity();
  if ($entity && $entity->id() === 'article') {
    $exampleTextField = $entity->getThirdPartySetting('third_party_settings_example', 'text_field');

    $form['example_settings'] = [
      '#type' => 'fieldset',
      '#tree' => FALSE,
      '#title' => t('Third Party Example Settings'),
      '#description' => t('For example purposes.'),
    ];

    $form['example_settings']['text_field'] = [
      '#type' => 'textfield',
      '#title' => t('Example Text Field'),
      '#default_value' => $exampleTextField,
    ];

    $form['#entity_builders'][] = 'third_party_settings_example_node_form_builder';
  }
}

What we do here is:

  • Ensure that we are editing the "article" content type.
  • Fetch the existing third party setting for "text_field" for our module.
  • Add additional fields to the content entity form to allow this setting to be changed.

The final step here is to inject a custom callback called "third_party_settings_example_node_form_builder" into the '#entity_builders' form setting, which will be triggered when the form is saved. This callback is used to save the added third party settings to the configuration, or clear them if the setting is empty.

/**
 * Entity builder for the node:article entity.
 */
function third_party_settings_example_node_form_builder($entity_type, $entity, &$form, \Drupal\Core\Form\FormStateInterface $form_state) {
  if ($form_state->getValue('text_field')) {
    $textFieldValue = $form_state->getValue('text_field');
    $entity->setThirdPartySetting('third_party_settings_example', 'text_field', $textFieldValue);
    return;
  }
  $entity->unsetThirdPartySetting('third_party_settings_example', 'text_field');
}

With this in place we can edit the Article content type configuration page and inject some text into the configuration entity.

There are actually a few methods involved with getting and manipulating third party settings, all of which are defined in the Drupal\Core\Config\Entity\ThirdPartySettingsInterface interface. Since the core Drupal\Core\Config\Entity\ConfigEntityInterface extends this interface, and that interface is used extensively in Drupal, many different configuration entities have access to these methods; a short usage sketch follows the list.

  • setThirdPartySetting($module, $key, $value) - Sets the value of a third-party setting.
  • getThirdPartySetting($module, $key, $default = NULL) - Gets the value of a third-party setting.
  • getThirdPartySettings($module) - Gets all third-party settings of a given module.
  • unsetThirdPartySetting($module, $key) - Unsets a third-party setting.
  • getThirdPartyProviders() - Gets the list of third parties that store information.
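
As a quick sketch of some of these methods in action, using the Article node type from this example (the saved value here is purely illustrative):

// Load the Article node type configuration entity.
$nodeType = \Drupal::entityTypeManager()
  ->getStorage('node_type')
  ->load('article');

// Set and persist a third party setting for our module.
$nodeType->setThirdPartySetting('third_party_settings_example', 'text_field', 'Some example text.');
$nodeType->save();

// Read the setting back, supplying a default for when it has never been set.
$value = $nodeType->getThirdPartySetting('third_party_settings_example', 'text_field', '');

// See which modules currently store settings on this entity.
$providers = $nodeType->getThirdPartyProviders();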

This is all that is needed to get the system working, so let's look at what happens when we export the configuration with this setting in place.

Exporting Third Party Configuration

When we export our augmented configuration Drupal will detect the presence of our third party settings and include them in the resulting configuration file.

Here is the new configuration file for the Article content type (stored in the file node.type.article.yml).

uuid: ebc525b7-eeb3-4a86-9113-4a9019744a64
langcode: en
status: true
dependencies:
  module:
    - menu_ui
    - third_party_settings_example
third_party_settings:
  menu_ui:
    available_menus:
      - main
    parent: 'main:'
  third_party_settings_example:
    text_field: 'Some example text.'
_core:
  default_config_hash: AeW1SEDgb1OTQACAWGhzvMknMYAJlcZu0jljfeU3oso
name: Article
type: article
description: 'Use articles for time-sensitive content like news, press releases or blog posts.'
help: ''
new_revision: true
preview_mode: 1
display_submitted: true

As you can see in this example, the Menu UI module has already injected some configuration into the third_party_settings section. Our module updates this section by adding its own configuration.

Usage

We have now added custom configuration to our Article content entity, but how can we make use of this?

Since this change lives on the configuration entity, we need to reach it through the content entity in order to make use of it. This is possible because the content entity stores information about the configuration entity that defined it.

As an example, the following implementation of hook_preprocess_HOOK() will inject some content into the rendered Article. We extract our module's third party setting from the configuration entity connected to the item of content and inject it into the theme variables.

/**
 * Implements hook_preprocess_HOOK().
 */
function third_party_settings_example_preprocess_node(&$variables) {
  /** @var \Drupal\node\NodeInterface $node */
  $node = $variables['elements']['#node'];
  if ($node->bundle() == 'article') {
    $additionalSetting = $node->type->entity->getThirdPartySetting('third_party_settings_example', 'text_field');

    $variables['content']['additional_title'] = [
      '#markup' => $additionalSetting,
    ];
  }
}

With this in place we will see the text we added to the Article configuration entity appear on all Articles. This is a somewhat contrived example, but it shows the technique in action. The setting for the text_field was only added in one place, but we have changed how all Articles are displayed across the site.

Note that the decision to check for the "Article" content type here is arbitrary and is just used to flesh out the example. We could build this module in the same way without checking for the Article content type, at which point all content types would get this setting and gain the ability to have text injected into them like this.

Modules That Use Additional Settings

Here are a couple of modules that make use of third party settings to store configuration within other configuration entities. If you want to see this technique in action then these modules are a good place to start.

Scheduler

The Scheduler (scheduler) module is a commonly used module that allows nodes to be published at certain times. Scheduler stores some configuration for each content type in third party settings, but also allows media items, commerce products and taxonomy terms to be configured in the same way.

MaxLength

The MaxLength (maxlength) module allows character limits to be set on text fields, along with a character count. The settings for this module are injected into the field configuration for entities and so can be added to any text field on the site.

Allowed Formats

Allowed Formats (allowed_formats) is a module that gives sites the ability to hide the often confusing information about text input formats that appears below text fields. This is another field-level configuration item that can be added to any text field on the site. It's worth looking into field-level third party settings as they have a complex schema setup.

Conclusion

The third party settings technique discussed here has lots of applications, especially if you want to export your settings along with the configuration of your site. Small amounts of configuration can be easily injected into any configuration entity, will live with your site, and can be deployed just like any other configuration item.

Third party settings aren't available to all configuration entities, but if the class implements the interface Drupal\Core\Config\Entity\ThirdPartySettingsInterface then it should have this ability.

Whilst this technique is useful, you don't have to use the third party settings system every time. In fact, if your custom configuration alters lots of different types of entities, or contains lots of items, then you should probably look at storing it in separate configuration files. Whilst third party settings live in a separate namespace within a configuration entity, you should still respect the parent entity and try not to load it with lots of data if that can be helped.

Also remember that third party settings live with your configuration entities and not your content entities. This means that any settings you add will affect all instances of that content entity type. If you want to add custom configuration to individual entities (e.g. certain articles, or taxonomy terms) then you should probably use the field API to do this.

I made use of this technique on a recent project by adding additional settings to a Message Template configuration entity (from the Message module). I needed a way to add some additional data to some Message entities without adding separate fields for each type of Message Template, and third party settings allowed for this. If the additional data was present on a Message Template configuration entity then we performed additional actions as we created the Message; otherwise the Message was created without performing any extra steps. The additional configuration could then be easily exported and deployed using the Drupal configuration workflows.

All of the code I have used in this article is available in an example third party settings Drupal module on GitHub. Feel free to use this module to create your own third party settings.

There is also a documentation page on Drupal.org that looks at this technique in relation to adding settings to a custom entity that might be worth a read if you want more context about this technique.

Aug 20 2023
Aug 20

The Group module in Drupal is a powerful way of collecting together users and content under a single entity. These arbitrary collections of entities can be used for editor teams within your site, for company subscriptions where users manage themselves, or for anything else that requires groups of users to be created.

During a recent project that used the Group module I found myself digging deeper into the Group permissions system in order to achieve certain tasks. This is a similar permission system to the one built into Drupal, with the exception that a permission always forms a link between a Group and a user, and optionally an additional entity. Permissions being local to the group is useful if you want to create groups of users that have access to pages and other entities kept within the group.

Group permissions are by no means simple though, and the different layers that exist within the permissions system can make it difficult to see what is preventing access to a particular entity. This situation is complicated by the fact that much of the documentation and many third party modules are built around Group version 2, while the current release of Group is version 3. For example, there is a documentation page on extending Groups access control, but as it only covers Groups version 2.0 it doesn't help with the latest version.

In this article I will look at how to create and use permissions within the Group module to grant users certain permissions if they are members of groups. Each example will get more complex as we go through the article and I will show how to use the permission within the site to control access.

First, it's useful if we take a quick look at how the permission levels in Groups work.

How Group Permission Levels Work

The Group permissions model works with the core Drupal permissions system, but augments it so that each Group type can have a different permissions model. This means that whilst a user might have a Drupal user role, what they are able to do with a Group (or items within the Group) depends on the permissions the user has within the Group and whether they are a member or not.

The following is a hierarchy of the different levels of permissions that exist for a user and a group.

  • Drupal role - Your normal Drupal user role gives you permissions to do certain things with the Groups system. Creating new groups, for example, is a Drupal-based permission since there is no group membership involved yet.
  • An outsider role - This defines a person who does not have a group membership, but has a role of some kind in Drupal. You can configure Groups to allow certain roles to have more access to things inside the group entity. This is an optional role and must be configured to be active.
  • An insider role - This is a member of a group who also has a specific Drupal role. If a user has a membership within the group then their Drupal role can be used to give them extra permissions within the Group, even if their membership doesn't allow for this. Again, this is an optional role and must be configured to be active.
  • A membership - A user who is attached to a Group is given a membership. This membership has a specific set of permissions tied to it.
  • A membership with a role - It's also possible to create roles inside the Group, which gives users different levels of access to perform actions within the Group. Out of the box, Group will give you an "Admin" role, but you can add more to fine tune the permissions.

As you can see, there are a number of different permission levels to think of here, and you can get in a bit of a mess if you don't think things through properly. This simple overview is explained in some detail in the Group permission layers explained documentation page.

A common problem I see with Group setups is a set of permissions that means that if a site administrator account joins a Group they get fewer permissions in the Group than they had when they weren't a member. This happens when the insider permission set used for their role doesn't give them the same access as their outsider role did.

When you set up a group you are given the option to map certain Drupal roles to permissions on the inside of the Group. Whilst this is really useful and often how you want a site to operate, it's crucial that you get this working correctly if you want to avoid access headaches.

With this in mind, let's look at how to create a simple Group permission.

Creating A Simple Group Permission

The simplest permission you can create for a Group is defined in a x.group.permissions.yml file within a custom module (where "x" is your module name). These act just like normal Drupal permissions, except they only relate to Groups.

As an example, the following snippet was created in a module called custom_group_permissions, and so the permissions file created was custom_group_permissions.group.permissions.yml.

edit group title:
  title: 'Edit group title'
  description: 'Gives members the ability to edit the group title.'

This will create a permission called "edit group title", to which we have passed a title and description that will be used on the group permissions page. There are a lot more options available to group permissions, but this is the simplest thing you can do to get started.

After adding this file we can see the following options appear in the group permissions page for all group types.

A section of the Group permissions page, showing that certain users can be given the ability to edit group titles.

Here, we are only giving the ability to edit group titles to the administrators of the group and users with the administrator role (both external and internal to the group).

To make this permission do something we need to create a hook that will alter the way the group edit form works. Each Group entity has a method called hasPermission() that is used to check a user's permission against the Group.

The following code implements the hook_form_FORM_ID_alter() hook and targets the "group_activity_edit_form", which is used to edit Groups of the type "Activity".

/**
 * Implements hook_form_FORM_ID_alter().
 */
function custom_group_permissions_form_group_activity_edit_form_alter(&$form, \Drupal\Core\Form\FormStateInterface $form_state, $form_id) {
  // Alters the group_edit_form form for the "Activity" group type.
  $group = $form_state->getFormObject()->getEntity();
  if ($group instanceof \Drupal\group\Entity\Group) {
    $account = \Drupal::currentUser();
    if ($group->hasPermission('edit group title', $account) === FALSE) {
      $form['label']['widget']['#disabled'] = TRUE;
    }
  }
}

With this code in place, and assuming that the user has the ability to update the Group itself, they must also be given the "edit group title" permission to be able to change the title of the group (called "label" internally). You can also see the permission for this role when you export the configuration for the group, as sketched below.
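
As a hedged sketch (the exact file name and keys depend on your group type and role machine names; a member role on an "Activity" group type might be exported as group.role.activity-member.yml), the relevant part of the exported group role configuration is the permissions list:

# Hypothetical id, label and group_type values, for illustration only.
id: activity-member
label: Member
group_type: activity
permissions:
  - 'edit group title'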

Permissions Arguments

We have already seen the title and description arguments for Group permissions in the Group permissions YAML file, but what other permission attributes are available to us? The interface \Drupal\group\Access\GroupPermissionHandlerInterface in the Group module defines the list as follows.

  • title : The untranslated human-readable name of the permission, to be shown on the permission administration page. You may use placeholders as you would in t().
  • title_args : (optional) The placeholder values for the title.
  • description : (optional) An untranslated description of what the permission does. You may use placeholders as you would in t().
  • description_args : (optional) The placeholder values for the description.
  • restrict access : (optional) A boolean which can be set to TRUE to indicate that site administrators should restrict access to this permission to trusted users. This should be used for permissions that have inherent security risks across a variety of potential use cases. When set to TRUE, a standard warning message will be displayed with the permission on the permission administration page. Defaults to FALSE.
  • warning : (optional) An untranslated warning message to display for this permission on the permission administration page. This warning overrides the automatic warning generated by 'restrict access' being set to TRUE. This should rarely be used, since it is important for all permissions to have a clear, consistent security warning that is the same across the site. Use the 'description' key instead to provide any information that is specific to the permission you are defining. You may use placeholders as you would in t().
  • warning_args : (optional) The placeholder values for the warning.
  • allowed for : (optional) An array of strings that define which membership types can use this permission. Possible values are: 'anonymous', 'outsider', 'member'. Will default to all three when left empty.
  • provider : (optional) The provider name of the permission. Defaults to the module providing the permission. You may set this to another module's name to make it appear as if the permission was provided by that module.
  • section : (optional) The untranslated section name of the permission. This is used to maintain a clear overview on the permissions form. Defaults to the plugin name for plugin provided permissions and to "General" for all other permissions.
  • section_args : (optional) The placeholder values for the section name.
  • section_id : (optional) The machine name to identify the section by, defaults to the plugin ID for plugin provided permissions and to "general" for all other permissions.

Notice that all of the strings you pass here are untranslated strings, as each is accompanied by a "_args" property that allows you to pass arguments into the translation of the string. For example, the "title" property is updated like this when being prepared for display in the Group permissions section.

$permission['title'] = $this->t($permission['title'], $permission['title_args']);

To define this in the YAML file you would do something like this.

edit group title:
  title: 'Edit group @arg'
  title_args: {
    '@arg': 'title'
  }
  description: 'Gives members the ability to edit the group title.'

The same thing happens for the other parts of this permissions array that take an argument for the string.

Group Permissions On Routes

Instead of adding a hook and checking permissions directly, you can also add the _group_permission and _group_member requirements to a route definition. The following snippet shows the use of these requirements in a route definition.

custom_group_permissions.example:
  path: '/group_reports/{group}'
  defaults:
    _title: 'Group Report'
    _controller: '\Drupal\custom_group_permissions\Controller\ReportController::report'
  requirements:
    _permission: 'administer content'
    _group_permission: 'access group reports page'
    _group_member: 'TRUE'

With this in place, Drupal will detect these requirements and add the following permission checks for the route.

  • The _permission requirement is a standard Drupal permission check, which means that we are first checking to see if this user has the "administer content" permission in order to view this content.
  • The _group_permission requirement ensures that the user has the Group permission of "access group reports page". Remember that with insider and outsider permissions this permission doesn't necessarily mean that this person is a group member.
  • The _group_member requirement ensures that the user is a member of the currently loaded group, passed to the route through the "{group}" parameter in the URL.

If any of these permission checks returns a forbidden result then the route will not be shown to the user. The order of the checks here is important as it means that we apply wide-ranging permissions first and then narrow the permission checks as we go. In the above code we first run the _permission check, followed by _group_permission, and finally _group_member.

Alternatively, if you want to add Group permissions to a pre-existing route then you can inject these requirements into the route using a route subscriber, as sketched below.
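
Here is a minimal, hedged sketch of such a route subscriber; the route being altered is just an example, and the class would also need to be registered in the module's services file with the event_subscriber tag.

<?php

namespace Drupal\custom_group_permissions\Routing;

use Drupal\Core\Routing\RouteSubscriberBase;
use Symfony\Component\Routing\RouteCollection;

/**
 * Adds Group permission requirements to a pre-existing route.
 */
class GroupPermissionRouteSubscriber extends RouteSubscriberBase {

  /**
   * {@inheritdoc}
   */
  protected function alterRoutes(RouteCollection $collection) {
    // The route name here is hypothetical; target whichever route you
    // need to protect with Group permissions.
    if ($route = $collection->get('entity.group.canonical')) {
      $route->setRequirement('_group_permission', 'access group reports page');
      $route->setRequirement('_group_member', 'TRUE');
    }
  }

}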

Creating Dynamic Group Permissions

Defining flat permissions is perfectly fine, but to create a dynamic set of permissions we need to use permission callbacks.

To do this we first need to inform the Group module about our dynamic permission callback class. This is done using a permission_callbacks flag in the x.group.permissions.yml file, where we list methods on classes that define permissions.

For example, to redefine the edit group title permission as a callback we change the definition in the custom_group_permissions.group.permissions.yml file to the following.

permission_callbacks:
  - '\Drupal\custom_group_permissions\Access\CustomGroupPermissions::groupPermissions'

The CustomGroupPermissions class defined above contains a single method called groupPermissions(), which the Group module will call automatically when discovering permissions. This method just needs to return an array representation of the permissions.

Here is the class in full, which just replicates our "edit group title" permission from earlier.

<?php

namespace Drupal\custom_group_permissions\Access;

/**
 * Provides dynamic permissions for the Group module.
 */
class CustomGroupPermissions {

  /**
   * Returns an array of group permissions.
   *
   * @return array
   *   The group permissions.
   */
  public function groupPermissions() {
    $perms = [];

    $perms['edit group title'] = [
      'title' => 'Edit group title',
      'description' => 'Gives members the ability to edit the group title.',
    ];

    return $perms;
  }

}

With this code in place the permissions we have available in the Group haven't changed; the only difference is that we are generating them dynamically. The real power of this technique comes from generating dynamic permissions around the entities and other data you have on your site.

As an example, let's expand this list of permissions to include all of the base fields that come with each Group type.

In order to do this we will need to inject the entity_field.manager service into the object. Drupal will check the class definition as it is instantiated and, if it implements \Drupal\Core\DependencyInjection\ContainerInjectionInterface, it will call the create() method to generate the object, which allows us to inject the needed dependencies.

Once that is in place, it is just a case of using the groupPermissions() method to return a permission for each of the core fields that a user might be able to change in the group definition. Here is the updated code.

<?php

namespace Drupal\custom_group_permissions\Access;

use Drupal\Core\DependencyInjection\ContainerInjectionInterface;
use Drupal\Core\Entity\EntityFieldManagerInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;

/**
 * Provides dynamic permissions for the Group module.
 */
class CustomGroupPermissions implements ContainerInjectionInterface {

  /**
   * The entity field manager service.
   *
   * @var \Drupal\Core\Entity\EntityFieldManagerInterface
   */
  protected $entityFieldManager;

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container) {
    $instance = new static();
    $instance->setEntityFieldManager($container->get('entity_field.manager'));
    return $instance;
  }

  /**
   * Sets the entity field manager service.
   *
   * @param \Drupal\Core\Entity\EntityFieldManagerInterface $entityFieldManager
   *   The entity field manager service.
   *
   * @return self
   *   The current object.
   */
  public function setEntityFieldManager(EntityFieldManagerInterface $entityFieldManager): self {
    $this->entityFieldManager = $entityFieldManager;
    return $this;
  }

  /**
   * Returns an array of group type permissions.
   *
   * @return array
   *   The group permissions.
   */
  public function groupPermissions() {
    $perms = [];

    foreach ($this->entityFieldManager->getBaseFieldDefinitions('group') as $field => $definition) {
      if ($definition->isReadOnly() === TRUE) {
        continue;
      }
      $perms['edit group ' . $field] = [
        'title' => 'Edit group @fieldname',
        'title_args' => [
          '@fieldname' => $definition['label'],
        ]
      ];
    }

    return $perms;
  }

}

With this in place, our permissions list in the Group interface is now expanded to include all the base fields for the Group. Of course, we still need to code the permission checks into the form alter hook, but we now have the permissions in place.

Using this system we can create permissions for things like content, taxonomy, workflows, or anything else outside of Groups that we might be interested in. There is a limitation, however: we can't define these permissions per Group type, since the permissions callback accepts no arguments.

When dealing with entities it might be better to think about using the Group relationship plugins to get the job done.

Adding Entity Type Permissions To Groups

Although permission callbacks are able to create dynamic permissions, they are limited by the fact that they have no knowledge of the Group they are acting upon. This means that they apply to all group types, even those you have no intention of using them with.

The solution to this is to create a Group plugin, which can be loaded into a Group type and used to dynamically determine the permissions for the group. Group plugins are used to create relationships between Group entities and other entities in your site, but we can use certain parts of this relationship builder to give us a pluggable permissions system. This can augment the existing permissions on a site by applying them at the Group level. To do this we need to create a GroupRelationType plugin, which must live in the src/Plugin/Group/Relation directory inside your custom module.

The following is a typical GroupRelationType plugin class, which extends the \Drupal\group\Plugin\Group\Relation\GroupRelationBase class. This base class provides all of the functionality we might need for the plugin to work with the Group module, and so it is useful to extend.

Here is an example GroupRelationType plugin that will be used to control access to the "user" entity attached to this Group.
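
A minimal sketch of such a plugin follows, assuming the plugin ID "custom_group_permissions" that the service definitions later in this article rely on, and the entity_access flag that lets the plugin take part in entity access checks.

<?php

namespace Drupal\custom_group_permissions\Plugin\Group\Relation;

use Drupal\group\Plugin\Group\Relation\GroupRelationBase;

/**
 * Provides a group relation type for users, used here for permissions.
 *
 * @GroupRelationType(
 *   id = "custom_group_permissions",
 *   label = @Translation("Custom group permissions"),
 *   description = @Translation("Adds custom user permissions to groups."),
 *   entity_type_id = "user",
 *   entity_access = TRUE,
 * )
 */
class CustomGroupPermissions extends GroupRelationBase {
}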

Note that this relationship already exists via the membership plugin; we are just adding to those permissions with our own set of permissions and access rules. It is possible to create relationship plugins without actively forming the relationship between the Group and the entity. Adding the full relationship is certainly possible, but is slightly beyond the scope of this article.

With this in place you can now activate it in the Group Content tab inside your custom group type setup, at the path "/admin/group/types/manage/<group type>/content". You should see the following at the bottom of the screen.

The Group permissions plugin in the group available content page.

Clicking "Install" will allow the plugin to work with the Group type you are currently looking at.

To use this plugin for permissions we must create a service that the Group module will pick up and use to populate the required permissions.

The service name must follow the following format:

group.relation_handler.$handler_type.$group_relation_type_id

This means that we must call our service "group.relation_handler.permission_provider.custom_group_permissions", as it is the "permission_provider" for the "custom_group_permissions" relation plugin. The service definition for the above class would be as follows.

services:
  group.relation_handler.permission_provider.custom_group_permissions:
    class: 'Drupal\custom_group_permissions\Plugin\Group\RelationHandler\CustomGroupPermissionProvider'
    arguments: ['@group.relation_handler.permission_provider']
    shared: false

As we want to create a permission provider the service class we create must implement the interface \Drupal\group\Plugin\Group\RelationHandler\PermissionProviderInterface. If we attempt to create a permission provider without implementing this interface then Group will throw an error. Group also comes with a trait called \Drupal\group\Plugin\Group\RelationHandler\PermissionProviderTrait that implements all of the required methods for this interface, which allows us to just write the code we need to get our permissions working.

The key part of this setup is that we inject the "group.relation_handler.permission_provider" service into the object, which we then set as the parent property within the class. This parent property is an object of \Drupal\group\Plugin\Group\RelationHandlerDefault\PermissionProvider that is used to fill in the gaps around the permission providers. Without this in place you'll find Groups throwing a few errors with regards to the parent being missing.

The custom permissions we create using this method are all based around operations that might be performed on an entity of some kind. In our case, because we stipulated that the entity type we are working with is the User entity, the Group module will ask our service for permissions for the view, update, delete and create operations for this entity. Group will go through the available operations for the entity and call the getPermission() method for each operation in turn, along with the scope of the permission (which is either "own" or "any"). The target of the permission is either the entity itself or the relationship created when that entity is added to the Group.

In the following example we are setting the single permission of "view custom_group_permissions entity" for our module. This will translate to the User entity in the permissions setup.

<?php

namespace Drupal\custom_group_permissions\Plugin\Group\RelationHandler;

use Drupal\group\Plugin\Group\RelationHandler\PermissionProviderInterface;
use Drupal\group\Plugin\Group\RelationHandler\PermissionProviderTrait;

/**
 * Provides permissions for the custom_group_permissions relation plugin.
 */
class CustomGroupPermissionProvider implements PermissionProviderInterface {

  use PermissionProviderTrait;

  /**
   * Constructs a new CustomGroupPermissionProvider.
   *
   * @param \Drupal\group\Plugin\Group\RelationHandler\PermissionProviderInterface $parent
   *   The default permission provider handler.
   */
  public function __construct(PermissionProviderInterface $parent) {
    $this->parent = $parent;
  }

  /**
   * {@inheritdoc}
   */
  public function getPermission($operation, $target, $scope = 'any') {
    if ($operation === 'view' && $target === 'entity' && $scope === 'any') {
      return "$operation $this->pluginId $target";
    }
  }

}

With this in place, this permission will appear in the Group permissions page and be available as a permission within the group.

It should be noted that the permissions must be to do with the operations you would perform on an entity, which means you can't return arbitrary permissions here (like "edit group title") as they will be ignored by the Group permissions system.

This permission doesn't actually do anything on its own, so we now need to implement an access check. The first thing we need to create is an access control handler service that will be used to perform the permission check.

  group.relation_handler.access_control.custom_group_permissions:
    class: 'Drupal\custom_group_permissions\Plugin\Group\RelationHandler\CustomGroupAccessControl'
    arguments: ['@group.relation_handler.access_control']
    shared: false

The CustomGroupAccessControl class is similar to the CustomGroupPermissionProvider class, but in this case we are implementing the \Drupal\group\Plugin\Group\RelationHandler\AccessControlInterface that Group provides. There is also a handy \Drupal\group\Plugin\Group\RelationHandler\AccessControlTrait that we can use to fill in any gaps in the handler. The "group.relation_handler.access_control" service is passed in as an argument, which is an object of \Drupal\group\Plugin\Group\RelationHandlerDefault\AccessControl that we assign to the parent property in the class.

The CustomGroupAccessControl class we create here has quite a bit of complexity, although most of that is involved with finding the Groups that relate to the entity passed to it and checking the permissions of those Groups. We are also making sure that the user has permission to perform the operation on the entity if they are the author of that entity (and they have the appropriate permissions).

<?php

namespace Drupal\custom_group_permissions\Plugin\Group\RelationHandler;

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Access\AccessResultNeutral;
use Drupal\Core\Entity\EntityInterface;
use Drupal\Core\Session\AccountInterface;
use Drupal\group\Plugin\Group\RelationHandler\AccessControlInterface;
use Drupal\group\Plugin\Group\RelationHandler\AccessControlTrait;
use Drupal\user\EntityOwnerInterface;

/**
 * Checks entity access for the custom_group_permissions relation plugin.
 */
class CustomGroupAccessControl implements AccessControlInterface {

  use AccessControlTrait;

  /**
   * Constructs a new CustomGroupAccessControl.
   *
   * @param \Drupal\group\Plugin\Group\RelationHandler\AccessControlInterface $parent
   *   The default access control handler.
   */
  public function __construct(AccessControlInterface $parent) {
    $this->parent = $parent;
  }

  /**
   * {@inheritdoc}
   */
  public function entityAccess(EntityInterface $entity, $operation, AccountInterface $account, $return_as_object = FALSE) {
    // Assume we will return a neutral permission check by default.
    $access = AccessResultNeutral::neutral();

    if ($this->supportsOperation($operation, 'entity') === FALSE) {
      return $access;
    }

    $storage = $this->entityTypeManager->getStorage('group_relationship');
    $groupRelationships = $storage->loadByEntity($entity);
    if (empty($groupRelationships)) {
      // If the entity does not belong to any group, we have nothing to say.
      return $access;
    }

    /** @var \Drupal\group\Entity\GroupRelationship $groupRelationship */
    foreach ($groupRelationships as $groupRelationship) {
      $group = $groupRelationship->getGroup();
      $access = AccessResult::allowedIf($group->hasPermission("$operation $this->pluginId entity", $account));

      $owner_access = $access->orIf(AccessResult::allowedIf(
        $group->hasPermission("$operation $this->pluginId entity", $account)
        && $group->hasPermission("$operation own $this->pluginId entity", $account)
        && $entity instanceof EntityOwnerInterface
        && $entity->getOwnerId() === $account->id()
      ));

      $access = $access->orIf($owner_access);

      $access->addCacheableDependency($groupRelationship);
      $access->addCacheableDependency($group);
    }

    return $access;
  }

}

This code doesn't do anything on its own; it first needs to be called from an access check.

To do this we need to intercept the access check using an implementation of the hook_entity_access() hook. In this hook we check for plugins that are connected to the entity type we have in hand (which in our case is User) and then load the above access control handler using the "group_relation_type.manager" service. We need to load the handler in this way as the Group module will populate the object with a lot of useful items that otherwise wouldn't exist if we just created the service on its own.

Here is the implementation of the hook_entity_access() hook.

/**
 * Implements hook_entity_access().
 */
function custom_group_permissions_entity_access(\Drupal\Core\Entity\EntityInterface $entity, $operation, \Drupal\Core\Session\AccountInterface $account) {
  if ($entity->isNew()) {
    return \Drupal\Core\Access\AccessResult::neutral();
  }
  /** @var \Drupal\group\Plugin\Group\Relation\GroupRelationTypeManagerInterface $groupRelationTypeManager */
  $groupRelationTypeManager = \Drupal::service('group_relation_type.manager');

  // Find all the group relations that define access to this entity.
  $plugin_ids = $groupRelationTypeManager->getPluginIdsByEntityTypeAccess($entity->getEntityTypeId());
  if (empty($plugin_ids)) {
    return \Drupal\Core\Access\AccessResult::neutral();
  }
  
  foreach ($plugin_ids as $plugin) {
    // Attempt to load each plugin service and check for the entity access.
    $service = "group.relation_handler.access_control.$plugin";
    if (\Drupal::hasService($service) === TRUE) {
      $pluginObject = $groupRelationTypeManager->createHandlerInstance($plugin, 'access_control');
      return $pluginObject->entityAccess($entity, $operation, $account);
    }
  }
}

Now, when a user visits the user profile page of another user this access check will trigger and either allow or deny them access based on the permission setup in Drupal and the Group itself.

Whilst this works, remember that we are using an existing relationship plugin (i.e. the members of a Group, also known as users) to perform our own permission checks that augment the existing permissions of the Group. This is an important consideration: if you want to check access for something that isn't part of a Group relationship you will need to fully implement some of the other code involved with relationships. Once that code is in place you can add those entities to your groups and perform permission checks on them in much the same way as we have done here.

Permission Decorators

Finally, it's worth talking a little bit about permission decorators. This is where we take a pre-existing access check class and "decorate" it so that it understands Groups and can therefore perform Group-level permission checks on entities. This technique can be quite complex, but I'm adding it for completeness since it has a good use case: allowing any permission to be applied to a Group, as long as there is a relationship between the Group and the entity being checked.

The first thing we need to do is pick an access_check service that we want to decorate. Since we are dealing with User entities we need to decorate the access_check.entity service as that gives us the access check we need. This service has the following definition.

  access_check.entity:
    class: Drupal\Core\Entity\EntityAccessCheck
    tags:
      - { name: access_check, applies_to: _entity_access }

To decorate this we need to create another service definition and add the "decorates" key with the name of the service we want to decorate. We also pass in additional arguments that allow us to use other services within this new service.

  custom_group_permissions.entity:
    class: 'Drupal\custom_group_permissions\Access\CustomGroupUserPermissions'
    arguments: ['@entity_type.manager', '@group_relation_type.manager']
    decorates: access_check.entity

With this in place we can now create the CustomGroupUserPermissions class. This class does pretty much the same job as the CustomGroupAccessControl class we defined earlier, in that we perform some access checks on the entity and group. The main difference here is that we need to load the entity from the route and use a similar permission check to find out if the user has permission within any of their attached groups.

Here is the full source code of the CustomGroupUserPermissions class.

<?php

namespace Drupal\custom_group_permissions\Access;

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Access\AccessResultNeutral;
use Drupal\Core\Entity\ContentEntityInterface;
use Drupal\Core\Entity\EntityAccessCheck;
use Drupal\Core\Entity\EntityInterface;
use Drupal\Core\Entity\EntityTypeManagerInterface;
use Drupal\Core\Routing\RouteMatchInterface;
use Drupal\Core\Session\AccountInterface;
use Drupal\group\Plugin\Group\Relation\GroupRelationTypeManagerInterface;
use Drupal\user\EntityOwnerInterface;
use Symfony\Component\Routing\Route;

/**
 * Decorates the access_check.entity service to add Group permission checks.
 */
class CustomGroupUserPermissions extends EntityAccessCheck {

  /**
   * The entity type manager service.
   *
   * @var \Drupal\Core\Entity\EntityTypeManagerInterface
   */
  protected $entityTypeManager;

  /**
   * The group relation type manager service.
   *
   * @var \Drupal\group\Plugin\Group\Relation\GroupRelationTypeManagerInterface
   */
  protected $groupRelationTypeManager;

  /**
   * Constructs a new CustomGroupUserPermissions object.
   */
  public function __construct(EntityTypeManagerInterface $entity_type_manager, GroupRelationTypeManagerInterface $group_relation_type_manager) {
    $this->entityTypeManager = $entity_type_manager;
    $this->groupRelationTypeManager = $group_relation_type_manager;
  }

  /**
   * {@inheritDoc}
   */
  public function access(Route $route, RouteMatchInterface $route_match, AccountInterface $account) {
    $access = parent::access($route, $route_match, $account);
    if (!$access->isAllowed()) {
      // Load the entity from the route.
      $requirement = $route->getRequirement('_entity_access');
      [$entity_type, $operation] = explode('.', $requirement);

      $parameters = $route_match->getParameters();
      if ($parameters->has($entity_type)) {
        $entity = $parameters->get($entity_type);
        if ($entity instanceof EntityInterface) {
          // Get the specific group access for this entity.
          $group_access = $this->checkGroupAccess($entity, $operation, $account);
          // Combine the group access with the upstream access.
          $access = $access->orIf($group_access);
        }
      }
    }

    return $access;
  }

  /**
   * Determine group-specific access to an entity.
   *
   * @param \Drupal\Core\Entity\ContentEntityInterface $entity
   *   The entity to check.
   * @param string $operation
   *   The operation to check access for.
   * @param \Drupal\Core\Session\AccountInterface $account
   *   The user to check access for.
   *
   * @return \Drupal\Core\Access\AccessResultInterface
   *   Returns allowed access if the entity belongs to a group, and the user
   *   has both the 'view custom_group_permissions entity' and the
   *   'view own custom_group_permissions entity' permission in a group it
   *   belongs to.
   */
  protected function checkGroupAccess(ContentEntityInterface $entity, $operation, AccountInterface $account) {
    // Assume we will return a neutral permission check by default.
    $access = AccessResultNeutral::neutral();

    $storage = $this->entityTypeManager->getStorage('group_relationship');
    $groupRelationships = $storage->loadByEntity($entity);
    if (empty($groupRelationships)) {
      // If the entity does not belong to any group, we have nothing to say.
      return $access;
    }

    /** @var \Drupal\group\Entity\GroupRelationship $groupRelationship */
    foreach ($groupRelationships as $groupRelationship) {
      $group = $groupRelationship->getGroup();
      $access = AccessResult::allowedIf($group->hasPermission("$operation custom_group_permissions entity", $account));

      $owner_access = $access->orIf(AccessResult::allowedIf(
        $group->hasPermission("$operation custom_group_permissions entity", $account)
        && $group->hasPermission("$operation own custom_group_permissions entity", $account)
        && $entity instanceof EntityOwnerInterface
        && $entity->getOwnerId() === $account->id()
      ));

      $access = $access->orIf($owner_access);

      $access->addCacheableDependency($groupRelationship);
      $access->addCacheableDependency($group);
    }

    return $access;
  }

}

With these two elements in place it is possible to perform a permission check on any entity that has been added to a Group through a relationship. The Group relationship to the entity is still required for this permission check to work, but it makes sense that Groups would control permissions to entities in this way.

If you want to see this technique in action then it can be found within the Group Content Moderation module, which is used to control access to revisions of entities within a Group. That module uses a slightly different technique involving service providers to decorate the needed services, so that the decoration only happens if the Content Moderation module is active. I should note that it only supports Group version 2.0, although only minor changes are required to update it to Group version 3.0.

Conclusion

Adding permissions to Groups can be quite complex, but most of that complexity is in ensuring that the access checks you want to run are performed in the appropriate place. You can create Group permissions statically, dynamically, or through a plugin permissions handler, and once you have the access code working they become a useful part of your Group functionality.

It is also crucial that you understand how outsider and insider permissions work and how they can cause users to lose access to groups when they join. The first hint that you've got something wrong here is when groups start disappearing from the Group entity listing page, but it can be more subtle, like losing member administration access.

Group is a powerful module and allows for all sorts of functionality to be created by joining Users with different types of entities. I have found that most of the complexity of Groups comes from ensuring that users have access to the right entities within Groups, whether they are a member of the Group or not. 

I haven't gone into creating custom relationships in Groups (other than augmenting the permissions of existing relationships) but I may follow up this article with another one detailing the process involved in that.

If you want to see this code together in a single place then I've created a GitHub repo for the Custom Group Permissions module. Please don't install this module in production though; it's only intended to demonstrate the elements described in this article.
