Jun 09 2021

A small leak can sink a great ship. ~ Benjamin Franklin

In the previous blog post, we covered the basic setup and configuration for Mautic plugins that leverage the IntegrationBundle. The key part of any IntegrationBundle-based plugin is handling the authentication mechanism.

So in this blog post, we will cover the various types of authentication and use one of them in the plugin we built last time. We will continue developing the same plugin.

The IntegrationBundle from Mautic core supports multiple authentication methods: API key-based authentication, Basic Auth, OAuth1a, OAuth2, OAuth2 two-legged, OAuth2 three-legged, and so on. The IntegrationBundle exposes all of these authentication protocols through a Guzzle HTTP client.

In this blog post, we will implement Basic Auth authentication with a third-party service.

The following steps enable Basic Auth in our plugin:

  • Have a form with fields for storing the Basic Auth credentials (Form/Type/ConfigAuthType.php).
  • Prepare a “Credentials” class to be used by the Client class.
  • Prepare a “Client” service class to be used by a dedicated ApiConsumer class.
  • Use the Client service and implement the API call-related methods in the ApiConsumer service class.

Step 1

The plugin depends on third-party APIs for the data it manipulates, and these APIs are gated behind authentication and authorization mechanisms. For this post, we have chosen Basic Auth as the authentication method.

Basic Auth needs a username and password to communicate with the API. So we need a form that accepts the username and password, which are required when connecting to the API endpoints.

Let's create a form named “ConfigAuthType.php” under the “MauticPlugin\HelloWorldBundle\Form\Type” namespace. This class extends Symfony's AbstractType class, and we implement the buildForm() method to add the required fields. The example code should look like this:
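A trimmed sketch of what such a form class might look like (the field names and options here are illustrative, not the exact plugin code):

```php
<?php

namespace MauticPlugin\HelloWorldBundle\Form\Type;

use Symfony\Component\Form\AbstractType;
use Symfony\Component\Form\Extension\Core\Type\PasswordType;
use Symfony\Component\Form\Extension\Core\Type\TextType;
use Symfony\Component\Form\FormBuilderInterface;

class ConfigAuthType extends AbstractType
{
    public function buildForm(FormBuilderInterface $builder, array $options)
    {
        // Username field for Basic Auth.
        $builder->add('username', TextType::class, [
            'label'    => 'Username',
            'required' => true,
        ]);

        // Password field for Basic Auth.
        $builder->add('password', PasswordType::class, [
            'label'    => 'Password',
            'required' => true,
        ]);
    }
}
```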

You can see the full version here.

It's now time to tell Mautic to pick up this form during configuration. To do so, we have to define an integration service class implementing ConfigFormInterface and ConfigFormAuthInterface. ConfigFormAuthInterface is the interface that lets you specify the configuration form via its getAuthConfigFormName() method.

So we name this class "ConfigSupport" and place it under "MauticPlugin\HelloWorldBundle\Integration\Support". Here is a snippet from the ConfigSupport class:
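A condensed sketch of the idea (the interface FQCNs follow the IntegrationsBundle naming mentioned above; the real class implements further support interfaces as well):

```php
<?php

namespace MauticPlugin\HelloWorldBundle\Integration\Support;

use Mautic\IntegrationsBundle\Integration\Interfaces\ConfigFormAuthInterface;
use Mautic\IntegrationsBundle\Integration\Interfaces\ConfigFormInterface;
use MauticPlugin\HelloWorldBundle\Form\Type\ConfigAuthType;

class ConfigSupport implements ConfigFormInterface, ConfigFormAuthInterface
{
    // Tell Mautic which form type renders the authentication tab.
    public function getAuthConfigFormName(): string
    {
        return ConfigAuthType::class;
    }
}
```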

You can find the complete ConfigSupport class here.

Time to let the IntegrationBundle know about our "ConfigSupport" class. To do so, register it as an integration service, i.e. create a service listing with the mautic.config_integration tag. The following is the relevant snippet from Config.php (the plugin configuration file).
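In a Mautic plugin, Config.php returns a configuration array; a minimal sketch of the registration might look like this (the service id is illustrative):

```php
<?php

// plugins/HelloWorldBundle/Config/config.php (illustrative path).
return [
    'services' => [
        'integrations' => [
            // Register the config support class with the tag that the
            // IntegrationsBundle scans for configuration integrations.
            'helloworld.integration.configuration' => [
                'class' => \MauticPlugin\HelloWorldBundle\Integration\Support\ConfigSupport::class,
                'tags'  => [
                    'mautic.config_integration',
                ],
            ],
        ],
    ],
];
```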

Now, at this point, we have the following things ready:

  • A service class to register the configuration support class.
  • A class to provide the configuration form.

You can view all the code changes for step 1 in this commit.

Step 2

For Basic Auth, the integrations bundle uses the “HttpFactory” class to build the HTTP client. This class needs an object called “Credentials,” consisting of all the required keys for authentication.

If you look at the “getClient()” method of the HttpFactory class under the “Mautic\IntegrationsBundle\Auth\Provider\BasicAuth\” namespace, you'll see it needs an object implementing “AuthCredentialsInterface.”

So our next step will be to create a separate class for credentials and create a new custom client to use those credentials.

For that, create a new class called “Credentials” under MauticPlugin\HelloWorldBundle\Connection.

The class should look like this:
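A trimmed sketch, assuming the credentials object simply exposes getters for the username and password that the BasicAuth HttpFactory reads (check the actual interface in Mautic\IntegrationsBundle\Auth\Provider for the exact contract):

```php
<?php

namespace MauticPlugin\HelloWorldBundle\Connection;

use Mautic\IntegrationsBundle\Auth\Provider\AuthCredentialsInterface;

class Credentials implements AuthCredentialsInterface
{
    /** @var string */
    private $username;

    /** @var string */
    private $password;

    public function __construct(string $username, string $password)
    {
        $this->username = $username;
        $this->password = $password;
    }

    // The HttpFactory uses these when building the Guzzle client.
    public function getUsername(): string
    {
        return $this->username;
    }

    public function getPassword(): string
    {
        return $this->password;
    }
}
```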

This is a trimmed version of the class, and you can find the full version here.

Now that we have completed the Credentials class, we need a client that will make the HTTP requests. Typically, we don't need a separate client class if we have no additional logic to handle; in such cases, we can just call the HttpFactory class and get the client like this:
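Something along these lines (a sketch; how you obtain the factory instance depends on your service wiring):

```php
<?php

use Mautic\IntegrationsBundle\Auth\Provider\BasicAuth\HttpFactory;
use MauticPlugin\HelloWorldBundle\Connection\Credentials;

// Build a Guzzle client pre-configured with Basic Auth.
$credentials = new Credentials('api_user', 'api_password');

/** @var HttpFactory $factory Injected IntegrationsBundle service. */
$client = $factory->getClient($credentials);

// $client is a \GuzzleHttp\ClientInterface, ready to call the API.
$response = $client->get('https://api.example.com/contacts');
```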

In our case, apart from fetching data, we need to cache it and polish it to be easily used for Mautic’s Lead entity.

So we will create a new class called “Client” under the namespace MauticPlugin\HelloWorldBundle\Connection.

The job of the “Client” class is to get the object of ClientInterface (\GuzzleHttp\ClientInterface).

If you need the full class details, you can follow this link. Because we are kind and want to share more, we will quickly review a few methods that interest us and that work with the Credentials class we wrote previously.

In the “getClient()” method, we call the “getCredentials()” method, which creates a “Credentials” object from the stored API keys.

By using the credentials object, we will get the client via the HttpFactory service call.

So at the end of this phase, we have the following things:

  • A Credentials object to pass into the getClient() method.
  • A new Client class that wraps the get() method and fetches other configuration.
  • A new Config.php file inside the “HelloWorldBundle/Integrations” folder to provide configuration and integration settings.
  • Commit.

Step 3

We now have the entire setup to store credentials and send requests. It is time to use it from the other classes that need it.

In our current plugin, we have created a separate class called “ApiConsumer.” The reason: we have several other get methods and API calls, and consolidating all the API methods into a single class makes them easier to manage.

To use our Client service, created via Client.php, we need to create a service that can use this class. That way, we can reuse this class without worrying about anything else.

Create a new service called “helloworld.connection.client” and add it to Config.php in the 'other' services section.
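A sketch of those service definitions inside Config.php (the argument service ids are illustrative):

```php
<?php

// Inside the 'services' array of the plugin's Config.php.
'other' => [
    // The low-level HTTP client wrapper built in step 2.
    'helloworld.connection.client' => [
        'class'     => \MauticPlugin\HelloWorldBundle\Connection\Client::class,
        'arguments' => [
            'mautic.integrations.auth_provider.basic_auth',
        ],
    ],
    // The API consumer that other services depend on.
    'helloworld.connection.api_consumer' => [
        'class'     => \MauticPlugin\HelloWorldBundle\Connection\ApiConsumer::class,
        'arguments' => [
            'helloworld.connection.client',
        ],
    ],
],
```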

Similarly, we need to add additional services for the ApiConsumer class to call from other services.

You can refer to the source code to view the entire ApiConsumer class. Here is a snippet of the get() method.
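A sketch of such a get() method, delegating to the Client service (method names and the JSON decoding are assumptions, not the exact plugin code):

```php
<?php

namespace MauticPlugin\HelloWorldBundle\Connection;

class ApiConsumer
{
    /** @var Client */
    private $client;

    public function __construct(Client $client)
    {
        $this->client = $client;
    }

    /**
     * Fetch records from the remote API and decode the JSON payload.
     */
    public function get(string $endpoint, array $query = []): array
    {
        $response = $this->client->get($endpoint, $query);

        return json_decode((string) $response->getBody(), true) ?? [];
    }
}
```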

As you can see, we directly use the Client service's reference and call the get() method from Client.php.

So at this point, we are done with the third step, where we used our authentication mechanism to fetch the data from the API.

You can refer to the commit to see the code for this step.

Conclusion

Now that the plugin is ready to communicate with a third-party API to churn out more leads, let us thank the IntegrationBundle's authentication support.

You can read about the different supported authentication types here.

Also, we have the third blog post coming up about how to manage and sync data coming from API. So stay tuned!!

Jun 09 2021

Defining your own Drupal block plugins in custom code is really powerful, but sometimes you can feel limited by what those blocks have access to. Your block class is like a blank canvas that you know you'll be pulling data into, but how should you get that data from the surrounding page? Often you have to resort to fetching the entity for the current page out of its route parameters (e.g. on a node page), in order to get the values out of its fields that you want to display. Plugins can actually have a context passed to them directly - which can be common things like the logged-in user, or the node for the page being viewed. Let's have a look at how to tell Drupal what your plugin needs, so you don't have to do the donkey work.

If you've created a block plugin, you'll already be aware of the annotation comment just above the class name at the top of your PHP file. It might look something like this:
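For instance (a hypothetical block; the plugin id, label and class name are illustrative):

```php
/**
 * Provides a block that shows something about the current node.
 *
 * @Block(
 *   id = "my_node_info_block",
 *   admin_label = @Translation("My node info block"),
 *   context_definitions = {
 *     "node" = @ContextDefinition("entity:node", label = @Translation("Node"))
 *   }
 * )
 */
class MyNodeInfoBlock extends BlockBase {
  // ...
}
```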

Spot that last property in the annotation: the context_definitions part. That's where the block is defined as requiring a node context. The 'entity:node' part tells Drupal that the context should be a node; Drupal supports it just being 'entity' or various other things, such as 'language' or even 'string'. You can very easily get hold of the context for your block, e.g. in the build() method of your class, allowing your block to adapt to its surroundings:

/**
 * {@inheritdoc}
 */
public function build() {
  $entity = $this->getContextValue('node');

  return $entity->field_my_thing->view();
}

I've used a very simple tip from our article on rendering fields for the output of this method. But the key here is the use of $this->getContextValue('node');. Our block class is extending the BlockBase base class, which gives us that getContextValue() method to use (via ContextAwarePluginTrait, which you could use directly in your plugin class if you're not extending BlockBase). The 'node' parameter that we've passed to it should match the key of the context definition array up in the class annotation - it's just a key that you could rename to anything helpful. Plugins can specify multiple contexts, so distinguish each with appropriate names.

In this basic case of using a node, the chances are that you're just wanting to use the node that the current page is for. Drupal core has 'context provider' services - one of which provides exactly that. Most basic Drupal installations probably won't have other context providers that provide nodes, so the node just gets automatically passed through, without you having to do anything else to wire it up. Brilliantly, the block will only show up when on a node page, regardless of any other visibility settings in the block placement configuration. You can bypass that by flagging that the context is optional in its definition - spot the 'required' key:

context_definitions = {
  "node" = @ContextDefinition("entity:node", required = FALSE, label = @Translation("Node")),
}

A slightly more interesting example is for users, as Drupal core can potentially provide two possible contexts for them:

  1. The currently logged-in user, or at least the entity object representing the anonymous user if no-one is logged in.
  2. The user being viewed - which will only be available when visiting an actual user profile page.

When more than one possible context is available, block placement configuration forms offer the choice of which to use. So you might want a block in a sidebar on profile pages to show things to do with the user who owns that profile - in which case, select the 'User being viewed' option in the dropdown. Otherwise the data your block shows will just be about you, the logged-in user, even when looking at someone else's profile. Internally, your selection in the dropdown gets stored as the context mapping, which you can see in the exported configuration files for any context-aware block (including those automatically selected due to only one context being available).

If all this talk of Contexts is reminding you of something, it's because the concept was brought into core after being used with Panels in older versions of Drupal. Core's Layout Builder now uses it heavily, usually to pass the entity being viewed into the blocks that represent fields etc that you place in each section. For anyone that really wants to know, those blocks are defined using a plugin deriver - i.e. a separate class that defines multiple possible block plugins, dynamically. In the case of Layout Builder, that means a block is dynamically created for every field an entity can have. If you use plugin derivers, you might need dynamically set up context definitions too. So in the deriver's getDerivativeDefinitions() method, you could have something like this, the PHP equivalent of the regular block's static annotation:

/**
 * {@inheritdoc}
 */
public function getDerivativeDefinitions($base_plugin_definition) {
  $derivative['context_definitions'] = [
    'node' => new ContextDefinition('entity:node', $this->t('Node')),
  ];

  $this->derivatives['your_id'] = $derivative;
  return $this->derivatives;
}

I've only lightly touched on context provider services, but you can of course create your own too. I recently used one to provide a related 'section' taxonomy term to blocks, which pulls from an entity reference field that nearly all entity types & bundles on a site had. The blocks display fields from the current page's parent section. It made for a common interface separating that 'fetching' code from the actual block 'display' code. I recommend understanding, copying & adapting the NodeRouteContext class (and accompanying service definition in node.services.yml) for your case if you have a similar need.

I hope this awareness of context allows your blocks to seamlessly adapt to their surroundings like good chameleons, and maybe even write better code along the way. I know I've had many blocks in the past that each had their own ways of pulling relevant data for a page. Contexts seem like the answer to me as they separate fetching and display data, so each part can be done well. Getting creative with making my own context types and context providers was fun too, though probably added unnecessary complication in the end. Let me know in the comments what context-aware plugins enable you to do!

Photo by Andrew Liu on Unsplash

Jun 09 2021

There are three types of configuration data:

The Simple Configuration API

  • Used to store a single configuration object.

  • Namespaced by the module name.

  • Can contain a list of structured variables (string, int, array, ...).

  • Default values live in YAML: config/install/module_name.config_object_name.yml.

  • Have a schema defined in config/schema/module_name.schema.yml.

Code example:
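A minimal sketch of reading and writing a simple config object (the object and key names are illustrative):

```php
<?php

// Read (immutable) configuration.
$config = \Drupal::config('module_name.settings');
$value = $config->get('some_key');

// Write configuration via the editable variant.
\Drupal::configFactory()
  ->getEditable('module_name.settings')
  ->set('some_key', 'new value')
  ->save();
```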

The States

  • Not exportable; simple values that strongly depend on the environment.

  • Values can differ between environments (e.g. last_cron and maintenance_mode have different values on your local and on the production site).
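The State API is used in much the same way (the key name here is illustrative):

```php
<?php

// Store an environment-specific value.
\Drupal::state()->set('module_name.last_sync', \Drupal::time()->getRequestTime());

// Read it back, with a default if it has never been set.
$last_sync = \Drupal::state()->get('module_name.last_sync', 0);
```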

The Entity Configuration API

  • Configuration objects that can have multiple instances (e.g. views, image styles, CKEditor profiles, ...).

  • New configuration entity types can be defined in custom modules.

  • Have a schema defined in YAML.

  • Not fieldable.

  • Values can be exported and stored as YAML, and can be shipped by modules in config/install.

Code example:

  https://www.drupal.org/node/1809494

Store configuration objects in a module:

Config objects (not states) can be stored in a module and imported during the module's install process.

To export a config object to a module, you can use the configuration synchronization UI at /admin/config/development/configuration/single/export.

Select the configuration object type, then the object; copy the content and store it in your custom module's config/install directory, following the naming convention provided below the textarea.

You can also use the Features module, which is now a simple configuration packager.

If, after the module is installed, you want to update the config object, you can use the following drush command:

Configuration override system

Remember the $conf variable in settings.php in D6/D7 for overriding variables?

In D8, you can also override variables from the Configuration API:
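In settings.php, the global $config array overrides stored configuration (the site name here is just an example):

```php
<?php

// settings.php: override values from the Configuration API.
// These overrides do not appear in the admin UI forms.
$config['system.site']['name'] = 'My production site name';
$config['system.performance']['css']['preprocess'] = FALSE;
```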

You can also do overrides at runtime.

Example: getting a value in a specific language:

Drupal provides a storage for overrides, and a module can specify its own way of overriding. For deeper information, look at:

https://www.drupal.org/node/1928898

Configuration schema

The config objects of the Config API and of the Configuration Entity API have an attached schema defined in module_name/config/schema/module_name.schema.yml.

These schemas are not mandatory, but if you want translatable strings, working configuration forms, or consistent exports, you should take the time to implement the schema for your configuration objects. If you don't want to, you can instead implement the toArray() method in your config entity class.

Examples, docs and information: https://www.drupal.org/node/1905070

Configuration dependencies calculation

By default, dependencies are declared in the .info.yml of the module that defines the config object, as in D6/D7.

But a config entity can implement the calculateDependencies() method to provide dynamic dependencies based on the config entity's values.

Think of a config entity that stores field display information for a content entity's specific view mode: the modules that provide the fields and formatters need to be dependencies, but these are dynamic, depending on the content entity display.

More information : https://www.drupal.org/node/2235409

Jun 09 2021

Resources

Migrate in Drupal 8

Migrate is now included in the Drupal core for making the upgrade path from 6.x and 7.x versions to Drupal 8.

Drupal 8 has two new modules:
Migrate: « Handles migrations »
Migrate Drupal: « Contains migrations from older Drupal versions. »

Neither of these modules has a user interface.

« Migrate » contains the core framework classes: the destination, source and process plugin schemas and definitions, and finally the migration config entity schema and definition.

« Migrate Drupal » contains implementations of destination, source and process plugins for Drupal 6 and 7. You can use it or extend it; it's ready to use. But this module doesn't contain the configuration needed to migrate all your data from your older Drupal site to Drupal 8.

The core provides migration configuration entity templates, located in each core module that needs one, under a folder named 'migration_templates'. To find all the templates, you can use this command in your Drupal 8 site:
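For example, a plain filesystem search from the Drupal root (adjust the path to your checkout):

```
# List every migration template folder shipped with core.
find core/modules -type d -name 'migration_templates'

# Or list the template files themselves.
find core/modules -path '*migration_templates*' -name '*.yml'
```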

To make a Drupal core-to-core migration, you will find all the info here: https://www.drupal.org/node/2257723 (there is a UI in progress for upgrading).

A migration framework

Let's have a look at each big piece of the migration framework:

Source plugins

Drupal provides an interface and base classes for the migration source plugins:

  • SqlBase : Base class for SQL source, you need to extend this class to use it in your migration.
  • SourcePluginBase : Base class for every custom source plugin.
  • MenuLink: For D6/D7 menu links.
  • EmptySource (id:empty): Plugin source that returns an empty row.
  • ...

Process plugins

This is the equivalent of the D7 MigrateFieldHandler, but it is not restricted to fields or to a particular field type.
Its purpose is to transform a raw value into something acceptable to your new site's schema.

The transform() method of the plugin is in charge of transforming your $value, or skipping the entire row if needed.
If the source property has multiple values, transform() runs on each one.

Drupal provides migration process plugins inside each core module that needs them (for the core upgrade).
To find out which ones exist and where they are located, you can use this command:
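For example, a simple search over the core tree (process plugin classes live in src/Plugin/migrate/process):

```
# Locate all migrate process plugins shipped with core modules.
find core/modules -path '*Plugin/migrate/process*' -name '*.php'
```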

Destination plugins

Destination plugins are the classes that handle where your data is saved in the new Drupal 8 site's schema.

Drupal provides a lot of useful destination classes :

  • DestinationBase: Base class for migrate destination classes.
  • Entity (id: entity): Base class for entity destinations.
  • Config (id: config): Class for importing configuration entities.
  • EntityBaseFieldOverride (id: entity:base_field_override): Class for importing base field overrides.
  • EntityConfigBase: Base class for importing configuration entities.
  • EntityImageStyle (id: entity:image_style): Class for importing image styles.
  • EntityContentBase (id: entity:%entity_type): The destination class for all content entities lacking a specific class.
  • EntityNodeType (id: entity:node_type): A class for migrating node types.
  • EntityFile (id: entity:file): Class for migrating files.
  • EntityFieldInstance: Class for migrating field instances.
  • EntityFieldStorageConfig: Class for migrating field storage.
  • EntityRevision, EntityViewMode, EntityUser, Book...
  • And more…

Builder plugins:

"Builder plugins implement custom logic to generate migration entities from migration templates. For example, a migration may need to be customized based on the data that is present in the source database; such customization is implemented by builders." - doc API

This is used in the user module: the builder creates a migration configuration entity based on a migration template and then adds field mappings to the process, based on the data in the source database. (@see /Drupal/user/Plugin/migrate/builder/d7/User)

Id map plugins:

"It creates one map and one message table per migration entity to store the relevant information." - doc API
This is where rollback, update and the map creation are handled.
Drupal provides the Sql plugin (@see /Drupal/migrate/Plugin/migrate/id_map/Sql) based on the core base class PluginBase.

And we have only been talking about core so far.
All the examples (that means docs for devs) are in core!

About now :

While there is *almost* a simple UI for Drupal-to-Drupal migration in Drupal 8, Migrate can be used for every kind of data input. Work is in progress on http://Drupal.org/project/migrate_plus to bring a UI and more source plugins, process plugins and examples. There is already a CSV source plugin and a pending patch for the code example. The primary goal of « migrate plus » is to have all the features (UI, sources, destinations...) of the Drupal 7 version.

Concrete migration

(migrations with Drupal 8 are made easy)

I need to migrate some content with image, attached files and categories from custom tables in an external SQL database to Drupal.

In short, to begin:

  • Drush 8 (dev master) and the Drupal Console installed.
  • Create the custom module (in the code, I assume the module name is “example_migrate”):
    $ drupal generate:module
    or create the module by yourself; you only need the info.yml file.
  • Enable the migrate and migrate_plus tools:
    $ drupal module:install migrate_tools
    or
    $ drush en migrate_tools
  • What we have in Drupal for the code example:
    • a taxonomy vocabulary: ‘example_content_category’
    • a content type ‘article’
    • some fields: body, field_image, field_attached_files, field_category
  • Define in settings.php the connection to your external database:
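For instance (the 'migrate' key and the credentials are illustrative):

```php
<?php

// settings.php: declare the external source database under its own key.
$databases['migrate']['default'] = [
  'driver'   => 'mysql',
  'database' => 'legacy_source',
  'username' => 'db_user',
  'password' => 'db_password',
  'host'     => '127.0.0.1',
  'port'     => '3306',
];
```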

We are going to tell the migrate source to use this database target. This happens in each migration configuration file, via a configuration property used by the SqlBase source plugin:
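In the migration's YAML, the source section points at the database key defined in settings.php (the plugin id is illustrative):

```yaml
source:
  plugin: example_file
  # Matches $databases['migrate'] in settings.php.
  key: migrate
```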

This is one of the reasons SqlBase has a wrapper for select queries, and why you need to call it in your source plugin, like $this->select(), instead of building the query with bare hands.

N.B. Each time you add a custom yml file in your custom module, you need to uninstall/reinstall the module for the config/install files to be imported. To avoid that, you can import a single migration config file by copy/paste in the configuration synchronization section under admin/config.

The File migration

The content has images and files to migrate. I suppose in this example that the source database has a unique id for each file, in a specific table that holds the file path to migrate.

We need a migration from the files to Drupal 8 file entities, so we write the source plugin for the file migration:

File: src/Plugin/migrate/source/ExampleFile.php
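A minimal sketch of such a source plugin, assuming a source table named legacy_file with fid, filename and filepath columns (all names are illustrative):

```php
<?php

namespace Drupal\example_migrate\Plugin\migrate\source;

use Drupal\migrate\Plugin\migrate\source\SqlBase;

/**
 * Source plugin for the legacy files.
 *
 * @MigrateSource(
 *   id = "example_file"
 * )
 */
class ExampleFile extends SqlBase {

  public function query() {
    // SqlBase's select() wrapper uses the database 'key' from the
    // migration configuration, i.e. our external database.
    return $this->select('legacy_file', 'f')
      ->fields('f', ['fid', 'filename', 'filepath']);
  }

  public function fields() {
    return [
      'fid' => $this->t('The file id'),
      'filename' => $this->t('The file name'),
      'filepath' => $this->t('The path on the local disk'),
    ];
  }

  public function getIds() {
    return [
      'fid' => ['type' => 'integer'],
    ];
  }

}
```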

We have the source class with our source fields, and each row generates a path to the file on my local disk.

But we need to transform the external file path into a local Drupal public file system URI; for that we need a process plugin. In our case, the process plugin will take the external filepath and filename as arguments and return the new Drupal URI.

File: src/Plugin/migrate/process/ExampleFileUri.php
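A sketch of that process plugin (the plugin id and destination folder are illustrative; a real implementation would typically also copy the physical file):

```php
<?php

namespace Drupal\example_migrate\Plugin\migrate\process;

use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;

/**
 * Builds a public:// URI from the source filepath and filename.
 *
 * @MigrateProcessPlugin(
 *   id = "example_file_uri"
 * )
 */
class ExampleFileUri extends ProcessPluginBase {

  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    // $value is expected to be [filepath, filename], as configured in
    // the migration's process section. $filepath would be used when
    // copying the physical file into the Drupal file system.
    [$filepath, $filename] = $value;

    return 'public://migrated/' . $filename;
  }

}
```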

We need another process plugin to transform the source date values into timestamps (created, changed). As the date format is the same across the source database, this plugin will be reused in the content migration for the same purpose:

File: src/Plugin/migrate/process/ExampleDate.php

For the destination we use the core plugin: entity:file.

Now we have to define our migration config entity file; this is where the source, destination and process (field mappings) are defined:

File: config/install/migrate.migration.example_file.yml
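A trimmed sketch of that file, wiring the plugins above to the core entity:file destination (ids and mappings are illustrative):

```yaml
id: example_file
label: Example file migration
source:
  plugin: example_file
  key: migrate
process:
  filename: filename
  uri:
    plugin: example_file_uri
    source:
      - filepath
      - filename
destination:
  plugin: 'entity:file'
```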

We are done with the file migration; you can execute it with the migrate_tools (from the migrate_plus project) drush command:
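Assuming the migration id above, something like:

```
$ drush migrate-import example_file
```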

The Term migration

The content has categories to migrate.
We need to import them as taxonomy terms. In this example, I suppose the categories don't have unique ids; the category name is just a column of the article table…

First we create the source :

File: src/Plugin/migrate/source/ExampleCategory.php

And we can now create the migration config entity file:

File: config/install/migrate.migration.example_category.yml

This is done; to execute it:

The Content migration

The content from the source has HTML content, a raw excerpt, an image, attached files, categories, and creation/updated dates in the format Y-m-d H:i:s.

We create the source plugin:

File: src/Plugin/migrate/source/ExampleContent.php

Now we can create the content migration config entity file:

File: config/install/migrate.migration.example_content.yml

Finally, execute it:

Group the migration

Thanks to migrate_plus, you can specify a migration group for your migrations.
You need to create a config entity for that:

File: config/install/migrate_plus.migration_group.example.yml

Then, in your migration config yaml file, make sure to have the migration_group line next to the label:
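Like this (ids are the illustrative ones used throughout this example):

```yaml
id: example_content
label: Example content migration
migration_group: example
```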

So you can use a single command to run the migrations together; the order of execution will depend on the migration dependencies:
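With migrate_tools, that's the group flag:

```
$ drush migrate-import --group=example
```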

I hope that you enjoyed our article.

Best regards,

Delta https://www.drupal.org/u/delta

Jun 09 2021

At Studio.gd we love the Drupal ecosystem, and it became very important to us to give back and participate.
Today we're proud to announce a new module that we hope will help you!

The Inline Entity Display module will help you handle the display of referenced entity fields directly in the parent entity.
For example, if you reference a taxonomy "Tags" on an Article node, you will be able, directly in the article's manage display, to display the tags' fields. It can become very useful with more complex referenced entities, like field collections for example.

SEE THE MODULE: https://www.drupal.org/project/inline_entity_display


Features

- You can control, for each compatible reference field instance, whether the fields from the referenced entities should be available as extra fields. Disabled by default.

- You can manage the visibility of the referenced entities' fields on the manage display form. Hidden by default.

- View modes are added to represent this context and manage custom display settings for the referenced entities' fields in this context: {entity_type}_{view_mode}. Example: "Node: Teaser" is used to render referenced entities' fields when you reference an entity into a node and view this node as a teaser. If there are no custom settings for this view mode, fields are rendered using the default view mode settings.

- Extra data attributes are added on the default field markup, so the fields of the same entity can be identified.

- Compatible with Field group on the manage display form.

- Compatible with Display Suite layouts on the manage display form.

Requirements

- Entity API
- One of the compatible reference fields module.

Tutorials

simplytest.me/project/inline_entity_display/7.x-1.x
The simplytest.me install of this module will come automatically with these modules: entity_reference, field_collection, field_group, display suite.

SEE THE MODULE: https://www.drupal.org/project/inline_entity_display

We are currently developing a similar module for Drupal 8, but more powerful and more flexible. Stay tuned!

Jun 08 2021

A website’s security is never (and should never be) an afterthought. A breached website causes a loss not just in revenue but also in reputation. A secure website is one that has been developed keeping in mind the different ways it could be broken into.

For this, we must ensure that the security checklist is handled both before and after the launch of the site. One of the most important steps to ensure a secure Drupal website is to make certain that users have and maintain strong passwords. Out of the box, Drupal does not enforce a strong password policy. By default, you can choose to set easy (and weak) passwords. But this behavior is not recommended, especially for users who have content administration and other higher-privilege permissions.

And that’s where the Drupal Password Policy module shines. It enables site admins to set strong password policies and enforce restrictions to a website. The Password policy module is a contributed Drupal module that is compatible with Drupal 9 as well.

Password Policy Module

Installing the Password Policy Module

Step 1: Install the Password Policy module using composer or download from here.

$ composer require drupal/password_policy

Note: Before installing the password policy module, make sure you have installed and enabled the Ctools module.

Step 2: Enable the downloaded module using drush or Drupal UI.

Through the Drupal UI, head to the module listing page. Under the Security tab, you will find the password policy module with submodules. Enable the first Password Policy module and then the submodules as per your requirement.


Configuration

To configure your recently installed and enabled Password policy module, go to Configuration → Security → Password Policy. Here you will add password policies for various roles with different constraints as per your requirement.

Password Reset

Now give a policy name and set the password reset days. If you don't want the password to expire, set Password reset days to 0.

General Info

After this, you can add constraints and configure them through the Constraints settings tab. Note that the submodules you enabled in the security modules listing will appear in the Constraints dropdown.

Configure Constraints

Let’s implement this with an example for better understanding. I need to add a password policy for the author role that enforces the following: the password must contain characters from at least 3 of the following character types (lowercase letters, uppercase letters, digits, special characters), must contain at least 1 special character, and must be at least 8 characters long.


Once you have configured the above constraints, apply the policy to the author role.

Apply to Roles

Click on the Finish button to create your new password policy. You have now successfully created a password policy for the author role.

Jun 08 2021

There's a neat little Drupal module called JSON Field, and recently I had a chance to play around with it. Out of the box, JSON Field is just a plain field where JSON data can be input and output on a web page. On its own, the module does not do much beyond printing the raw data formatted as JSON. However, I got to thinking it would be ideal to nicely format the data with HTML. In this article, I will show you how I accomplished this with both a preprocess function and some custom code in Twig.

Getting started

First, you'll want a Drupal 8 or 9 instance running. In the root of your project, run:

composer require drupal/json_field

Note, if you get an error, you may need to append a version number, for example:

composer require drupal/json_field:1.0-rc4

Next, enable the module and create a new field on an entity, for example on a page content type. When I created my field, I chose the option “JSON stored as raw JSON in database”.

Next, input some JSON data. For sample data, I like to use Mockaroo. (At a high level, I could envision using the Drupal Feeds module to import JSON data in bulk and map it to a JSON field, but I have not tested this.)

An example of the Mockaroo interface showing mock data being generated

Create a preprocess function

We are rendering this data in a node, so I have a basic node preprocess function set up below in a sub-theme of Olivero called Oliver. Within this, we will leverage Xdebug to examine the data up close. We write this code in our theme's .theme file.

Define and check for an instance of a node

Since we are working within a node context, the first thing we want to do is define the node and check to ensure that we are on a node page.

At the top of our file, we will add

use Drupal\node\NodeInterface;

Then, within the function, we will define the node.

  // Define the node.
  $node = \Drupal::routeMatch()->getParameter('node');

Now we check for an instance of a node:

  // If instance of a node.
  if ($node instanceof NodeInterface) {

Field PHP magic

I named my JSON field field_json_raw, and we first want to check that the field exists and that it is not empty. For this, I like to use a PHP magic method. A magic method is a shortcut of sorts to dig into the field data. In Drupal terms, this looks like:

  if ($node->hasField('field_json_raw') &&
    !$node->get('field_json_raw')->isEmpty()) {
    // Custom code here...
  }

The magic methods above are hasField and get.

Start up Xdebug

Next up, we will use Xdebug to examine the data output to see how we might prepare variables for our Twig template. Within the node preprocess function, I set an Xdebug breakpoint and start listening. Once Xdebug is running, we can evaluate the field expression using the magic method again, this time appending value, for example $node->get('field_json_raw')->value. That ends up with the plain value of the field:

[{"id": 1, "country": "Cuba", "animal_name": "Cape Barren goose", "description": "Curabitur convallis.", "animal_scientific": "Cereopsis novaehollandiae"}, {"id": 2, "country": "Vietnam", "animal_name": "Horned puffin", "description": "Pellentesque ultrices mattis odio." Etc...
Xdebug showing the plain value of the JSON output

Convert the value into an array

What we need to do now is convert that to a usable PHP array. We use the json_decode function:

 // Set a variable for the plain field value.
 $json_raw = $node->get('field_json_raw')->value;
 // Convert the data into an array using json decode.
 $json_array = json_decode($json_raw);

That ends up looking like this:

Xdebug showing the converted JSON array
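As an aside, json_decode() returns stdClass objects by default; passing TRUE as the second argument yields associative arrays instead. Twig's dot notation works with both forms, so the default is fine here. A standalone sketch, with the sample data shortened:

```php
<?php

// Shortened sample of the field value shown above.
$json_raw = '[{"id": 1, "country": "Cuba", "animal_name": "Cape Barren goose"}]';

// Default: json_decode() returns an array of stdClass objects.
$as_objects = json_decode($json_raw);
print $as_objects[0]->animal_name . PHP_EOL; // Cape Barren goose

// Passing TRUE as the second argument returns associative arrays instead.
$as_arrays = json_decode($json_raw, TRUE);
print $as_arrays[0]['animal_name'] . PHP_EOL; // Cape Barren goose
```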

Create a template variable

Now we have a nicely formatted array to loop through once inside a Twig template. The final piece is to check for valid JSON and set the template variable. Note, JSON Field already checks for valid JSON, but it's probably good practice to do this anyway.

 // Check for valid JSON.
 if ($json_array !== NULL) {
  // Create a variable for our template.
  $vars['json_data'] = $json_array;
 }

Finished preprocess function

Putting it all together, our entire preprocess function looks like this:

/**
 * Implements hook_preprocess_node().
 */
function oliver_preprocess_node(&$vars) {
  // Define the node.
  $node = \Drupal::routeMatch()->getParameter('node');
  // If instance of a node.
  if ($node instanceof NodeInterface) {
    // Check for the field and that it is not empty.
    if ($node->hasField('field_json_raw') &&
      !$node->get('field_json_raw')->isEmpty()) {
      // Set a variable for the field value.
      $json_raw = $node->get('field_json_raw')->value;
      // Convert the data into an array.
      $json_array = json_decode($json_raw);
      // Check for valid JSON.
      if ($json_array !== NULL) {
        // Create a variable for our template.
        $vars['json_data'] = $json_array;
      }
    }
  }
}

Render the JSON variable in Twig

Now we'll go into our Twig template and render the variable with a loop. At its most basic, it will look something like this:


    {% if content.field_json_raw | render %}
      {% for item in json_data %}
      {{ item.animal_name }}
      {{ item.animal_scientific }}
      {{ item.country }}
      {% endfor %}
    {% endif %}

Of course, we want to add HTML to this to make it look nicely styled, and here is where you can do almost anything you want. I opted for a data table:


    {% if content.field_json_raw | render %}
      <table>
        <caption>{{ 'Animals from around the world'|t }}</caption>
        <thead>
          <tr>
            <th>{{ 'Animal Name'|t }}</th>
            <th>{{ 'Scientific Name'|t }}</th>
            <th>{{ 'Country'|t }}</th>
          </tr>
        </thead>
        <tbody>
          {% for item in json_data %}
            <tr>
              <td>{{ item.animal_name }}</td>
              <td>{{ item.animal_scientific }}</td>
              <td>{{ item.country }}</td>
            </tr>
          {% endfor %}
        </tbody>
      </table>
    {% endif %}

The code above ends up looking like this:

The finished styled output of the JSON data Twig loop

Summary

And there you have it, nicely styled JSON data rendered in a Twig template. Using the JSON Field module might not be a common everyday item but it definitely fulfills a specific use case as outlined here.


Jun 07 2021

Last month we implemented the 'Resources' page on iias.asia. As you can see, you can filter the Resource nodes on the right of the page. This is a Drupal View with exposed filters, which are placed in a block via the Twig Tweak module.

There is a 'Region' filter and a 'Tags' taxonomy reference field, which are also used in other content types.

We only wanted to show terms that are actually in use in the drop-downs.

There are other pages (Views) that also implement this tag filter (like the Alumni page), but those pages have other content types, so the 'used tags' differ as well.

So here is how we limited the tags shown, per Drupal View; this article gave us a kickstart.

YOURMODULE.module:
/**
 * Implements hook_form_alter().
 */
function YOURMODULE_form_alter(&$form, \Drupal\Core\Form\FormStateInterface $form_state, $form_id) {
  // IDs of the forms we want to alter.
  $form_ids = [
    'views-exposed-form-resources-page-1' => 'resource',
    'views-exposed-form-reviews-page-1' => 'review',
    'views-exposed-form-available-for-review-page-1' => 'review',
  ];
  // Targeted taxonomy term fields.
  $select_fields = ['tid_2', 'field_tags_target_id'];
  // Continue only if current form_id is one in defined array.
  if (array_key_exists($form['#id'], $form_ids)) {
    // Loop through targeted fields.
    foreach ($select_fields as $select_field) {
      // Get 'terms in use', switch query on which field is currently in the loop.
      // This only works if you have 1 or 2 targeted filters.
      $available_terms = ($select_field == 'tid_2')
                        ? _get_available_terms_region($form_ids[$form['#id']], $form['#id'])
                        : _get_available_terms_tags($form_ids[$form['#id']], $form['#id']);
      // Continue if $available_terms exists.
      if (isset($available_terms)) {
        // Unset the existing list.
        unset($form[$select_field]['#options']);
        // Build new list with 'terms in use'.
        $form[$select_field]['#options']['All'] = '- Any -';
        foreach ($available_terms as $available_term) {
          $tid = $available_term[0];
          $name = $available_term[1];
          $form[$select_field]['#options'][$tid] = $name;
        }
      }
    }
  }
}
/**
 * Custom function to query and build new filter list.
 *
 * @param $node_type
 * @param $exposed_block_id
 * @return array
 */
function _get_available_terms_region($node_type, $exposed_block_id) {
  // Table name of the region field.
  $node_tags_table = 'node__field_profile_region';
  // Query data.
  $query = \Drupal::database()->select($node_tags_table, 'nft');
  $query->distinct();
  $query->join('taxonomy_term_field_data', 'tname', 'tname.tid = nft.field_profile_region_target_id');
  $query->join('node', 'node', 'node.nid = nft.entity_id');
  $query->fields('nft', ['field_profile_region_target_id']);
  $query->fields('tname', ['name']);
  $query->orderBy('tname.name');
  // Dynamically query node type.
  $query->condition('node.type', $node_type);
  if ($exposed_block_id == 'views-exposed-form-available-for-review-page-1') {
    $query->join('node__field_boolean_1', 'available', 'available.entity_id = nft.entity_id');
    $query->condition('available.field_boolean_1_value', 1);
  }
  $result = $query->execute();
  // Build term list.
  $term_list = [];
  while ($row = $result->fetchAssoc()) {
    array_push($term_list, [$row['field_profile_region_target_id'], $row['name']]);
  }
  // Return term list.
  return $term_list;
}

/**
 * Custom function to query and build new filter list.
 *
 * @param $node_type
 * @param $exposed_block_id
 * @return array
 */
function _get_available_terms_tags($node_type, $exposed_block_id) {
  // Table name of the tags field.
  $node_tags_table = 'node__field_tags';
  // Query data.
  $query = \Drupal::database()->select($node_tags_table, 'nft');
  $query->distinct();
  $query->join('taxonomy_term_field_data', 'tname', 'tname.tid = nft.field_tags_target_id');
  $query->join('node', 'node', 'node.nid = nft.entity_id');
  $query->fields('nft', ['field_tags_target_id']);
  $query->fields('tname', ['name']);
  $query->orderBy('tname.name');
  // Dynamically query node type.
  $query->condition('node.type', $node_type);
  if ($exposed_block_id == 'views-exposed-form-available-for-review-page-1') {
    $query->join('node__field_boolean_1', 'available', 'available.entity_id = nft.entity_id');
    $query->condition('available.field_boolean_1_value', 1);
  }
  $result = $query->execute();
  // Build term list.
  $term_list = [];
  while ($row = $result->fetchAssoc()) {
    array_push($term_list, [$row['field_tags_target_id'], $row['name']]);
  }
  // Return list.
  return $term_list;
}

We put the custom functions above in the same .module file, but maybe it's better to place them in a Drupal service, for example. Of course, you can always refactor this into one more dynamic function, but I'll leave that up to you :)
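As a starting point for that refactor: the two helper functions above differ only in their table and column names, which both follow Drupal's node__FIELD_NAME convention. A sketch of a combined version (the function name and the extra $field_name parameter are my own, untested):

```php
/**
 * Hypothetical refactor of the two query helpers above into one function.
 *
 * @param string $node_type
 *   The node bundle to filter on.
 * @param string $exposed_block_id
 *   The id of the exposed filter form.
 * @param string $field_name
 *   Machine name of the taxonomy reference field, e.g. 'field_tags'.
 *
 * @return array
 *   List of [tid, name] pairs for terms in use.
 */
function _get_available_terms($node_type, $exposed_block_id, $field_name) {
  // Derive table and column names from the field's machine name.
  $table = 'node__' . $field_name;
  $column = $field_name . '_target_id';
  // Query data.
  $query = \Drupal::database()->select($table, 'nft');
  $query->distinct();
  $query->join('taxonomy_term_field_data', 'tname', "tname.tid = nft.$column");
  $query->join('node', 'node', 'node.nid = nft.entity_id');
  $query->fields('nft', [$column]);
  $query->fields('tname', ['name']);
  $query->orderBy('tname.name');
  // Dynamically query node type.
  $query->condition('node.type', $node_type);
  if ($exposed_block_id == 'views-exposed-form-available-for-review-page-1') {
    $query->join('node__field_boolean_1', 'available', 'available.entity_id = nft.entity_id');
    $query->condition('available.field_boolean_1_value', 1);
  }
  $result = $query->execute();
  // Build the term list.
  $term_list = [];
  while ($row = $result->fetchAssoc()) {
    $term_list[] = [$row[$column], $row['name']];
  }
  return $term_list;
}
```

The two calls in the form alter would then become _get_available_terms(..., 'field_profile_region') and _get_available_terms(..., 'field_tags').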

I hope this gives you a head start, please let me know if you have any questions or improvements.

Written by Joris Snoek | Jun 07, 2021


Jun 07 2021

The overall challenge to wrangling the Webform module's issue queue is that everyone has different levels of experience in Drupal. Organizations are trying to build unique and complex digital experiences. If you combine this with the fact that we are an international community, the result is that issue queue tickets come in all shapes and sizes. Therefore, with each issue, I have to figure out the problem, priority, solution, and level of effort required to resolve the issue.

Jun 06 2021
Drupal 8 will be released on November 19 | Wunderkraut

Coincidence?

We're ready to celebrate and build (even more) amazing Drupal 8 websites. 
On November 19 we'll put our Drupal 8 websites in the spotlight...be sure to come back and check out our website.

By

Michèle Weisz

Share

Want to know more?

Contact us today

or call us +32 (0)3 298 69 98

© 2015 Wunderkraut Benelux

Jun 06 2021
77 of us are going | Wunderkraut

Drupalcon 2015

People from across the globe who use, develop, design and support the Drupal platform will be brought together during a full week dedicated to networking, Drupal 8 and sharing and growing Drupal skills.

As we have active hiring plans we’ve decided that this year’s approach should have a focus on meeting people who might want to work for Wunderkraut and getting Drupal 8 out into the world.
As Signature Supporting Partner we wanted as many people as possible to attend the event. We managed to get 77 Wunderkrauts on the plane to Barcelona! From Belgium alone we have an attendance of 17 people.
The majority of our developers will be participating in sprints (a get-together for focused development work on a Drupal project) giving all they got together with all other contributors at DrupalCon.

We look forward to an active DrupalCon week.  
If you're at DrupalCon and feel like talking to us, just look for the folks in Wunderkraut carrot t-shirts or give Jo a call on his cell phone: +32 476 945 176.


Jun 06 2021
Watch our epic Drupal 8 promo video | Wunderkraut

How Wunderkraut feels about Drupal 8

Drupal 8 is coming and everyone is sprinting hard to get it over the finish line. To boost contributor morale we’ve made a motivational Drupal 8 video that will get them into the zone and tackling those last critical issues in no time.

[embedded content]


Jun 06 2021

Once again Heritage day was a huge succes.

About 400 000 visitors visited Flanders' monuments and heritage sites last Sunday. The Open Monumentendag website received more than double the number of last year's visitors.

Visitors to the website organised their day out by using the powerful search tool we built that allowed them to search for activities and sights at their desired location. Not only could they search by location (province, zip code, city name, km range) but also by activity type, keywords, category and accessibility. Each search request was added as a (removable) filter for finding the perfect activity.

By clicking on the heart icon next to each activity, a favorites list was drawn up, ready for printing and taking along as a route map.

Our support team monitored the website making sure visitors had a great digital experience for a good start to the day's activities.

Did you experience the ease of use of the Open Monumentendag website?  Are you curious about the know-how we applied for this project?  Read our Open Monumentendag case.

Jun 06 2021
Very proud to be a part of it | Wunderkraut

Breaking ground as Drupal's first Signature Supporting Partner

Drupal Association Executive Director Holly Ross is thrilled that Wunderkraut is joining as first and says: "Their support for the Association and the project is, and has always been, top-notch. This is another great expression of how much Wunderkraut believes in the incredible work our community does."

As Drupal Signature Supporting Partner we commit ourselves to advancing the Drupal project and empowering the Drupal community. We're very proud to be a part of it, as we enjoy contributing to the Drupal ecosystem (especially when we can be quirky and fun, as CEO Vesa Palmu states).

Our contribution allowed the Drupal Association to:

  • Complete Drupal.org's D7 upgrade, so they can now enhance new features
  • Hire a full engineering team committed to improving Drupal.org's infrastructure
  • Set the roadmap for Drupal.org's success.

First signature partner announcement in the Drupal Newsletter


Jun 06 2021

But in this post I'd like to talk about one of the disadvantages that here at Wunderkraut we pay close attention to.

A consequence of the ability to build features in more than one way is that it's difficult to predict how different people interact (or want to interact) with them. As a result, companies end up delivering solutions to their clients that, although they seem perfect, turn out, in time, to be less than ideal and sometimes outright counterproductive.

Great communication with the client and interest in their problems goes a long way towards minimising this effect. But sometimes clients realise that certain implementations are not perfect and could be made better. And when that happens, we are there to listen, adapt and reshape future solutions by taking into account these experiences. 

One such recent example involved the use of a certain WYSIWYG library from our toolkit on a client website. Content editors were initially happy with the implementation before they actually started using it to the full extent. Problems began to emerge, leading to editors spending way more time than they should have on editing tasks. The client signalled this problem to us, which we then proceeded to correct by replacing said library. This resulted in our client becoming happier with the solution, much more productive and less frustrated with their experience on their site.

We learned an important lesson in this process and we started using that new library on other sites as well. Polling our other clients on the performance of the new library revealed that indeed it was a good change to make. 

Jun 06 2021

A few years ago most of the requests started with : "Dear Wunderkraut, we want to build a new website and ... "  - nowadays we are addressed as "Dear Wunderkraut, we have x websites in Drupal and are very happy with that, but we are now looking for a reliable partner to support & host ... ".

By the year 2011 Drupal had been around for just about 10 years. It was growing and changing at a fast pace. More and more websites were being built with it. Increasing numbers of people were requesting help and support with their website. And though there were a number of companies flourishing in Drupal business, few considered specific Drupal support an interesting market segment. Throughout 2011 Wunderkraut Benelux (formerly known as Krimson) was tinkering with the idea of offering support, but it was only when Drupal newbie Jurgen Verhasselt arrived at the company in 2012 that the idea really took shape.

Before his arrival, six different people, all with different profiles, were handling customer support in a weekly rotation system. This worked poorly. A developer trying to get his own job done plus deal with a customer issue at the same time was getting neither job done properly. Tickets got lost or forgotten, customers felt frustrated and problems were not always fixed. We knew we could do better. The job required uninterrupted dedication and constant follow-up.

That’s where Jurgen came in the picture. After years of day job experience in the graphic sector and nights spent on Drupal he came to work at Wunderkraut and seized the opportunity to dedicate himself entirely to Drupal support. Within a couple of weeks his coworkers had handed over all their cases. They were relieved, he was excited! And most importantly, our customers were being assisted on a constant and reliable basis.

By the end of 2012 the first important change was brought about, i.e. to have Jurgen work closely with colleague Stijn Vanden Brande, our Sys Admin. This team of two ensured that many of the problems that arose could be solved extremely efficiently. Wunderkraut being the hosting party as well as the Drupal party means that no needless discussions with the hosting took place and moreover, the hosting environment was well-known. This meant we could find solutions with little loss of time, as we know that time is an important factor when a customer is under pressure to deliver.

In the course of 2013 our support system went from a well-meaning but improvised attempt to help customers in need to a fully qualified division within our company. What changed? We decided to classify customer support issues into: questions, incidents/problems and change requests and incorporated ITIL based best practices. In this way we created a dedicated Service Desk which acts as a Single Point of Contact after Warranty. This enabled us to offer clearly differing support models based on the diverse needs of our customers (more details about this here). In addition, we adopted customer support software and industry standard monitoring tools. We’ve been improving ever since, thanks to the large amount of input we receive from our trusted customers. Since 2013, Danny and Tim have joined our superb support squad and we’re looking to grow more in the months to come.

When customers call us for support we do quite a bit more than just fix the problem at hand. First and foremost, we listen carefully and double-check everything to ensure that we understand him or her correctly. This helps to take the edge off the huge pressure our customer may be experiencing. Beyond that, we have a list of do's and don'ts for valuable support.

  • Do a quick scan of possible causes by getting a clear understanding of the symptoms
  • Do look for the cause of course, but also assess possible quick-fixes and workarounds to give yourself time to solve the underlying issue
  • Do check if it’s a pebkac
  • and finally, do test everything within the realm of reason.

The most basic don'ts that we swear by are:

  • Never, ever apply changes to the foundation of a project.
  • Never let support cover a problem that takes more than two days to fix; at that point, we escalate to development.

We are so dedicated to offering superior support to customers that, on explicit request, we cater to our customers' customers. Needless to say, our commitment to support has yielded remarkable results and plenty of customer satisfaction (which makes us happy, too).

Jun 06 2021

If your website is running Drupal 6, chances are it’s between 3 and 6 years old now, and once Drupal 8 comes out, support for Drupal 6 will drop. Luckily the support window has recently been prolonged for another 3 months after Drupal 8 comes out. But still, that leaves you only a small window of time to migrate to the latest and greatest. But why would you?

There are many great things about Drupal 8 that will have something for everyone to love, but that should not be the only reason to upgrade. The tool itself will not magically improve the traffic to your site, nor convert its users into buying more stuff; it’s how you use the tool.

So if your site is running Drupal 6 and hasn’t had large improvements in the last years, it might be time to investigate whether it needs a major overhaul to be up to par with the competition. If that’s the case, think about brand, concept, design, UX and all of that first to understand how your site should work and what it should look like; only then can we understand whether to go for Drupal 7 or Drupal 8.

If your site is still running well you might not even need to upgrade! Although community support for Drupal 6 will end a few months after Drupal 8 release, we will continue to support Drupal 6 sites and work with you to fix any security issues we encounter and collaborate with the Drupal Security Team to provide patches.

My rule of thumb is that if your site uses only core Drupal and a small set of contributed modules, it’s OK to build a new website on Drupal 8 once it comes out. But if you have a complex website running on many contributed and custom modules, it might be better to wait a few months, maybe a year, until everything becomes stable.

Jun 06 2021

So how does customer journey mapping work?

In this somewhat simplified example, we map the customer journey of somebody signing up for an online course. If you want to follow along with your own use case, pick an important target audience and a customer journey that you know is problematic for the customer.

1. Plot the customer steps in the journey

customer journey map 1

Write down the series of steps a client takes to complete this journey. For example “requests brochure”, “receives brochure”, “visits the website for more information”, etc. Put each step on a coloured sticky note.

2. Define the interactions with your organisation

customer journey map 2

Next, for each step, determine which people and groups the customer interacts with, like the marketing department, copywriter and designer, customer service agent, etc. Do the same for all objects and systems that the client encounters, like the brochure, website and email messages. You’ve now mapped out all people, groups, systems and objects that the customer interacts with during this particular journey.

3. Draw the line

customer journey map 3

Draw a line under the sticky notes. Everything above the line is “on stage”, visible to your customers.

4. Map what happens behind the curtains

customer journey map 4

Now we’ll plot the backstage parts. Use sticky notes of a different colour and collect the persons, groups, actions, objects and systems that support the on stage part of the journey. In this example these would be the marketing team that produces the brochure, the printer, the mail delivery partner, the website content team, IT departments, etc. This backstage part is usually more complex than the on stage part.

5. How do people feel about this?

Customer journey map 5

Now we get to the crucial part. Mark the parts that work well from the perspective of the person interacting with it with green dots. Mark the parts where people start to feel unhappy with yellow dots. Mark the parts where people get really frustrated with red. What you’ll probably see now is that your client starts to feel unhappy much sooner than employees or partners. It could well be that on the inside people are perfectly happy with how things work while the customer gets frustrated.

What does this give you?

Through this process you can immediately start discovering and solving customer experience issues because you now have:

  • A user centred perspective on your entire service/product offering
  • A good view on opportunities for innovation and improvement
  • Clarity about which parts of the organisation can be made responsible to produce those improvements
  • In a shareable format that is easy to understand

Mapping your customer journey is an important first step towards customer centred thinking and acting. The challenge is learning to see things from your customers' perspective, and that's exactly what a customer journey map enables you to do. Based on the opportunities you identified from the customer journey map, you’ll want to start integrating the multitude of digital channels, tools and technology already in use into a cohesive platform. In short: a platform for digital experience management! That's the topic of our next post.

Jun 06 2021

In combination with the FacetAPI module, which allows you to easily configure a block or a pane with facet links, we created a page displaying search results containing contact type content and a facets block on the left hand side to narrow down those results.

One of the struggles with FacetAPI is the URLs of the individual facets. While Drupal turns the ugly GET 'q' parameter into clean URLs, FacetAPI just concatenates any extra query parameters, which leads to Real Ugly Paths. The FacetAPI Pretty Paths module tries to change that by rewriting those into human friendly URLs.

Our challenge involved altering the paths generated by the facets, but with a slight twist.

Due to the project's architecture, we were forced to replace the full view mode of a node of the bundle type "contact" with a single search result based on the nid of the visited node. This was a cheap way to avoid duplicating functionality and wasting precious time. We used the CTools custom page manager to take over the node/% page and added a variant which is triggered by a selection rule based on the bundle type. The variant itself doesn't use the panels renderer but redirects the visitor to the Solr page, passing the nid as an extra argument in the URL. This resulted in a path like this: /contacts?contact=1234.

With this snippet, the contact query parameter is passed to Solr which yields the exact result we need.

/**
 * Implements hook_apachesolr_query_alter().
 */
function myproject_apachesolr_query_alter($query) {
  if (!empty($_GET['contact'])) {
    $query->addFilter('entity_id', $_GET['contact']);
  }
}
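Since the value comes straight from the query string, a slightly hardened variant (my own addition, not part of the original implementation) casts it to an integer before handing it to Solr:

```php
/**
 * Implements hook_apachesolr_query_alter().
 *
 * Variant of the snippet above that casts the user-supplied nid to an
 * integer before filtering.
 */
function myproject_apachesolr_query_alter($query) {
  if (!empty($_GET['contact'])) {
    $query->addFilter('entity_id', (int) $_GET['contact']);
  }
}
```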

The result page with our single search result still contains facets in a sidebar. Moreover, the URLs of those facets looked like this: /contacts?contact=1234&f[0]=im_field_myfield..... Now we faced a new problem. The ?contact=1234 part was conflicting with the rest of the search query. This resulted in an empty result page, whenever our single search result, node 1234, didn't match with the rest of the search query! So, we had to alter the paths of the individual facets, to make them look like this: /contacts?f[0]=im_field_myfield.

This is how I approached the problem.

If you look carefully in the API documentation, you won't find any hooks that allow you to directly alter the URLs of the facets. Gutting the FacetAPI module is quite daunting. I started looking for undocumented hooks, but quickly abandoned that approach. Then, I realised that FacetAPI Pretty Paths actually does what we wanted: alter the paths of the facets to make them look, well, pretty! I just had to figure out how it worked and emulate its behaviour in our own module.

Turns out that most of the facet generating functionality is contained in a set of adaptable, loosely coupled, extensible classes registered as CTools plugin handlers. Great! This means that I just had to find the relevant class and override those methods with our custom logic while extending.

Facet URLs are generated by classes extending the abstract FacetapiUrlProcessor class. The FacetapiUrlProcessorStandard extends and implements the base class and already does all of the heavy lifting, so I decided to take it from there. I just had to create a new class, implement the right methods and register it as a plugin. In the folder of my custom module, I created a new folder plugins/facetapi containing a new file called url_processor_myproject.inc. This is my class:

/**
 * @file
 * A custom URL processor for MyProject.
 */

/**
 * Extension of FacetapiUrlProcessor.
 */
class FacetapiUrlProcessorMyProject extends FacetapiUrlProcessorStandard {

  /**
   * Overrides FacetapiUrlProcessorStandard::normalizeParams().
   *
   * Strips the "q" and "page" variables from the params array.
   * Custom: Strips the 'contact' variable from the params array too
   */
  public function normalizeParams(array $params, $filter_key = 'f') {
    return drupal_get_query_parameters($params, array('q', 'page', 'contact'));
  }

}

I registered my new URL Processor by implementing hook_facetapi_url_processors in the myproject.module file.

/**
 * Implements hook_facetapi_url_processors().
 */
function myproject_facetapi_url_processors() {
  return array(
    'myproject' => array(
      'handler' => array(
        'label' => t('MyProject'),
        'class' => 'FacetapiUrlProcessorMyProject',
      ),
    ),
  );
}

I also included the .inc file in the myproject.info file:

files[] = plugins/facetapi/url_processor_myproject.inc

Now I had a new registered URL Processor handler. But I still needed to hook it up with the correct Solr searcher on which the FacetAPI relies to generate facets. hook_facetapi_searcher_info_alter allows you to override the searcher definition and tell the searcher to use your new custom URL processor rather than the standard URL processor. This is the implementation in myproject.module:

/**
 * Implements hook_facetapi_searcher_info_alter().
 */
function myproject_facetapi_searcher_info_alter(array &$searcher_info) {
  foreach ($searcher_info as &$info) {
    $info['url processor'] = 'myproject';
  }
}

After clearing the cache, the correct path was generated per facet. Great! Of course, the paths still don't look pretty and contain those way too visible and way too ugly query parameters. We could enable the FacetAPI Pretty Path module, but by implementing our own URL processor, FacetAPI Pretty Paths will cause a conflict since the searcher uses either one or the other class. Not both. One way to solve this problem would be to extend the FacetapiUrlProcessorPrettyPaths class, since it is derived from the same FacetapiUrlProcessorStandard base class, and override its normalizeParams() method.
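A minimal sketch of that approach (untested; it assumes the FacetAPI Pretty Paths module is enabled so that FacetapiUrlProcessorPrettyPaths is available, and the class would still need to be registered as a handler just like the one above):

```php
/**
 * Untested sketch: strips the 'contact' parameter while keeping the
 * pretty path behaviour from FacetAPI Pretty Paths.
 */
class FacetapiUrlProcessorMyProjectPretty extends FacetapiUrlProcessorPrettyPaths {

  /**
   * Overrides normalizeParams() to also strip the 'contact' variable.
   */
  public function normalizeParams(array $params, $filter_key = 'f') {
    return drupal_get_query_parameters($params, array('q', 'page', 'contact'));
  }

}
```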

But that's another story.

Jun 04 2021

Creating a dynamic PDF file is often an important part of a project. The crucial thing is to find the right solution that meets your expectations and requirements. In this article, we’ll show you one of the most popular PDF generation modules for Drupal, which will help you generate, view or download reports, articles and invoices in PDF.

Dates

The first version of the module was released on 22 January 2015, and the latest update on 18 June 2020.

The module can be installed on:

  • Drupal 7 (7.x-1.5 stable release version),
  • Drupal 8 and 9 (8.x-2.2 stable release version).

All stable releases for this project are covered by the security advisory policy.

Module’s popularity

According to the usage statistics from the module's page, it’s currently used by around 12 thousand websites.

Module’s creators

The module is created and maintained by benjy.

What is the module used for?

Entity Print allows you to print (save) any Drupal entity (Drupal 7 and 8) or View (Drupal 8+ only) to a PDF file. It uses the PDF engines based on popular PHP libraries like Dompdf, Phpwkhtmltopdf and TCPDF.

Unboxing

  1. To install Entity Print, go to its webpage or use Composer: 
    composer require "drupal/entity_print:^2.2"
  2. Install Dompdf via Composer: 
    composer require "dompdf/dompdf:0.8.0" 
  3. Additionally, you can install the Wkhtmltopdf and TCPDF engines: 
    composer require "mikehaertl/phpwkhtmltopdf:~2.1" 
    composer require "tecnickcom/tcpdf:~6" 
  4. Enable the Entity Print module.
  5. Grant permissions for non-admin users to download the PDFs.
  6. Optionally, enable the Entity Print Views module.
The installation settings of the Entity Print Drupal module

 

Configure Entity Print

Let’s check what this module allows us to configure.

  • Enable Default CSS - provide some very basic styles.
  • Force Download - force the browser to download the PDF file.
  • PDF - select the default PDF engine for printing.
  • Paper Size - change page size to print the PDF to.
  • Paper Orientation - change orientation to Landscape or Portrait.
  • Enable HTML5 Parser - Dompdf doesn't work without this option enabled.
  • Disable Log - disable Dompdf logging to log.html in Drupal's temporary directory.
  • Enable Remote URLs - must be enabled for CSS and Images to work unless you manipulate the source manually.
  • SSL CONFIGURATION - may need adjusting during development; you shouldn't change it in production.
  • HTTP AUTHENTICATION - if your website is behind HTTP Authentication, you can set the username/password.
  • EPub - select the default EPub engine for printing (currently not supported, you can follow the opened issue).
  • Word Document - select the default Word Document engine for printing (currently not supported, follow the opened issue).
The Entity Print module's configuration

 

The preferable PDF engine is Dompdf, which is (mostly) a CSS 2.1 compliant HTML layout and rendering engine written in PHP. It’s a style-driven renderer so it’ll download and read external stylesheets, inline style tags, and the style attributes of individual HTML elements. It also supports most presentational HTML attributes.

Module’s use

After the module is configured properly and all permissions are set, we can start exporting entities and views to a PDF file.

Exporting entities

Entity Print adds a disabled field to the view modes of each content type. The field has a default label value of "View PDF". To make this field visible on any content type, enable it on the Manage Display page (/admin/structure/types/manage/[content_type]/display).

Enabling the View PDF field on the Manage Display page of the Entity Print module

From now on, we'll have a View PDF button added to our content type. The button's URL is:

https://example.com/print/pdf/[entity_type]/[entity_id]

(i.e. https://example.com/print/pdf/node/1)

View PDF option visible in a PDF file

 

Here is the PDF we want to save or print:

An example of a PDF file to save or print in the Entity Print Drupal module

 

Exporting Views

With the Entity Print Views module enabled, a "Print" Global option can be added to the View's header or footer.

The URL button looks like:

https://example.com/print/view/pdf/[view_id]/[display_id]

Adding a Print Global option to the View's header in the Entity Print module

 

Debugging

For easy and quick debugging, Entity Print provides us with an HTML version of the entities sent to the PDF engine. The URL looks much the same as above, only with /debug appended.

E.g. https://example.com/print/pdf/[entity_type]/[entity_id]/debug.

How to hide the View PDF link in the exported file

By default, the "View PDF" link is also added to the PDF itself. To remove it, go to the Manage Display page for the specific content type. In the Custom Display Settings section, enable the PDF view mode.

Custom display settings in the Entity Print module

 

Now you can disable the Entity Print field on that view mode. Next time you export the PDF, you won’t have the "View PDF" link included.

By default, the PDF view mode is installed for nodes only. If you have any custom entities, you must first create a PDF view mode for the specific entity type via "admin/structure/display-modes/view". You can name it whatever you like, but the machine name should be "pdf", as it will be automatically prefixed with the entity name.

Styling the PDF from your theme

The following examples show how to register entity_print CSS files from your_theme_name.info.yml file. You can do it for any entity type or even view.

#Add css library to all nodes:

entity_print: 
  node: 
    all: 'YOUR_THEME_NAME/print-styling' 

#Add css library to the Node entities but only article bundles:

entity_print: 
  node: 
    article: 'YOUR_THEME_NAME/print-styling' 

#Add css library to all views:

entity_print: 
  view: 
    all: 'YOUR_THEME_NAME/print-styling' 

Don’t forget to define a CSS library in your YOUR_THEME_NAME.libraries.yml:

print-styling: 
  version: VERSION 
  css: 
    theme: 
      css/print-style.css: { } 

 

Modifying templates

All the normal ways to override templates are available. Using theme hook suggestions, you can create a Twig template and modify the content of any specific entity (e.g. node--blog-post--pdf.html.twig). In this template, you can modify the markup as you normally do by using {{ content.field_example }} or {{ node.field_example }}.

You can modify the HTML output of the entire Entity Print template. Just copy the entity-print.html.twig file from its base location into your theme folder. Using theme hook suggestions, you can also create entity-print--node--[content-type].html.twig file.


 
<!DOCTYPE html>
<html>
  <head>
    <title>{{ title }}</title>
    {{ entity_print_css }}
  </head>
  <body>
    {{ content }}
  </body>
</html>

Make sure the {{ entity_print_css }} is always included inside the head tag in your custom Twig template file. Otherwise, your custom CSS libraries won't work.

Custom PDF Engines

It’s worth mentioning that the Entity Print PDF engines are pluggable. This means you can easily implement your own engines. To do so, you need to create a class in your_module_name/src/Plugin/EntityPrint/NewPdfEngine.php that implements Drupal\entity_print\Plugin\PrintEngineInterface with all its required methods. After that, your plugin will be available in the engines select list on the configuration page.
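The plugin shape can be sketched as follows. This is a hypothetical skeleton, not a drop-in implementation: the namespace, annotation keys, and method bodies are assumptions, and the full method list should be checked against the PrintEngineInterface definition in your installed entity_print version.

/**
 * Hypothetical skeleton of a custom print engine plugin. Only the most
 * important methods are sketched; check PrintEngineInterface for the
 * complete list your class must implement.
 *
 * @PrintEngine(
 *   id = "new_pdf_engine",
 *   label = @Translation("New PDF Engine"),
 *   export_type = "pdf"
 * )
 */
class NewPdfEngine implements PrintEngineInterface {

  public function addPage($content) {
    // Hand a page of rendered HTML to your PDF library here.
  }

  public function send($filename, $force_download = TRUE) {
    // Stream the generated document to the browser.
  }

  public function getBlob() {
    // Return the raw document contents as a string.
  }

  public static function dependenciesAvailable() {
    // Return TRUE when the underlying PDF library is installed,
    // e.g. by checking that its main class exists.
    return class_exists('YourPdfLibrary');
  }

  public static function getInstallationInstructions() {
    return t('Install the underlying PDF library with Composer.');
  }

  // ... plus the remaining interface methods.
}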

Alternative solutions

There are a few other options for making a PDF of an entity in Drupal that you might want to investigate.

PrintFriendly & PDF is a plugin for Drupal 7, 8 and 9. Below you can see its features list:

  • It’s fully customizable.
  • With the On-Page-Lightbox option, the PDF file opens in a Lightbox.
  • It lets you print or get a PDF.
  • You can edit the page before printing or getting a PDF, for example by removing the images and paragraphs you don't need.

Printer and PDF versions for Drupal 8+ is a module that also works for Drupal 9. It allows you to generate the following printer-friendly versions of any node:

  • webpage printer-friendly version (at /node/[node_id]/printable/print),
  • PDF version (at /node/[node_id]/printable/pdf).

Supported libraries:

  • mPDF,
  • TCPDF,
  • wkhtmltopdf,
  • dompdf.

Summary

The Entity Print module provides multiple methods of exporting entities to PDFs via different PDF engines. It's a very flexible and customizable module that will help you dynamically generate your articles, invoices, or other content related to your Drupal website. The module has full test coverage and is ready to be used in production on Drupal 7, 8 and 9.

Jun 04 2021

Continuing our series highlighting the work of initiative groups across the Drupal community, this month we are catching up with six more groups:

  1. DrupalCon Europe Advisory Group, by Imre Gmelig Meijling
  2. Drupal Trivia, by Stella Power
  3. Bugsmash, by Kristen Pol
  4. Decoupled Menus, by Théodore Biadala
  5. Project Browser, by Mihaela Jurković
  6. Security Team, by Tim Lehnen

The takeaway message this month is that there are some key opportunities to get involved and help grow the Drupal community with fun and interesting contributions. Certainly, helping Stella with the curation of questions for one of the most fun parts of the Drupal year, Trivia, has to be a highlight!

If you spot a place where your skills fit, don’t hesitate to contact either the group’s spokesperson, or Community Liaison, Rachel Lawson.

What have been your priorities in the last three months?

Together with DrupalCon Europe Advisory Group, Kuoni, the Drupal Association and many local camp organisers and passionate Drupal volunteers from Europe and around the world, we have been working on DrupalCon Europe 2021.

While still being a COVID year and people getting weary of online events and sitting behind screens all day, DrupalCon will happen. We all need that place to connect and share, albeit online.

European Drupal camps are uniting with DrupalCon so Drupal enthusiasts will have one major conference to go to. Speakers, sponsors and attendees won't have to spend so much effort organizing online events themselves. Instead, they can team up with the DrupalCon team and the international community to create one big experience, with a lower threshold than yet another standalone online event, plus a bigger, international reach.

The world will see Drupal still going strong at DrupalCon and they will get a chance to connect with various regions and Drupal communities.

And what has been your greatest success in the last three months?

It's been so great to see European Drupal Associations and community leaders get together to talk about maintaining a strong Drupal experience in Europe. Getting European countries as well as other international communities working together to create a united Eurovision Drupal experience is something that is really great!

What has been your greatest challenge in the last three months?

It's been a challenge to align European communities and camps and to get as many as possible to team up with DrupalCon 2021. It's not so much about making money or spending time to create the experience, but rather about having one strong Drupal message and letting the world know Drupal is here to stay.

Do you have a "call to action" you want to make to the Drupal Community?

Please take a look at where DrupalCon and the local camps are at and see if it's possible for your camp, association or local community to team up. This can be very small and with little effort.

It's about uniting in common cause: the more camps will underscore this by teaming up, the stronger Drupal will come out of it.

What have been your priorities in the last three months?

The main priority in the last three months was, of course, writing the questions for Trivia Night at DrupalCon North America, as well as creating the picture clues.

And what has been your greatest success in the last three months?

Another very successful Drupal Trivia night at DrupalCon North America.

What has been your greatest challenge in the last three months?

The greatest challenge was writing the quiz questions. It takes a lot of work, not just in writing the questions themselves, but also formulating the rounds so you hit the right mix of topics and the right difficulty level. Of course, the switch to the online/virtual format has also been a bit of a challenge - a different way of writing the questions is required.

Do you have a "call to action" you want to make to the Drupal Community?

Yes! I'm looking for someone else to help write the questions! It takes a fair bit of preparation work, so I'd love for someone to contribute their time to help write and curate the questions, with a view to taking on the role of primary question curator for one of the DrupalCons each year.

What have been your priorities in the last three months?

Recent priorities for the Bug Smash team have been to prepare for the DrupalCon North America initiative keynote and contribution event in April, including recruiting mentors, as well as our regular activities of issue triage and bug smashing. One fun thing we do each meeting is we nominate issue "targets" for the team to work on. These issues cover the gamut from views to form caching to ajax to media and so much more.

To see recent issue targets, our meeting transcripts are available in the issue queue.

But, one of the great things about the Bug Smash Initiative is that you have complete freedom to work on what you want and there are a wide variety of issues to choose from. Each person focuses on whatever interests them or fits within their available time. One person like mohit_aghera may focus on writing tests while others may focus on issue triage or accessibility reviews or testing.

And what has been your greatest success in the last three months?

The Bug Smash team has had some great successes over the last few months. The DrupalCon contribution event was a great way to mentor new contributors and onboard them to the initiative. larowlan ran an introduction workshop based on pameeela's Bug Smash presentation previously given at the Sydney Drupal user group. As a result of DrupalCon's success, we've had new team members jump into our Slack channel and start contributing!

Looking at the issue queue, there were more than 600 core issues worked on in the last 3 months with almost half of those fixed or closed. One fun issue that got fixed was from 2005! Big thanks to lendude and quietone for continuing to improve our bug statistics tools so we can better understand our initiative's impact. A fun fact from quietone during the May initiative meetings was there had been a ~584 year reduction in total number of years of all open bugs in the previous month! Whoa!

What has been your greatest challenge in the last three months?

At the biweekly Bug Smash meeting, we always ask about people's challenges during the previous fortnight. Some of them are fun personal distractions like new puppies or watching America's Cup, or not so fun life things like dealing with expensive car problems. Sometimes it's other Drupal activities that take people away from Bug Smash work like other initiatives or April's full-on DrupalFest activities.

From a more tactical viewpoint, finding "low hanging fruit" issues can sometimes be a challenge when we are trying to find quick wins. Or, we'll end up focusing on new issues rather than trying to get issues we've already worked on "over the fence". But, there is one challenge that you, the reader, can help with, and that's getting issues reviewed. If you have time to help, manually testing and reviewing fixes is immensely helpful. Search the queue for anything tagged as Bug Smash Initiative with status of "Needs review".

But, all in all, the number one challenge for the Bug Smash team is usually time… not enough of it. And, often, that's due to work being particularly busy. We highly encourage organizations who benefit from Drupal to free up some of their team's time to help on initiatives like Bug Smash. And, we highly recommend you read our very own Derek Wright's blog post on why organizations should support the Bug Smash Initiative.

Do you have a "call to action" you want to make to the Drupal Community?

A very simple call to action is simply attending one of the Bug Smash meetings. We meet every two weeks in Slack and it's asynchronous, so you can still participate afterwards within a 24 hour window. They are very well-organized thanks to jibran who typically runs the meetings and are transcribed by quietone, so everyone can get credit for participating. You can introduce yourself and ask questions, and we'll help you get acclimated. You can also review the helpful Bug Smash Initiative documentation to learn more (https://www.drupal.org/community-initiatives/bug-smash-initiative/workin...), thanks largely to the writing efforts of dww with help from other team members.

The Bug Smash docs specifically have a section on "how to help" (https://www.drupal.org/community-initiatives/bug-smash-initiative/workin...) but, as mentioned above in our challenges, if you are keen on helping review, that would be a great focus. Issue review involves reviewing code and/or manually testing that the latest code fix works.

Based on the success of DrupalCon North America, we hope to have more mentored contribution events this year, so keep your eyes open or pop into the bugsmash Slack channel to check in on the status. If you are interested in helping mentor at these events, we very much welcome that contribution as well. Hope to see you soon!

What have been your priorities in the last three months?

We published the results of the decoupled survey that shed some light into what sort of things people use and expect from Drupal when used in a decoupled fashion.

On the Technical side we have the decoupled menus module published that provides the missing pieces for API consumption of menus. This was started in contrib to try out a few things, when things are stable enough we’ll propose it for addition in Drupal Core to provide this for everyone. There are also a couple of helper JS libraries: Decoupled menu parser, Linkset.

The team also spent a significant amount of time preparing for DrupalCon, making sure we have things for people to do, test, and help.

And what has been your greatest success in the last three months?

The thing that tied everything together was DrupalCon, where Baddý Sonja Breidert, Liam Hockley, Gabe Sullice, Juanluis Lozano, Brian Perry, Joe Shindelar did an amazing job of preparing and running the Decoupled Day.

We’ve had very good participation in the different workshops and some great examples of menu consumption based on the decoupled menus module.

A solid start at the documentation structure was made as well.

What has been your greatest challenge in the last three months?

Producing documentation has been a challenge; people prefer writing code!
DrupalCon helped us see what needs to be documented, while the survey showed what people expect and what kind of tools they use.

Do you have a "call to action" you want to make to the Drupal Community?

By now most of the technical pieces are present, and we need people to take charge of building the documentation for all this wonderful code so that it's accessible to more people. This will in turn help us streamline the experience of consuming menu data from the Drupal API.

You can head over to the #decoupled-menus-initiative Slack channel and the "Start an end-user-friendly technical documentation" issue.

What have been your priorities in the last three months?

The Project Browser initiative kick off meeting was only 10 days ago! Getting started has been the main thing so far, and we can't wait to report on our further developments.

And what has been your greatest success in the last three months?

We have formed a group of people interested in contributing their ideas and efforts to the Project Browser. Several people stepped up to coordinate initial subtasks, and we started a list of potential features, the audiences they may cater to, and what might go into the Minimum Viable Product (MVP).

Our conclusion was that the Project Browser should help make Drupal more attractive to a general audience of site owners/builders as its first priority, by enabling them to easily expand the Drupal core site features through additional modules. The Project Browser will offer any audience a list of modules that is easy to understand, contains the relevant information, and filters out and sorts the modules reliably.

What has been your greatest challenge in the last three months?

The Project Browser needs to cater to multiple audiences at the same time. Our most important audience is the general public, or framed more specifically, site owners. Discovering their pain points is always a challenge. Without understanding exactly what problem we're solving for them it's difficult to prioritize the features we should focus on.

Do you have a "call to action" you want to make to the Drupal Community?

The Project Browser initiative needs input from site owners and builders who aren't experienced with the technical aspects of Drupal. If you are able to conduct interviews with some of those people (or ARE some of those people!), please come to our Slack channel (#project-browser) and share your wishes/experiences about what would make it easier to expand your Drupal site with more features.

Editor’s note: Tim Lehnen has kindly stepped in to help compile this information, as you may well be aware that the Security Team have been fully occupied with the recent Drupal core security release. Thanks Tim!

What have been your priorities in the last three months?

The security team has been focused on two key areas in recent months. The first is our core mission of supporting responsible disclosure of security advisories and updates in the Drupal community. The second has been preparing for the Drupal Association's release of the community tier of Drupal Steward, which will shortly be available to customers at drupalsteward.org.

And what has been your greatest success in the last three months?

In the last three months we've successfully released two Drupal core security advisories, and twelve security advisories for contributed modules.

These advisories represent the hard work of Drupal core and contributed module maintainers, security researchers, and the security team itself, and continue to prove that the Drupal security team is one of the best in the industry.

What has been your greatest challenge in the last three months?

Our greatest challenge came with the most recent Drupal core release, SA-CORE-2021-003. This core release came outside of the regular release window, and coincided with unplanned infrastructure instability that delayed the release. We know this impacted many members of our community waiting for the release to drop, especially those outside of US time zones, for whom the final release came quite late at night.

We've released a post-mortem blog to talk further about what happened and how we hope to mitigate these issues in the future.

Do you have a "call to action" you want to make to the Drupal Community?

To keep up to date with Drupal security information you can follow any of the channels described on the Drupal security landing page. In addition to the news page and sub-tabs, all security announcements are posted to an email list. To subscribe to email: log in, go to your user profile page and subscribe to the security newsletter on the Edit » My newsletters tab.

You can also get rss feeds for core, contrib, or public service announcements or follow @drupalsecurity on Twitter.

Lastly, you can join the #security-questions channel in Drupal Slack to ask real-time questions of other community members related to security.

Jun 04 2021

With the digital becoming a ubiquitous part of life, businesses, both B2C and B2B, need to adapt their strategies and redefine them with a digital-first mentality, to determine how to make the best use of digital channels and latest technological innovations.

While some elements of a digital strategy are common to both B2C and B2B, there are many more aspects where the two differ. In this article, we’ll compare the key factors of digital strategy in B2C versus B2B, with a focus on content and marketing strategies. 

B2C

In B2C, marketing and driving business growth is more large-scale, aiming to reach as many potential customers as possible. The focus is on the customer experience across the entire customer journey and a multitude of different devices, so experiences need to be tailored for this omnichannel reality.

Due to the high focus on CX, data-driven personalization and conversion rate optimization are also key. Data obtained through machine learning algorithms is used to get a 360 degree view of customers and enable real-time marketing across all channels that they frequent.

B2C brands frequently interact with their customers via social media - Instagram, Twitter, to a lesser extent Facebook, and lately we’ve also been seeing a huge surge in TikTok usage and opportunities there. Since the start of the pandemic, a lot of brands have also implemented their own e-commerce solutions or started a presence on Amazon.

The messaging and advertising in B2C is more brand and product-oriented (i.e. product marketing). It also leverages purpose-driven marketing that resonates most with Generations Y and Z, which form the youngest and yet the largest part of the global consumer base.

Marketing relies heavily on data, from both 1st and 3rd-party sources. Advertising is a widely used B2C tactic and takes place across the entire web (and sometimes even beyond it, e.g. via in-app push notifications or SMS). Data is often handled with a data management platform (DMP) or customer data platform (CDP).

B2B

In contrast, B2B is much more focused, often employing what’s called account-based marketing (ABM), which involves messaging targeted at prominent decision makers of potential client companies.

It’s often more direct and personal, and focuses more on building and nurturing close relationships rather than on optimizations to the immediate customer experience; this is why the phrase “human to human” or “H2H” is now popping up more frequently. This also means that personalization is key; however, it isn’t as software-driven as in B2C, but involves more human research and input.

While B2B uses social media to a lesser extent, LinkedIn is the B2B social media platform. A lot of research and initial outreach happens there, as does most of the advertising. And, while B2C primarily uses email for newsletters and promotional offers, it’s an absolutely essential channel in B2B communication.

As opposed to unique marketplace-type websites à la Amazon, a B2B company’s digital presence is mostly centered around its own website, which can have custom landing pages for specific audience personas, or tied to specific advertising/marketing campaigns.

B2B companies produce content which is more industry or company oriented, as its goal is less to lead to an immediate purchase and more to help move the client further down the sales pipeline. 

This content is often longer than in B2C and features important industry insights and company achievements. The former can be gated, meaning that visitors are able to obtain them in exchange for their personal data - which is then used to initiate a conversation with them. Gated content most typically occurs in the form of white papers, in-depth guides, reports or webinar recordings.

For technology needs, B2B companies more often tend towards enterprise software which is typically more expensive but also more capable and robust. Because of their need for 1-on-1 communication, efficient video conferencing solutions are also a must.

Like in B2C, B2B companies also rely on data, but here less so on 3rd-party data gathered through software and more on data obtained via human intel. Customer relationship management (CRM) platforms are often used for a cohesive overview of client relationships.

Top considerations for both

As we stated in the introduction, a lot of elements of a successful digital strategy are shared between B2C and B2B. A culture of agility and innovation is becoming commonplace in the fast-paced digital age with both a lot of competition and rapid market shifts, where competitive edge is gained through swift responses to constantly emerging trends.

Human-centered design and accessibility are also of extreme importance, the latter even more so in B2C since here access to a digital location often means access to physical goods or services.

Automation is another tool that’s essential for any kind of business which wants to thrive in the digital era. Both B2C and B2B companies will want to streamline their processes and free their employees up for the kind of work which requires a more human touch. 

Automated email campaigns are key marketing tactics in both fields, and we’re now also seeing a greater use of chatbots, which can be entirely automated or used together with human agents.

Since mobile usage continues to be on the rise, lately increasingly also in less developed parts of the world, mobile optimization is an absolute necessity for both areas. While B2B companies are less likely to have dedicated mobile apps than B2C brands, both need to ensure that the mobile experience is adequate and comparable to a website experience.

As for marketing and content, both B2C and B2B are seeing the value of combining physical and digital channels. In B2C, this may look like digital campaigns which are tied to in-store campaigns, or leveraging a technology like augmented reality to bring the in-store experience closer to the digital customer.

In B2B, again, this is more personal, e.g. sending personalized gifts through the mail to clients, in order to thank them for their collaboration and/or strengthen the relationship with them.

Both B2C and B2B employ different types of content to reach their audiences. In B2C, you’ll likely see a greater use of social media stories and other video content, but even B2B is now featuring more video and even audio content (e.g. podcasts) in addition to blog insights. As we mentioned earlier, though, B2B typically has more long-form content, which tends to be true for video and audio as well.

Repurposing content is another tactic that’s shared between the two. The same topic or product can be presented in a variety of ways which will have different effects on different customers; perhaps a blog post would perform better in video rather than in written form, or certain visitors may prefer the shorter and more visually appealing infographics.

Conclusion


Whether your business operates in B2C or in B2B, it needs to embrace a digital-first approach to business strategy and the leaner and more flexible tactics that are so well positioned in the digital age.

That said, the ways in which the two succeed with digital can be vastly different, as we’ve demonstrated in this article. This is particularly evident with digital content and marketing strategies, where even when the same tactics or tools are used, how they’re used differs (e.g. different social media platforms and ways of achieving personalization).

If your digital strategy is already sound, but you could do with some help in the development department, get in touch with us and we can help you select the best technologies for it and execute on it.

Jun 03 2021

Mapping Google Groups to Drupal user roles with Google Authentication seemed like a straightforward task, but there were a few wrinkles in the process that I didn’t expect. We stumbled into the need for mapping groups to roles while performing a routine Drupal core upgrade from 8.8.5 to 8.9.13. Complications with the existing LDAP authentication’s upgrade path — and conversations with the client — steered us towards replacing LDAP with Google Authentication.

Overall the idea is simple enough: when a user signs into your Drupal site using Google Authentication, their membership(s) to various Google Groups should map to some existing Drupal user roles. This way, users in your organization who are already affiliated with various groups can be automatically assigned appropriate user permissions in your Drupal site for editing or accessing content, approving comments, re-arranging homepage promotions or whatever the case may be.

The majority of the functionality is established with just a few downloads, installations, and configurations. Our custom functionality relies on the Social API, Social Auth, and Social Auth Google modules, as well as the Google APIs Client Library for PHP. The first step is to install and enable the above-mentioned modules and library. Then comes a little insider info on configuring the necessary Google Service Account, and finally a custom implementation of the EventSubscriberInterface that reacts to authentication via Social Auth to read Google Groups memberships. Our custom code is packaged into a simple demo module you can download to get all of this working for yourself.
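
Assuming a Composer-managed Drupal site with Drush available (both assumptions on my part; the post doesn't prescribe an install method), fetching and enabling those pieces might look like this:

```shell
# Pull in the Social Auth Google module (Composer resolves Social API and
# Social Auth as dependencies) plus the Google APIs Client Library for PHP.
composer require drupal/social_auth_google google/apiclient

# Enable the modules.
drush en social_api social_auth social_auth_google -y
```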

There are a lot of moving parts for something you’d expect to be a simple task, but in the end it’s a relatively quick setup if you know what to expect.

The devil’s in the details: Google Service Account configuration

This bit is pretty counterintuitive: Google users that authenticate via Google Authentication don’t actually have read access to their own groups. Weird, right? Where are you supposed to pull group information if the accounts can’t read their own group data from the API?

The answer is a Google Service Account. A properly configured Google Service Account can have read access to both users and their group relationship information, which allows it to act as a go-between during authentication to pull Google Group data and map it to Drupal roles. Configuring your Google Service Account isn’t self-explanatory, but luckily Joel Steidl put together the following screencast to walk you through it. Once you’ve set up the Google Service Account, you’ll just need to create and download a JSON key file to include in the code we’ll walk through further below.

Now that your Google Service Account is properly configured and your JSON key file has been safely stored away, it’s time to tie everything together with a few blocks of code that do the actual role assigning.

Role assignment during Google Authentication: The who’s who handshake

Once you’ve configured your Google Service Account and downloaded your JSON key file, you need code that reacts to Social Auth events (like a login via Google Authentication) to trigger the role mapping functionality. I’ve added some example code below that's part of a demo module that should get you 95% of the way there, although in our demo the role mapping itself — which Google Group becomes which Drupal role — is hard coded.

If you download my demo module you’ll see I’m reacting to both the USER_CREATED and USER_LOGIN events provided by the Social Auth module’s getSubscribedEvents method. This way we can reevaluate Google Group to Drupal role mappings each time a user logs in to make sure their roles and permissions stay up to date. You’ll also see that I’m verifying the user’s email address to make sure it falls within our expectations (e.g. anything@your_domain.com), which you can modify or remove depending on your needs — but you want to make sure that your Google Service Account will have the appropriate access to this user’s group info.

The magic happens in the determineRoles method, where the JSON key file and the Google Service Account you setup previously will come into play. You’ll need to modify the getSecretsFile method to return your JSON key file, then change the $user_to_impersonate variable to the email address that you granted the Groups Reader role as discussed in Joel’s screencast. Finally, you'll need to update the $roleAssignment array with actual Google Group names and Drupal roles.

/**
 * When a user logs in, verify their Google-assigned groups and set roles.
 *
 * @param \Drupal\user\UserInterface $givenUser
 *   The passed-in user object.
 */
protected function determineRoles($givenUser)
{
  $KEY_FILE_LOCATION = $this->getSecretsFile();
 
  // Only run if we have the secrets file
  if ($KEY_FILE_LOCATION) {
    // 1. Admin SDK API must be enabled for the relevant project in the Dev Console.
    // 2. A service user must be created under the relevant project.
    // 3. The impersonated user must have the Groups Reader permission.
    // 4. The scope must be added to Sitewide Delegation.
    $user_to_impersonate = 'name@your_domain.com';
    $client = new Google_Client();
    $client->setAuthConfig($KEY_FILE_LOCATION);
    $client->setApplicationName('Get a Users Groups');
    $client->setSubject($user_to_impersonate);
    $client->setScopes([Google_Service_Directory::ADMIN_DIRECTORY_GROUP_READONLY]);
    $groups = new Google_Service_Directory($client);
 
    $params = [
      'userKey' => $givenUser->getEmail(),
    ];
    $results = $groups->groups->listGroups($params);
 
    // Hold Map for roles based on Google Groups
    $roleAssignment = [
      "Author Group Name" => "author",
      "Editor Group Name" => "editor",
      "Publisher Group Name" => "publisher",
    ];
 
    // Loop through the user's groups and add approved roles.
    foreach ($results['groups'] as $result) {
      $name = $result['name'];

      // Assign roles based on the mapping above.
      if (array_key_exists($name, $roleAssignment)) {
        $givenUser->addRole($roleAssignment[$name]);
      }
    }
    // Persist the user once, after all roles have been assigned.
    $givenUser->save();
  }
}
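
Stripped of the Google and Drupal API calls, the mapping step at the heart of determineRoles is a plain array lookup. Isolated for illustration (the function name is mine, not part of the demo module):

```php
<?php

/**
 * Maps Google Group names onto Drupal role machine names.
 *
 * @param string[] $groupNames
 *   Group names as returned by the Directory API.
 * @param string[] $roleAssignment
 *   Map of Google Group name => Drupal role machine name.
 *
 * @return string[]
 *   De-duplicated list of role machine names to grant.
 */
function mapGroupsToRoles(array $groupNames, array $roleAssignment): array {
  $roles = [];
  foreach ($groupNames as $name) {
    if (array_key_exists($name, $roleAssignment)) {
      $roles[] = $roleAssignment[$name];
    }
  }
  return array_values(array_unique($roles));
}
```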

That’s it! With your modified demo module enabled, your users should get their roles automatically assigned on login. We were pretty excited to get Google Groups mapping to Drupal roles, so much so that my colleague Matthew Luzitano is considering packaging this functionality (and GUI configurations!) into a Drupal module. So if you’re not in any rush and don’t feel like browsing our demo code, you can always wait until that happens.

Do you have a different approach to mapping Google Groups to Drupal roles? We’d love to hear about it! Let us know in the comments below.

Jun 03 2021
Jun 03

Editor’s note: This post is the first in a three-part series: Highlighting Accessibility

In recent years, accessibility has become an essential factor for teams tasked with bringing new products, services, and experiences to market. The reason for this is clear: design works best when it works for everyone. Historically, though, either out of ignorance or intention, we have often designed with the assumption that all users are alike in their abilities to perceive and interact, but more and more we’re recognizing that just like food & nutrition, humans vary considerably in their preferences and restrictions. 

To be fair, it is much easier to assume that our users are largely homogeneous; that they not only share the same preferences and goals but also their ability to achieve those goals. As we learn more about the nature of human cognition and interaction, we come to appreciate how a whole host of physical, mental, and even social characteristics can influence the fundamental ways we perceive the world.

So what is accessibility?

Broadly speaking, accessibility describes the degree to which something can be entered, or used. Architects and civil engineers have long incorporated things like wheelchair ramps, motion- or button-activated doors as ways to improve accessibility in the physical world; businesses often offer telecommunications devices for the deaf (TDD) capabilities for sales/customer services phone lines; car retailers offer vehicle modifications to make them wheelchair accessible and operable by persons with a whole range of physical disabilities. All of these enhancements are the result of the painstaking work by affected groups and their advocates toward gradual recognition of and empathy toward (and laws protecting) the varying needs of all human beings. 

“The digital world has changed so many of our lives for the better and it’s crucial to make sure the same can be true for everyone.”

Troy Shields, Sr Front End Engineer, Zivtech

In the digital world, these enhancements are defined by the Web Content Accessibility Guidelines (WCAG), put forth by the World Wide Web Consortium’s (W3C) Web Accessibility Initiative (WAI). As one of the foremost advocates for the needs of all users, the WCAG and related guidelines serve as a north star for organizations and project teams looking to justly serve users of all abilities. Indeed, the guidelines are extensive and cover a wide range of topics and best practices, but the best place to start to familiarize yourself with accessibility is via their four principles:

Principle of Accessibility #1 Perceivable

1. Perceivable

Information and user interface components must be presentable to users in ways they can perceive.

Naturally, this is a given for any interfaces we create, but it’s easy to overlook how the nature of perceivability varies by ability. For low-vision users, a simple thing like an icon, button, or image may not be perceivable at all. In these and similar scenarios, designers and builders need to account for the impairment with consistent use of alt text and other modifications. Now think of a chart or infographic: how much more difficult might it be for a low-vision user to consume that information?

Principle of Accessibility #2 Operable

2. Operable

User interface components and navigation must be operable.

Once we’ve made our user interface fully perceivable, we must then ensure interactive elements are usable. This can be really challenging, as there are so many adaptations that some of our users need to make in order to interact with the physical and digital world, most of which we take for granted. Users with visual impairments may have a hard time discerning and using things like mouse-over menus, confusing or unlabeled buttons, and the like. Moreover, users with physical disabilities may have a hard time completing form elements where multiple simultaneous maneuvers are required, e.g. Ctrl + click.

Principle of Accessibility #3 Understandable

3. Understandable

Information and the operation of user interface must be understandable.

Understandable design creates and maintains consistent conventions that help users anticipate how the interface will perform as they progress through their journeys. It’s predictable in its use of design patterns and information architecture. Amazon sells a wide range of products, but its layout for books is the same as it is for sporting goods, as it is for clothing. Imagine if every product or product category had a unique layout - how challenging might that be for a user to anticipate where they might find the product specs or customer reviews?

Principle of Accessibility #4 Robust

4. Robust

Content must be robust enough that it can be interpreted reliably by a wide variety of user agents, including assistive technologies.

As the movement to accommodate people of different abilities has gained steam, so has the development of assistive technologies to help them. Today, there is a wide range of tools and devices designed to deliver improved user experiences for users with a wide range of physical abilities. Unfortunately, these assistive technologies do not all perform the same, so designers and builders must work to deliver a consistent experience. Consider web browsers like Chrome, Firefox, and Internet Explorer. All of these tools are built around standardized languages and frameworks like HTML (hypertext markup language), yet they all perform slightly differently. Imagine how broad the discrepancies might be for tools that aren’t based on any widely accepted standards.

Principle of Accessibility BONUS: Receptive

BONUS: Receptive

Interfaces should provide ways for users to give feedback on their experience.

We humbly offer this as a fifth principle for your consideration. In the spirit of continuous improvement, we should always be looking to enhance the experiences we create, and that often starts by capturing feedback and learning from those we serve. Even with many hours of accessibility training, and several successful projects under our belt, our team recognizes that we can always get better at delivering exceptional experiences. We should all strive to be open to learning and feedback, and more consistent in our willingness to apply what we learn.

...

Overall, the WCAG Accessibility Principles seek to inspire empathy in designers and builders who are committed to creating digital experiences that work for all. They help establish internal project standards that are more thoughtful, curious, and ultimately more inclusive. The principles should help organizations and project teams factor accessibility in at the start of the project instead of just running some tests at the end. Indeed, accessibility is best executed when it’s built-in to the core processes of how we work, standardizing up-front approaches for how we approach elements like navigation and information architecture, being consistent with the use of interactive elements, and so on. 
 

"Accessibility should not be an afterthought; it should be an integral part of your design strategy right from the start."

Barry Brooks, Creative Director, Zivtech

Indeed, delivering truly accessible experiences requires additional effort and forethought, but it’s absolutely worth it. When we enable more of our users to succeed in their journey, we succeed as well. 

Jun 03 2021
Jun 03

Occasionally I find myself needing plugin-like functionality, where users/downstream can throw a class into a folder and expect it to work. My script is supposed to find and instantiate these plugins during runtime without keeping track of their existence.

In a regular Drupal module, one would usually use the plugin architecture, but that comes with its overhead of boilerplate code and may not be the solution for the simplest of use cases.

Many class finder libraries rely on get_declared_classes() which may not be helpful, as the classes in question may not have been declared yet.

If you are on a Drupal 8/9 installation and want to use components already available to you, the Symfony (file) Finder can be an alternative for finding classes in a given namespace.

Installing dependencies

Outside of Drupal 8/9, you may need to require this library in your application:

composer require symfony/finder

A simple example

use Symfony\Component\Finder\Finder;

class PluginLoader {

  /**
   * Loads all plugins.
   *
   * @param string $namespace
   *   Namespace required for a class to be considered a plugin.
   * @param string $search_root_path
   *   Search classes recursively starting from this folder.
   *   The default is the folder this class resides in.
   *
   * @return object[]
   *   Array of instantiated plugins.
   */
  public static function loadPlugins(string $namespace, string $search_root_path = __DIR__): array {
    $finder = new Finder();
    $finder->files()->in($search_root_path)->name('*.php');
    foreach ($finder as $file) {
      $class_name = rtrim($namespace, '\\') . '\\' . $file->getFilenameWithoutExtension();
      try {
        $plugins[] = new $class_name();
      }
      catch (\Throwable $e) {
        continue;
      }
    }
    return $plugins ?? [];
  }

}

Usage

$plugin_instances = PluginLoader::loadPlugins('\Some\Namespace');

This is just an abstract catch-all example with a couple of obvious problems which can be circumvented when using more specific code.

In the above example, the finder looks for all files with the .php extension within all folders in a given path. For each file found, it tries to instantiate a correspondingly named class. The try-catch block keeps it from failing when it tries to instantiate non-instantiable classes, interfaces, and the like.

The above can be improved upon by making assumptions about the class name (one could be looking for class files named *Plugin.php) and examining the file content (which the Finder component is capable of as well).
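
For instance, applying both refinements to the Finder chain from above (the *Plugin.php naming convention and the PluginInterface string are assumptions for illustration, not part of the original example):

```php
use Symfony\Component\Finder\Finder;

$finder = new Finder();
$finder->files()
  ->in(__DIR__)
  // Only consider files that follow a naming convention.
  ->name('*Plugin.php')
  // Cheap content check before attempting instantiation.
  ->contains('implements PluginInterface');
```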

Let me know of other simple ways of tackling this problem!

Jun 03 2021
Jun 03

Modern websites have become more and more extensive. At some point, a user may feel confused due to a large amount of content. At the same time, the website itself starts to be hard to use. Then the need to create a convenient, easy to develop, and maintain navigation appears. The solution which will work great in this case is a mega menu.

What is a mega menu?

A mega menu is a type of navigation consisting of several levels, used on websites. The first level looks like a standard list of links, but after hovering over one of its elements, additional options are shown. These options consist not only of a list of links in the second level of navigation but often have a third or even fourth level. Sometimes there are also visible photos or even videos. A menu built in this way is visually appealing to the user and allows them to easily find their way around large websites.

Below you can see an example of how the mega menu works on a webpage built in Droopler.

Mega menu in Drupal on the example of a website created in Droopler, a website builder

 

Mega menu in Drupal

In the Drupal CMS, creating a mega menu is quite a difficult task. That’s why modules that simplify building advanced navigation were made. One of them is Mega Menu. Not only will it save you countless hours creating and customizing complex navigation, but it will also help your website impress your users.

Here’s how the Mega Menu module works: based on the default Drupal menus, configurable mega menus are created. We can add submenus or even whole blocks to them, using a friendly configuration interface. Additional options include adding several columns in the submenu and setting styles and animations.

Creating an extensive menu in Drupal, using a Mega Menu module

Source: Drupal.org

However, it’s important to mention the downsides of Mega Menu. One of them is that the module uses a separate Bootstrap library. This not only increases the size of the CSS and JS files, but can also conflict with the Bootstrap library already used on the website. The second disadvantage is that the navigation built with the Mega Menu module is poorly optimized for mobile devices.

In Droopler, a Drupal distribution, we rewrote the frontend part of the module from scratch. This way, we eliminated the most significant downsides of the default Mega Menu module. Using Droopler as an example, we’ll show you how to create an extensive menu for your website with this module.

Creating an extensive menu in Droopler

Droopler is a Drupal-based distribution dedicated to building corporate websites. It contains ready-to-use elements and tools enabling quick creation of the websites.

Main menu configuration

We can find the main menu settings in the Drupal administration menu in Structure. Next, in Drupal 8 Mega Menu, we look for the Main navigation element and from this element's menu, we go to Edit links.

Configuration of main menu in the Drupal administration menu

After going to the edit page, the working structure of the main menu appears, which we can freely edit using the "drag and drop" method. To add more levels to the menu, we need to move the secondary element to the right to create an "indentation".

Adding new blocks

To add a block to the menu, we must first add a new content block. We go to Structure -> Block Layout -> Add custom block and select the content block.

Adding a new content block to the menu

The system takes us to a form, where we add a new block. We fill out the title and the content of interest. In this case, it’ll be the title and a few links. Next, we add another block with the content and photo.

Editing a custom block in Droopler

 

After adding two blocks, we need to add them to the region on the page and - at the same time – disable the display of these blocks in the region. In the Block Layout of the header section, we click Place Block, and then select from the popup window the block we want to add and click Place block. In the next step, we uncheck the show title option and save.

Adding new blocks to the region on the page

 

When the blocks are added to the region, we should disable them. To do this, we click Disable in the block's action menu.

Disabling display of new blocks in the region

 

After completing this step, we can start adding blocks to the menu. To do this, we go to Structure, Drupal 8 Mega Menu, and then click config next to the main navigation.

We are taken to the menu edit screen. In our case, we’ll add blocks to the "products" element. After clicking "products", a configuration panel appears on the right, in which we mark the Submenu checkbox as active.

Activating the Submenu in the configuration panel of the mega menu in Drupal

 

Now a workspace appears under "products", where we’ll add our blocks in a moment. After clicking on this element under "products", another configuration panel appears, but this time it concerns our workspace.

The configuration panel of the workspace

 

In this panel, we use the plus and minus buttons to set the number of columns in our mega menu. In the Grid field, we set the column’s width, while the Blocks field is used to select the previously added blocks.

After setting the appropriate options, we click Save and our menu should look similar to the one below.

The ready mega menu built with Droopler, a Drupal distribution, and the Mega Menu module

 

Summary

A mega menu is a good option for presenting the advanced structure of a website or a web application to the user. With the Droopler distribution and the Mega Menu module, we can easily build advanced pages with user-friendly navigation that clearly shows the structure of a large website.

Jun 02 2021
Jun 02

Theme settings are simple and powerful, here's me figuring out how to create one.

Creating theme settings has so many uses. For example, you might have a sub-theme that wants to set a theme setting for the main/accent colour, and then use CSS variables to pull that into your theme - simple. Or, in my case, I want to create a base theme that is styled with CSS and JS, but allow sub-themes to switch off those libraries and just use the HTML from our templates.

Here's how:

I've started streaming some of my work, especially if it's contributions to open source, on twitch if you'd like to follow along/subscribe. As it turns out, Twitch deletes your videos after 14 days, so I have also been uploading them to YouTube. Feel free to subscribe.
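
For reference, the core of the pattern is a form alter in the theme plus theme_get_setting(). Here's a minimal sketch (not necessarily identical to what the video shows; the theme name mytheme and the disable_libraries setting are placeholders I made up):

```php
<?php

// In mytheme/theme-settings.php (or mytheme.theme).

use Drupal\Core\Form\FormStateInterface;

/**
 * Implements hook_form_system_theme_settings_alter().
 */
function mytheme_form_system_theme_settings_alter(array &$form, FormStateInterface $form_state) {
  $form['disable_libraries'] = [
    '#type' => 'checkbox',
    '#title' => t('Disable the base theme CSS/JS libraries'),
    '#default_value' => theme_get_setting('disable_libraries'),
  ];
}
```

The saved value is then readable anywhere in the theme, or in a sub-theme, via theme_get_setting('disable_libraries').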

Jun 02 2021
Jun 02

Last week, when I renewed my yoga teaching credentials through the Yoga Alliance, I was required to agree to an "Ethical Commitment" based on values intrinsic to the practice of yoga, such as ahiṃsā (nonviolence), satya (truthfulness), asteya (not stealing), aparigraha (non-possessiveness), and santoṣa (contentment). While it might seem like such an agreement would be limited to my role as a yoga teacher, these same principles inform decisions that I make in all aspects of my life, including how I build my website.

The page on this site titled "How I Built This Site" describes some of the technical choices I have made to represent myself online. This is the first article in a series of articles that will describe each of these choices in more detail.

This website is built using a content management system called Drupal. The Drupal community is one of the largest free software communities in the world, with more than 1,000,000 contributors working together. The values and principles of the Drupal community align well with the Ethical Commitment to which all yoga teachers certified by the Yoga Alliance must agree. While not all of the ethical commitments overlap, here are a few examples to support my claim:

While words and documents are important, I am more influenced by my lived experiences. In both communities I consistently interact with people and witness behaviors that align with my conception of an ethical life.

One significant difference is the Drupal community's commitment to free software that is difficult to find among yoga teachers. Yoga teachers rely heavily on "free" proprietary services that exploit their users, such as Instagram, Facebook, YouTube, etc. While I have accounts on all of those platforms, I avoid using them whenever possible.

Rather, my online presence relies heavily on Drupal. In this sense, Drupal functions as my "ethical base." Many of the methods I use to communicate with people online — newsletters, RSS feeds, forms, etc. — are provided by Drupal. Certainly there are many ethical alternatives to the tools used by "surveillance capitalists," but Drupal happens to be the one that I know well. I've been using Drupal for more than a decade and I personally know many of the people who contribute to Drupal.

Both yoga and Drupal have improved my life, and there is nothing magical about either of them. Drupal allows me to interact online in a way that feels consistent with my values. People who use Drupal are granted the "four essential freedoms" of free software. While it would certainly be easier to just use the tools provided by Big Tech, I cannot use those tools in good conscience. Drupal is not automatically ethical, and can be used for all sorts of purposes, both beneficial and harmful. In my case, I use Drupal because I wish to be nonviolent, truthful, and kind to all people. For more than a decade Drupal has provided a solid ethical foundation for my online presence.

Jun 02 2021
Jun 02

If you need a temporary buffer in Drupal to save data to, Drupal's tempstore service might come in handy. We used it, for example, in a webshop module where users didn't have a saved address yet and also had the option to never save their address permanently (because of GDPR).

So, after some searching, the code is pretty straightforward. This example might save you some time implementing it.

We implemented this in a custom Drupal webshop service, the functions are called in other code via dependency injection. So here is how to set and get your custom data with help of Drupal's tempstore:

(Be aware: the tempstore data will expire automatically)

  /**
   * Set data in Drupal's temp_store.
   *
   * @param array $vars
   */
  public function setMyDataInTempStore(array $vars) {
    // Get tempstore service.
    $tempstore = \Drupal::service('tempstore.private');
    // Get tempstore data we need.
    $tempstore_data = $tempstore->get('my_custom_key');
    $params = $tempstore_data->get('params');
    // Fill vars.
    $params['var_1'] = $vars['var_1'];
    $params['var_2'] = $vars['var_2'];
    // Save vars to tempstore.
    $tempstore_data->set('params', $params);
  }

  /**
   * Get data from Drupal's temp_store.
   *
   * @return array
   */
  public function getMyDataInTempStore() {
    // Get tempstore service.
    $tempstore = \Drupal::service('tempstore.private');
    // Get tempstore data we need.
    $tempstore_data = $tempstore->get('my_custom_key');
    $tempstore_params = $tempstore_data->get('params');
    // Get vars.
    $my_var_1 = $tempstore_params['var_1'];
    $my_var_2 = $tempstore_params['var_2'];
    // Return data in array.
    return [
      'var_1' => $my_var_1,
      'var_2' => $my_var_2,
    ];
  }
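
For completeness, calling code could look like this (via the service container rather than proper constructor injection, to keep the sketch short; the service id my_module.webshop is a made-up placeholder):

```php
$webshop = \Drupal::service('my_module.webshop');

// Buffer the address without persisting it permanently.
$webshop->setMyDataInTempStore([
  'var_1' => 'Some street 1',
  'var_2' => '1234 AB Amsterdam',
]);

// Later in the checkout flow.
$address = $webshop->getMyDataInTempStore();
```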
Drupal code | Planet Drupal | Written by Joris Snoek | Jun 02, 2021

Jun 02 2021
Jun 02

For a great number of use cases

There is a whole range of use cases for a Site Assistant. What they are used for and with what goal depends very much on the context. To help you understand what a Site Assistant can do for you, we present three different use cases here:

Variant 1 - Product information

The Site Assistant is displayed on the product detail pages of a technology company. There it contains for example:

  • A link to the online store where the products can be purchased.
  • The contact details to the sales department for bulk orders
  • The contact details for technical customer support for queries about products that have already been purchased
  • Information and forwarding to the current product innovation of the company

Variant 2 - Events

The Site Assistant is displayed on the trade fair pages of a venue. There it contains for example:

  • A link to the exhibitor registration page
  • A link to the ticket store for visitors
  • The contact details of the project management for further inquiries

Variant 3 - Press area

The Site Assistant is displayed in the press and news area of an international company. There it contains for example:

  • A link to the media library to download additional media
  • A subscription to the press newsletter
  • The contact details of the press spokesperson
  • The opening hours of the company

Unrestricted design for editorial teams

The Site Assistant module is built in such a way that editors or other authorized roles can use it to quickly and easily add content, reuse it, as well as display the assistant on any page.

A huge advantage of our module (especially compared to other solutions) is that any number of assistants can be used for different purposes on one Drupal site. For example, pages on product topics can contain an online store and the contact of the person responsible for the product, whereas news pages can display press contacts and downloads in the assistant.

All in all, the module offers four enormous advantages for editorial teams and page operators from our point of view:

  1. It offers editorial teams themselves a simple tool to provide site visitors with up-to-date and relevant information and to provide them with customized navigation options.
  2. Furthermore, assistants can be used as a marketing tool to better reach customers and direct them to specific (marketing relevant) content.
  3. Since the module is shared with the Drupal community, it is available for everyone to download. So there are no costs for extending the Drupal site and using the Site Assistant module.
  4. The complete design of the assistants can be customized. This requires site builder skills to customize the Drupal templates and CSS if necessary.

What exactly is in the Site Assistant module? We already wrote a post about this a few weeks ago titled "New Drupal module for contextual assistants".

Jun 02 2021
Jun 02

For my local Drupal development environment over the past few years, I’ve been quite happy using Docksal for my projects. Recently, I onboarded to a new project that required me to use Lando, another popular local development server. In addition, I use PHPStorm, an integrated development environment (IDE), for coding. When I am coding, I consider Xdebug to be of immense value for debugging and inspecting variables.

Xdebug is an extension for PHP, and provides a range of features to improve the PHP development experience. [It's] a way to step through your code in your IDE or editor while the script is executing.

Getting started: the basic setup

In this article I will share with you my basic setup to get up and running with Xdebug 3. The core of this tutorial requires Lando and PHPStorm to be running on your machine. You can head over to GitHub to grab the latest stable release of Lando. Installing Lando will also install Docker Desktop, a containerized server environment. You'll also need PHPStorm to be installed. If you have any issues getting Lando running, you can check their extensive documentation. You can spin up a Drupal 9 site using Lando, but as of yet there is no option that will set it up as a true Composer-based workflow. Instead, you can use Composer itself (composer create-project drupal/recommended-project) to create a new Drupal 9 project and then initialize it with Lando. For this, you'd need Composer 2 set up globally on your local machine, but once Lando is up and running, you can switch to using lando composer. Thereafter, be sure to install Drupal as well.
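
Condensed into commands, that bootstrap could look like this (the project name d9sandbox matches the recipe below; the lando init flags are one way to do it, not the only one):

```shell
# Create the Drupal 9 codebase with your global Composer 2.
composer create-project drupal/recommended-project d9sandbox
cd d9sandbox

# Initialize Lando against the current directory and start the containers.
lando init --source cwd --recipe drupal9 --webroot web --name d9sandbox
lando start

# From here on, run Composer inside the container, e.g.:
lando composer require drush/drush
```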

Configure the Lando recipe

Lando has the notion of "recipes" for specific server setups, and you'll want to set up a basic Drupal 9 site running on Lando. The TL;DR for the one I am using is below. This code goes in the .lando.yml file in the root of your project. Note that any time you change your recipe, you will need to run lando rebuild -y.

name: d9sandbox
recipe: drupal9
config:
  php: '7.4'
  composer_version: '2.0.7'
  via: apache:2.4
  webroot: web
  database: mysql:5.7
  drush: true
  xdebug: true
  config:
    php: lando/config/php.ini

services:
  node:
    type: node:12.16
  appserver:
    xdebug: true
    config:
      php: lando/config/php.ini
    type: php:7.4
    overrides:
      environment:
        PHP_IDE_CONFIG: "serverName=appserver"

# Add additional tooling
tooling:
  node:
    service: node
  npm:
    service: node

Of note in the above code:

  1. Xdebug set to true
  2. PHP set to 7.4
  3. MySQL set to 5.7
  4. Composer set to 2.x
  5. Pointer to a custom php.ini file where we will need additional configuration for Xdebug.

Configure php.ini

Now, we need to create our php.ini file within lando/config. Create these directories if they do not exist already. You'll want to set these variables for Xdebug:

[PHP]
xdebug.max_nesting_level = 256
xdebug.show_exception_trace = 0
xdebug.collect_params = 0
xdebug.mode = debug
xdebug.client_host = ${LANDO_HOST_IP}
xdebug.client_port = 9003
xdebug.start_with_request = yes
xdebug.log = /tmp/xdebug.log

I also like to add memory_limit = -1 to this file to avoid any out-of-memory issues. Note that port 9003 is set for Xdebug; Xdebug 3 requires this. If you've used an older version of Xdebug in the past, the port was typically 9000.

Rebuild Lando

Now that we have Lando all set, go ahead and run lando rebuild -y so that these updates take effect. Once that is done, you'll see something like this in terminal:

NAME            d9sandbox
LOCATION        /Users/me/Projects/d9sandbox
SERVICES        appserver, database, node
APPSERVER URLS  https://localhost:55165
                http://localhost:55166
                http://d9sandbox.lndo.site/
                https://d9sandbox.lndo.site/

Once that is done, go to your site and make sure it loads. In my case the url is http://d9sandbox.lndo.site/.

Configure PHPStorm

Now we are ready to configure PHPStorm. Here, we will connect Docker to the IDE. Below is a series of screen captures that illustrate how to set this up.

Choose a language level and a CLI interpreter

The first thing to do is to go to preferences and search for PHP. Here you will want to set PHP 7.4 as the language level. Next click on the three vertical dots to choose a CLI interpreter.

Choosing a language level and a CLI interpreter in PHPStorm

Configure the CLI interpreter

Here you will choose "From Docker…"

Configuring the CLI interpreter in PHPStorm

Choose the CLI image name

Here you will choose "Docker" and then the image name, in our case devwithlando/php:7.4-apache-2. Now click apply and ok to save all these settings.

Choosing the CLI image name

Trigger Xdebug

Now for the final step, we are ready to trigger Xdebug with the steps below.

  1. Input a variable to set a breakpoint for. In the example below, I've set a fake variable, $a = 1; within function bartik_preprocess_node(&$variables) {...}
  2. Run lando drush cr
  3. Click in the left margin parallel to the variable so that you see a red dot.
  4. Click on the Phone icon at the top of the IDE so that it turns green. This is the "listener" where we set PHPStorm to listen for incoming connections so as to trigger Xdebug.
  5. Now click accept after choosing index.php. This is a standard convention.
  6. If all goes well, you will now see Xdebug get triggered. Click “Accept.”
Triggering Xdebug in the IDE
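As a sketch, the throwaway variable from step 1 could look like this inside Bartik's node preprocess hook (in core/themes/bartik/bartik.theme); $a is purely a dummy value that gives the breakpoint a line to stop on:

```php
/**
 * Implements hook_preprocess_HOOK() for node templates.
 */
function bartik_preprocess_node(&$variables) {
  // Dummy variable used only as a place to set an Xdebug breakpoint.
  $a = 1;

  // ... the theme's existing preprocess logic continues here.
}
```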

View Xdebug output

Once the above is set, you should now see the actual output from Xdebug as shown below. Thereafter, you can inspect arrays to your heart’s content!

View the Xdebug output

Tips and Tricks

  • I have found that if other Lando projects are running, Xdebug will sometimes not be triggered, so it is best to only run one project at a time. To do that, run these commands in the root of your project.
lando poweroff
lando start
  • If for some reason the listener does not trigger Xdebug, stop it and clear Drupal cache. Then start listening again.

Summary

Hopefully this article has helped you get up and running with Lando and Xdebug 3 in PHPStorm. I cannot stress enough how useful Xdebug is when working on a Drupal website. I’ve seen folks struggle with using Kint, print_r(), and var_dump(). While these are somewhat useful, to me nothing compares to the speed, precision, and usefulness of Xdebug.


Jun 01 2021
Jun 01

Are you still on Drupal 7 or 8?

All software, even that of Drupal’s top world ranking open source community, comes with a shelf life.

Drupal 7 is now a decade old, with Drupal 8 arriving just four years later. These previous major Drupal versions weren’t left in the dust with the release of Drupal 9 in the summer of 2020. But that support has an expiration date coming soon. Both versions 7 and 8 have set “End of Life” dates, which means they will no longer be supported by the official Drupal community.

Version 9: Drupal in its Prime

Organizations are moving to Drupal 9 to reap the benefits of fresh features and community-supported innovation. The pace of adoption is faster than ever. It took one month to go from 0 to 60,000 sites on Drupal 9 compared to taking 7 months to get 60,000 sites on Drupal 7. Although good progress is being made with organizations moving to Drupal 9, there are still thousands of Drupal 7 and 8 sites live. If you have a Drupal 7 or 8 site, you need to start planning immediately to ensure you are prepared for support ending.

So, what is Drupal 7 and 8 End of Life?

End of Life marks the date when the Drupal community officially stops supporting older versions of Drupal. The Drupal community has always supported the current and previous versions of Drupal but with the launch of Drupal 9 last year and Drupal 10 scheduled for June 2022, “End of Life” is quickly approaching for Drupal 8 and Drupal 7’s days are numbered as well.

key dates for drupal 7 and 8

State of Drupal presentation (April 2021)

It might come as a surprise (and seem a little odd) that Drupal 8’s End of Life occurs before Drupal 7’s. There are a couple of reasons behind this. First, the transition from Drupal 8 to Drupal 9 is not a significant effort in most cases. Drupal 9 is not a reinvention of Drupal; there are only two key differences: updated dependencies and the removal of deprecated APIs. The Drupal Association illustrates the transition from Drupal 8 to 9 as just a station on the same track (versus moving the train to a different track entirely for previous upgrades). The other reason is that Drupal 8 depends on Symfony 3, and Symfony 3’s end of life is November 2021.

drupal upgrade train track illustration

Source: Understanding How Drupal 9 Was Made, Drupal.org

For Drupal 7, the end of life was originally set for November 2021. The date was pushed out a year due to the impact COVID-19 had on the many organizations still on Drupal 7, the fact that Drupal 7 has no dependency on Symfony 3, and the effort needed to upgrade from Drupal 7 to Drupal 9 (which requires a migration and site rebuild).

How does this impact you?

Drupal is an Open Source project with a uniquely robust security model:

  • Drupal has a dedicated team of Security professionals who proactively review core and contributed modules for vulnerabilities.
  • When a security vulnerability is identified, the Drupal Security Team is notified and code is quickly fixed to remove the vulnerability.
  • When a fix is available an announcement immediately goes out and a patch is released to the community. Drupal sites are also automatically notified that they need to upgrade.

When Drupal 7 and 8 support ends, there will be no more Drupal core updates to Drupal 7 and 8 - even for critical security issues. The contributed modules that power your site will also no longer be actively tested and reviewed, patches and security evaluation will depend on your internal resources.

A recent study indicates that nearly 12 million websites are currently hacked or infected. Ensuring you correctly handle the security implications of Drupal 7 and 8’s End of Life is essential.

What do YOU need to do?

Without taking active steps to protect your website, you are going to be vulnerable to a security breach. Drupal 7 and 8 are widely-used content management systems that have served as a platform for some of the world’s largest websites. It is public knowledge that support for them will be ending. It’s likely that hacking groups are waiting for official support to end to use security exploits that they have already discovered, to increase the number of systems they can access before they're patched.

Mitigating this risk is much easier with an experienced partner. We advise our clients to take the following steps:

  1. Ensure your website will be secure after Community Support ends. You can do this by developing an internal plan to monitor future Drupal 7 or 8 security releases, or engaging with your Drupal hosting provider and agency to cover you while you plan and execute the move off of Drupal 7 or 8.
  2. If you’re on Drupal 7 or 8, it’s likely that the time is now for a reassessment of how you use the web. Partnering with an expert Drupal agency like Mediacurrent will help you to reassess your website plans, determine if your Digital Strategy is effective, and ensure your next platform serves your needs now and can adapt in the future.
  3. Once you have identified the correct platform, plan and execute your migration. By taking care of security first, and securing the right partner, you can take the time to correctly plan and build your next Digital Platform in Drupal.

Learn to love the upgrade

While Drupal 7 and 8 End of Life might mean more work for you, we view it as an opportunity. The way we consume information on the web has changed drastically since Drupal 7 and 8 launched, and if you are still on these versions and not planning on innovating, you are likely putting yourself at a serious competitive disadvantage.

In the longer term, sticking with Drupal 7 or 8 not only means you will be fighting a constant battle against security vulnerabilities, but also that you will be doing so with a dwindling number of allies. As time goes on, Drupal 7 and 8 sites will disappear. Fewer agencies will offer any sort of Drupal support for these versions. The talent pool of developers will dry up - anyone who learns Drupal today will be learning on newer releases.

Upgrading from Drupal 7 or 8 to Drupal 9 is the opportunity to revolutionize the way you think about the web as a business tool. If you have a Drupal 7 or 8 site, you have almost certainly had it for at least five years. How many little usability improvements have you considered in that time? Is your design dated? Does your site build reflect modern practices in SEO (Search Engine Optimization), accessibility, and user experience?

With Drupal 9, Upgrading is about more than just security

Out of the box, Drupal 9 offers a significantly more powerful feature set that allows you to build the modern digital experiences you need to compete on today’s web.

At this time, no new features are being added to Drupal 7 or 8. All the innovation is happening in Drupal 9!

Look at Drupal 9’s core features:

All the best of Drupal 8 - Drupal 8 came with many new features such as an improved authoring experience with Layout Builder, an API-first architecture that opens the door to a decoupled CMS, TWIG templating engine for improved design capabilities, and built-in web integrations to name a few. All of these features are carried over to Drupal 9.

Intuitive tools - Improving Drupal’s ease-of-use remains a top priority. Easy out of the box, a new front-end theme, and automatic updates are among the strategic initiatives for Drupal core.

Future upgrades - Upgrades will be seamless for future releases. You will no longer be forced to replatform as new versions are released.

Stay on the edge of innovation - Adopting Drupal 9 will give you access to the latest new feature releases, happening twice a year.

Powerful Distributions - If you’re planning a Drupal 9 project, you don’t have to start with a blank slate. Drupal distributions like Rain CMS can be used as the “starter” for your next Drupal project.

Prepare for an upgrade with a Drupal 9 Audit

Upgrading can certainly come with challenges. As you start planning for an upgrade, a good starting point is performing a readiness audit. An audit assesses the level of effort and provides recommendations for a smooth migration path to Drupal 9.

For more information on our Drupal 9 Readiness Audit, or to start the discussion about transitioning to Drupal 9, please visit our contact page or chat with us right now (see bottom right corner of the page). We would be happy to talk more about your project.

Jun 01 2021
Jun 01

The Drupal Core security release SA-CORE-2021-003 on 2021 May 26 was delayed by a combination of infrastructure issues, causing the release to be delayed to outside the planned release window. This post-mortem blog explains the circumstances, and how the Drupal security team and Drupal Association are collaborating to improve the situation for the next release.

On 2021-05-25, the Drupal Security team published a Public Service Announcement (PSA) about an off-cycle release for Drupal Core. Typically, the Drupal security team will publish PSA's about releases under a few conditions:

  1. If the release is off-cycle (core security releases are normally on the third Wednesday of the month).
  2. If the release is highly critical, in which case the security team will issue a PSA before the normal window.

For SA-CORE-2021-003 the release was off-cycle because it related to a 3rd party library, and so a PSA was issued.

At 17:30 UTC on 2021-05-26 the release process started; it normally takes 20-45 minutes. The PSA had triggered a large amount of "bot"-like traffic to our GitLab infrastructure.

Because of the nature of this particular commit as a change to the integration points of a 3rd party dependency, the diff was exceptionally large. As a result, each request to commit pages in the Drupal GitLab instance was significantly more resource-intensive than average.

The load on the GitLab infrastructure caused the systems that create the packages and update subtree splits to become unstable.

Key points

  • A PSA was sent in advance of the release because it was happening outside the normal core cycle.
  • This PSA resulted in large amounts of traffic to Drupal.org and to our GitLab installation, and in particular, a large number of people and scripts looking at the commit history for new commits - even before the release was published.
  • Because of the nature of this particular commit, the diff was exceptionally large, and each request to commit pages triggered a heavy syntax highlighting operation.
  • Once the commit was pushed to git, even before the packaged release was available, requests from CI systems and Composer increased dramatically.
  • These combined factors resulted in the release packaging process itself being blocked on 500 errors, unable to populate a key db table in the middle of the pipeline process.
  • We mitigated this by blocking the most aggressive automated traffic that was repeatedly attempting to gather commit information.
  • This allowed us to complete the packaging process, and get the packaged release out. 
  • In the meantime, the load on the GitLab instance itself remained high, resulting in slowdowns and 500 errors for several hours, as it gradually cleared. 

Future mitigations: 

We are still evaluating potential mitigations for future release windows, but they may include one or more of the following strategies:

  • Improving log collection from our GitLab servers for better diagnosis of issues.
  • Enforcing static archiving on GitLab commit pages, either temporarily or permanently.
  • Investigating the possibility of separating web traffic from Git traffic for our GitLab instance, or at least to reduce load spikes and provide dedicated git resources to the packaging job. 
  • Dividing read-only traffic between the GitLab primary and the replica, for which we have a plan.
  • Deliberately redirecting all non-cached traffic until packaging completes.

These mitigations should help ensure that the full release process can be completed within the specified window.

Communication

The final aspect of this release that was disruptive was communication. We recognize that communication gets sparse whenever there is an unexpected delay, because the team is fully occupied trying to resolve any issues as quickly as possible. In future, we'll try to designate a person to handle communications, so that if we have to update the expected release time, we do so in a timely manner.

As a long-term goal, we know our international community members would benefit from more peace of mind when updating sites. We hope a combination of the in-development Automatic Updates initiative and the Drupal Steward program for highly critical vulnerabilities can help these international teams. 

Jun 01 2021
Jun 01

DevOps today is considered to be the most effective approach for software development and deployment. Software service providers can now decrease development complexities and continuously deliver top-quality, innovative software. Continuous Integration (CI), Continuous Delivery (CDE), and Continuous Deployment (CD) are some of the key processes that allow speedy deployment and delivery of software without compromising on quality.

CircleCI is one of the simplest continuous integration and continuous delivery (CI/CD) platforms, helping development teams release code rapidly and automate the build, test, and deploy process using Docker containers. It is a reliable platform that works well with languages like PHP, Ruby, Python, NodeJS, Java, and Clojure. CircleCI provides built-in support for parallelism, letting work run across multiple containers. It is considered a scalable option with a good UI.

In this article, we will discuss more about CircleCI, CI/CD and how you can configure your Drupal project on CircleCI.

But First, what is CI/CD?

Drupal Project on CircleCI

Continuous Integration is the practice of merging all developers' code into a shared central repository. CI is a coding philosophy and set of practices that drive development teams to implement changes and check code into the central repository frequently. Because most applications are developed across different platforms and tools, the team needs a way to integrate and validate its changes.

Continuous Delivery, in turn, is the ability to get changes of all types, including new features, configuration changes, bug fixes, and experiments, into production, or into the hands of users, safely and quickly in a sustainable way.

A CI/CD pipeline is a full set of processes that run when you trigger work on your projects or commit your changes on the central repository. Pipelines have your workflows, which coordinate your jobs, and this is all defined in your project configuration file.

Configure your first Drupal project in CircleCI

Step 1: Create a repository in Bitbucket

Create Repository

Step 2: Push your Drupal application code to the repository. Here are some git commands to help you commit and push your changes to Bitbucket.

git init
git remote add origin 
git add 
git status
git commit -m "first commit"
git push origin master

Step 3: Log in to CircleCI using Bitbucket

Login CircleCI

Step 4: Set up your Drupal project

Setup Drupal Project

Step 5: Set up .circleci/config.yml

CircleCI will find and run the config.yml and interact with the code to orchestrate your build, tests, security scans, approval steps, and deployment. CircleCI treats configuration as code, which is why everything is orchestrated through this single file.
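As a rough illustration, a minimal config.yml for a Drupal project might look like the sketch below. The image name, job name, and commands are assumptions for illustration, not this project's actual configuration:

```yaml
version: 2.1

jobs:
  build:
    docker:
      # CircleCI convenience image with PHP 7.4 (an assumed choice).
      - image: cimg/php:7.4
    steps:
      - checkout
      - run:
          name: Install dependencies
          command: composer install --no-interaction --prefer-dist
      - run:
          name: Drupal coding-standards check
          command: vendor/bin/phpcs --standard=Drupal web/modules/custom

workflows:
  build_and_test:
    jobs:
      - build
```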

Our current code structure looks like this:

Code Structure

Workflows Configuration

Workflows support several configurations, some of them are listed below:

  1. Sequential job execution 
  2. Parallel job execution 
  3. Fan-in/fan-out in a repo 
  4. Branch-level filtering for job execution

Fan-in/fan-out in a repo is ideal for more complex build orchestration and for teams who want to run different test suites against commits to get faster feedback. It allows you to run multiple jobs in parallel that lead up to a singular job, and vice versa from a singular job to multiple jobs.
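A fan-out/fan-in workflow can be expressed in config.yml roughly as follows. The job names are hypothetical and their definitions are omitted; the shape of the graph is what matters:

```yaml
workflows:
  build_test_deploy:
    jobs:
      - build
      # Fan-out: both test suites run in parallel once build finishes.
      - unit_tests:
          requires:
            - build
      - behat_tests:
          requires:
            - build
      # Fan-in: deploy runs only after both suites pass,
      # and only for the master branch (branch-level filtering).
      - deploy:
          requires:
            - unit_tests
            - behat_tests
          filters:
            branches:
              only: master
```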

See the following example of a Drupal startup kit:

Built Artifact

Troubleshooting a failed pipeline job

You can easily access a failed build job via SSH in CircleCI. You can rerun just the failed job with SSH enabled and debug it there; there is no need to run the entire pipeline again.

Check out the following example:

You can click the job and see the details of where it has failed and the reason for the failure.

Deploy Over SSH

You will then get the rerun option where you can rerun the pipeline from the failed job. Also, you can debug the same via SSH.

Deploy Dev
Jun 01 2021
Jun 01

EK_jitsi 8.x-1.0-beta5 release

EK_jitsi is an integration of the Jitsi video conferencing solution.

By installing this module you can:

  • create a block to access Jitsi videos from your site

  • join an existing room based on its name

  • create a room with a random name in two clicks

  • insert a Jitsi video field into content pages

 

The latest beta release includes the video content field. It can be added to content such as a basic page or an article. The embedded field starts a video room automatically, using the page title or alias as the Jitsi video URL.

If combined with a timestamp field, you can set a video start date, and the page will show a countdown before the start. This feature can be used, for instance, to send invitations in advance to join a conference at a certain time.

timestamp video

Documentation is available in Drupal and an installed demo is available here.

May 31 2021
May 31

Don’t forget to subscribe to our YouTube channel to stay up-to-date.

Drupal comes with a toolbar which is useful when administering a Drupal site. If you log in and have the correct permissions, you’ll see a toolbar across the top of the page that allows you to access back-end configuration pages.

The Admin Toolbar module extends the functionality of the toolbar and gives you lots of extra features such as drop-down menus, access to cache and cron settings and an autocomplete search.

In this tutorial, you will learn how to install and configure Admin Toolbar and its sub-modules.

Table of Contents

Getting Started

Let’s begin by downloading the Admin Toolbar module.

Run the following Composer command:

composer require drupal/admin_toolbar

Then go to Extend and install Admin Toolbar and its three sub-modules.
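If you prefer the command line, the same modules can be enabled with Drush; the machine names below are the standard ones for these sub-modules:

```shell
drush en admin_toolbar admin_toolbar_tools admin_toolbar_links_access_filter admin_toolbar_search -y
drush cr
```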

Functionality around the toolbar is spread across four modules in total. Here’s a breakdown of what each module does.

Admin Toolbar

Provides an improved drop-down menu interface to the site toolbar.

Admin Toolbar Extra Tools

Adds menu links like Flush cache, Run cron, Run updates, and Logout under the Drupal icon.

Admin Toolbar Links Access Filter

Provides a workaround for the common problem that users with ‘Use the administration pages and help’ permission see menu links they don’t have access permission for. Once the issue https://www.drupal.org/node/296693 is solved, this module will be deprecated.

Admin Toolbar Search

Provides quick search functionality for configuration pages.

Let’s take a look at each module in more detail.

This module is the base module of Admin Toolbar. After installation, it provides drop-down functionality on top of the standard toolbar.

A new version, 3.0.0, was released recently, which requires Drupal 8.8 or above. This version introduced a new configuration form that limits the number of bundles displayed in the drop-down menu, since loading too many can cause performance issues.

To configure this new mechanism, go to:

Admin > Configuration > User interface > Admin Toolbar Tools

The default value is 20, but it can be modified here. Just pay attention to the warning on this page, which says ‘Loading a large number of items can cause performance issues.’ We can leave it at 20 by default to start with.

This sub-module adds extra drop-down functionality to the base module by providing the following additional functions:

  • Additional menu items are available in the drop-down functionality
  • An extra Drupal logo icon will be attached to the left corner of the Admin Toolbar, which provides additional functions in a convenient manner:
  • Index – provides a list of functions indexed in alphabetical order
  • Flush all caches – flush all caches, or individual caches of the system.
  • Run cron – run a cron job immediately, instead of waiting for the default time period of 3 hours.
  • Run update – run a system update process, generally after system or module updates.
  • Logout – provide fast access to the logout function.

This sub-module offers a workaround for users who have the ‘Use the administration pages and help’ permission but no access to the specific pages. Without it, menu items are visible even if the user doesn’t have access to them.

Once https://www.drupal.org/node/296693 is fixed this module will be deprecated.

This sub-module doesn’t require any configuration, just install and you’re done.

An ‘Admin toolbar quick search’ box is provided on the Admin Toolbar. This is useful for beginners, site builders or administrators who are not familiar with Drupal. They can find the functions they are looking for, administrative or configuration pages by searching here.

Summary

Admin Toolbar is a must-have module on any Drupal site. It gives site administrators and builders a better user experience when managing content and configuration.

I especially like the quick search functionality offered by “Admin Toolbar Search”. This makes it easy to search for configuration pages.

Editorial Team

About Editorial Team

Web development experts producing the best tutorials on the web. Want to join our team? Get paid to write tutorials.

May 31 2021
May 31


There’s always a struggle between making the website functional yet user-friendly and intuitive. There are tons of websites with great design but a lack of understandable user experience and necessary features. Equally common is the opposite: all the necessary features, yet no well-thought-out design, with broken elements and different fonts here and there.

The first step that comes to mind is diving deeper into the design theory. Reading books, subscribing to web design courses, attending workshops, lectures, and other events that help you to understand the basics of how to make this experience better. It’s definitely an important contribution to the final result because knowing the main principles of theory will help you to view the process from a different angle.

So what is the right strategy for building a website that finds the perfect mix of performance and elegance? Let’s take a look:

Define what you want to achieve

The first thing on the list when making the website is defining its aim. Think over what type of actions you expect from people who visit the website. Making a purchase, subscribing to a newsletter, or having the possibility to see your portfolio are just a few of the possible options.

Answering this question will help you to establish a clear vision of the type of website you are planning to build and project the behavior of the target user. 

Conduct business analysis and market research

When you are ready with the general assumption, it’s high time to start thinking about the business requirements. To do that, make sure to carefully write down all the user flows. It will help you to outline all the features you need on your website.

For an e-commerce store, they may include purchase, payment, additional items ordering, editing cart, etc. It’s also important to write user stories from the user perspective so that you can assess this or that feature from the point of value it brings. 

Example of a user story: 

“As a user, I want to have a possibility to see the cart content when I come back to the page so that I do not have to spend time adding everything anew after accidentally closing the page”.

When working on the user stories, and thinking over the concept of your website, competitor analysis is a great tool to assess the ways of features’ realization and the whole process behind to get inspiration, see the weak points and develop ideas on how to make it better on your website.

Prototype

Before opening a web editor, try to schematically visualize the website structure. Define the menu items and website navigation to find the most optimal ways of the user journey, and see how the components you plan to include match one another, shift them, and reorganize to find perfect combinations.

Technically, you need only a pen and a piece of paper to start, but with crystalizing your ideas, you may use the prototyping tools like Webflow or Figma to see these prototypes in action. 

One of the popular mistakes made in striving to do better than competitors is stuffing the interface with many different functions. This makes the user experience more complicated and can even have the adverse effect of puzzling the user instead of helping them.

So, the prototyping step is critical in singling out the helpful interface elements and eliminating those that bring distraction from your main goals.

Pro tip: MVP

When in doubt, plan the release with only a critical set of features to see if users behave as you expect and ask for their feedback. It may appear that they need something different that you had in mind.

So this way you can test your hypothesis on real users (it may be your friends or a limited group of users from your target audience). This practice is called an MVP (a minimum viable product) and it helps you to confirm the usefulness of your ideas. 

Design

And finally, the time for the actual design. When you have made it this far, designing the final elements and building the interface will seem much easier. In this step, you may consult various inspiring design resources and platforms where designers share their work.

Later on, from the designs you liked, you may compile mood boards to create something unique from this blend, think about branding, work on the color scheme, logos, and fonts as well as visual assets to give your website a unique style. The result of this step is a set of final mockups that need to be followed by the development.

During the design and development phase, make sure to also consider the accessibility of a website and focus on making its design inclusive, so that the user interface is equally convenient for impaired users and the rest of your audience.

Start the development process

Here the most interesting part starts, as all the graphic assets need to be transformed into working pieces of code so that you may publish the website on the internet and attract new users to it.

At this stage goes the choice of involved programming languages, business logic for the website or application, and its architecture at the same time adhering to the mockups and the best UI/UX practices. And for sure the release.

Depending on the complexity and methods used to build the website (custom-coded structure or a Content Management System), the process may take from a couple of days to several months before the website reaches the user and you are ready to enjoy the results. 

Final thoughts

The process of achieving a simple yet elegant design may not seem easy at first. It certainly takes more time and the involvement of different specialists than starting to build right away once you come up with the idea for a website. But it’s definitely worth the effort, as it helps you approach the design the right way and saves you a lot in the long run.

Even when the website is ready, you will come up with new ideas and features to enhance its functionality. They may come from anywhere: customer feedback, market demands, or business scaling. So new improvements and changes gradually fill the backlog, and you can go through the same steps again, planning future releases with the user in mind.

Author bio

Stewart Dunlop looks after content marketing at Udemy and has a passion for writing articles that users will want to read. In his free time, he likes to play football and read Stephen King. 

May 28 2021

Malcolm Forbes, the publisher of Forbes Magazine, once said, “Diversity: the art of thinking independently together.” This quote sends a strong message that diversity in ideas and opinions can work for the benefit of society at large. When we talk about diversity in open source, we see a similar scenario, where people are encouraged, regardless of their gender, race, age, class or nationality, to express their ideas, innovations and skill sets on a single platform for better performance and results in their respective careers.

Here, I would like to draw your attention towards some of the insights of this vast topic i.e. diversity in open source. With the help of this article, I would like to give you an idea about the community contribution towards open source followed by the various challenges encountered by the community in building a platform welcoming diversity.

Let’s begin by discussing the important role of community in open source.

Power of Community in Open Source

Open source gives you the opportunity to bring up your unique ideas and innovations independently in front of the whole world. You get full freedom to share your skills regardless of ethnicity, socioeconomic status, exceptionalities or geographical area and so on. In this manner, the open-source community is built up by the contributions made by people from every nook and corner of the world. Here, we will have to understand the fact that community can be regarded as the backbone of open source. With the help of the community, one can strengthen the open source ecosystem by active participation and contribution.

Now, let us get a better understanding about the open source community with the help of a contributor funnel created by @MikeMcQuaid below:

Three icons representing people, a horn and tools, with a down arrow, illustrating diversity, equity and inclusion in open source | Source: Open Source Guides

The above diagram shows three categories of participants in the open source community: users, contributors and maintainers. These participants play an important role in the progress of the open source community. You may be surprised to know that every user is a potential maintainer. By making the experience at each stage easy and hassle-free, you encourage every user to take on more responsibility and become an active maintainer of the community.

The question that now arises is: how can you maintain a healthy community? Let me give you an idea of how you can build a trustworthy community that contributes with its best endeavour. The first and foremost thing to do is welcome the participants and make them feel valued in the community. After they step in, give them clarity about your work with the help of a README, which provides full transparency regarding your project. Documenting everything about your project in a README is always a must for better understanding, a point also supported by GitHub's 2017 Open Source Survey. Thereafter, you can let the participants start contributing by handling simple issues, which will boost their confidence and help them get more involved. This further gives you a chance to share ownership of the project with them, making them feel more accepted in the community. If anyone comes up with queries along the way, you should always be ready to answer them at the earliest.

Your community can be a great place for the contributors to learn from each other’s experiences and expertise. Therefore, it is your responsibility to expand your community by sustaining the right people and letting go of the ones who unnecessarily create a toxic environment for everyone. You should stand strong for your community giving equal value to everyone’s opinion and ideas. In this way, you can build a prosperous and healthy community for all. 

What’s next? Let’s now take a look at some of the tweets supporting community contributions to open source.

Contributing to open source is easier for some than others. There’s a lot of fear of being yelled at for not doing something right or just not fitting in. (…) By giving contributors a place to contribute with very low technical proficiency (documentation, web content markdown, etc) you can greatly reduce those concerns. — @mikeal, “Growing a contributor base in modern open source

The truth is that having a supportive community is key. I’d never be able to do this work without the help of my colleagues, friendly internet strangers, and chatty IRC channels. (…) Don’t settle for less. Don’t settle for assholes. — @okdistribute, “How to Run a FOSS Project

While it’s important to create a sense of belonging for the members of an open source community, it is equally important to encourage diverse minds to be a part of the community and become more and more inclusive. Let’s look at a research report conducted by the World Economic Forum that highlighted the importance of diversity and inclusion at workplace. This report explains that when all the employees, managers and the entire organization work under well managed diverse teams, they tend to perform better than homogeneous teams in terms of their usual productivity. Therefore, the practice of diversity, equity and inclusion should be encouraged by the open source community too. This can be seen in the below diagram:

Graphical representation with curves explaining diversity and inclusion at the workplace | Source: World Economic Forum

Challenges while Promoting Diversity, Equity and Inclusion in Open Source Communities

In the past few years, the open source community has witnessed some challenges while promoting diversity, equity and inclusion. Let us discuss some of the most important ones, with the help of some genuine survey reports.

Not enough contributions from female and non-binary coders

According to the 2017 GitHub open source survey, 95% of contributors to open source projects were male, whereas only 3% of contributors were reported to be female (1% identified as non-binary). US Bureau of Labor data says that only 21.2% of professional developers are female.

Here is a diagram from the 2019 DigitalOcean developer survey showing that male participation in open source is comparatively higher than female participation:

Table explaining diversity, equity and inclusion in open source | Source: DigitalOcean

Next, let’s look at another diagram where the participation of young developers, both male and female are shown in comparison to older generations of contributors regarding their experience in open-source.

Table explaining diversity, equity and inclusion in open source | Source: DigitalOcean

From the above diagram we can see that when the younger generation, both male and female, joins the open-source community, they don't face hurdles in terms of guidance or required resources. But the older generation of contributors mainly comprises men, owing to the existing gender imbalance, and they do not tend to contribute to the change the community needs regarding gender inequality. As a result, young female contributors experience injustice and are deprived of opportunities, and so the cycle of male preference in open source continues.

The sad state of women in a male-dominated world

To get a deeper understanding of the reasons for the lack of diversity in open source, let us look into another seminar paper, written as part of the lecture Free and Open Technologies held by Christoph Derndorfer and Lukas F. Lang at TU Wien, Austria, during the winter term 2019/2020. This seminar paper discusses the case of Katie Bouman.

In April 2019, the first visualization of a black hole was revealed. Soon after, another picture went viral: that of a young computer scientist named Katie Bouman. A postdoctoral fellow at MIT and a member of the team running the Event Horizon Telescope, she contributed an algorithm to capture the image. Her team consisted of 200 researchers, but the media made Bouman alone the face of the black hole project. Bouman tried to clarify this confusion, yet she was held up as a role model for being a woman working in a male-dominated field. On the other side, she received immense hatred, and her Wikipedia page was even nominated for deletion. This case gives you an idea of the diversity problem in open source.

Women need to be empowered within open source communities 

Then we have one more report, named “Towards a More Gender-inclusive Open Source Community”, published by the Institute of Development Studies at the Digital Impact Alliance (DIAL). It sheds light on the circumstances of gender diversity and shows how women can be supported and an inclusive community can be built.

The report says that some changes in facilities for women are needed to empower them to contribute actively to open source. Let us see how it can be done:

  • Resources: To excel in one's career aspirations, one should be given the right resources to reach the desired goals. Women in open source should likewise be provided with opportunities to learn the various skills required to enhance their knowledge and contributions, and be given a friendly working atmosphere in which to put forward their ideas and plans in the community.
  • Institutions: This comprises the different social environments one encounters in life: family, educational institutions and society at large. A woman is often less likely to get the necessary support from her family in terms of education and other facilities. When women want to pursue a career while also looking after family life, they are frequently stopped from doing so and left to make the tough decision of choosing one over the other. A change in society's perception is needed to enable women to manage both family and career with dignity and respect.
  • Agency: This comprises a woman's ability to become a good leader and decision-maker. In open source, women receive less encouragement to prove their capabilities than men do, so such injustice has to be abolished to ensure equal and fair chances.

As a part of the above-mentioned report, an interview was conducted among women contributors working as programmers and multi-taskers. Going by the number of women who were active in open source projects at the time of the interview, it is definitely high time to empower and encourage more women to participate in the open source ecosystem.

Table showing the number of women in open source communities: open source organisations and communities in which women contributors were active at the time of interview | Source: DIAL

Time, money, and recruitment from demographically homogeneous communities are obstacles too

In a report by GitHub, in association with The Case Foundation, Nadia Eghbal states that one reason there is less diversity in the open-source community than in the technology sector at large is that open-source contributors initially need time and money to contribute, which at times can be very difficult. The open source ecosystem must be enhanced by the inclusion of diversity, believes Lorena Mesa, an engineer at GitHub and Director and Vice-Chair Elect of the Python Software Foundation. Justin W. Flory, a member of the RIT LibreCorps and the UNICEF open source initiative, said that the early leaders of OSS recruited contributors from homogeneous communities, leading to diversity issues that persist to the present day. He further stated, “I look at what we're going through now in this emergency of emphasis on communities, on diversity inclusion, and I feel like there is no other way to describe it then as a feminist movement in free software.” Learn how good leadership and inclusion within the open source communities can make a world of difference.

Neverending myths about open source have to vanish

Next, Nithya A. Ruff, the Head of Comcast's Open Source Program Office, addresses some misconceptions people have about open source. The first misconception is that you need to be a programmer to join the open source community. This isn't true: open source is also a platform for industries other than technology. The second myth is that the culture and norms of the open source community are easy to navigate. In reality, they aren't; a lot of feedback from new contributors reveals that they don't get the necessary cooperation from community members. The third misunderstanding is that you can't be a casual contributor, but rather have to work on an open-source project for your entire lifetime. The truth is, you can work at your own convenience and be a short-time contributor.

Here are few tweets supporting the above discussions:

Tweet showing a person's face on left and text on right


So, the above myths can also be termed as challenges towards a more diverse open-source community. Therefore, necessary steps should be taken to overcome these adversities.

Open source meritocracy and the significance of diversity and inclusion

Now let's also look at some information depicting the truth about inequality and favouritism within the open-source community. Women have been seen compromising their health: in 2015, heart transplants went to only 20% of women compared to 80% of men. Apple's first Health Tracking App, launched in 2014, omitted the menstrual cycle, which should have been a major concern in any tool for women's health and wellness. Amazon was seen removing its AI recruiting tool, which was designed to select applicants based on resumes submitted to the company over a 10-year period that mostly came from men.

So, apart from the above mentioned challenges of diversity in open-source, you will find many other such discriminations based on socioeconomic status, nationality and so on. In cases like this a question comes to my mind, what if meritocracy was practiced in open source? It could help the community in finding the right participants with merit, intelligence, creativity and skills who were truly deserving. With meritocracy, the open-source could experience diversity, equity and inclusion within the community. This ideology is also supported by huge companies like Google, Facebook, Microsoft and Netflix. More on the impact of large companies on open source here.

Meritocracy does not consider the reality that tech does not operate on a level playing field. — Emma Erwin and Larissa Shapiro, Mozilla.

Statistics show that 78% of companies run their businesses on open source software. Given that ratio, it is a must that all companies working with open source software follow the rules and regulations and include contributors in a way that encourages merit within their community. If not all, then at least some are taking initiatives to give opportunities to people around the globe looking forward to being a part of open source. Here is an example: Outreachy is a program that organises three-month paid internships with free and open source software projects for people who face bias and are underrepresented in the technology industry where they live.

After discussing the challenges of the open source community, let's now look into an unavoidable topic: the Code of Conduct.

Is Code of Conduct Enhancing Diversity in Open Source?

Code of conduct can be a medium of communication among the contributors of the open source community. It helps people to know about the set rules, regulations and practices that are to be followed in order to maintain the professional conduct of an organization. 

For open source projects, the Contributor Covenant was created by Coraline Ada Ehmke, a software developer and open-source advocate. The covenant safeguards the rights of community members against misbehaviour and ill treatment. It is followed by prominent companies like Apple, Google, Salesforce, Linux and Creative Commons, and by many open-source projects as well, and has become an essential part of the open source community. As an example, Eric S. Raymond, one of the founders of the Open Source Initiative, was banned for violating a code of conduct through his misbehaviour. But a CoC alone can't stop the discrimination happening in the field of diversity in open source. There is a need for better authority and management that can strictly look after the matter.

Wonderful Stories from Open Source Communities Embracing Diversity

The open source community can be termed a diverse social movement. It is a community model designed to help aspiring people contribute their ideas, innovations and unique talents to the world. Here we have some examples of extraordinary open source communities:

1. Drupal

Diversity, Equity and Inclusion are valued by Drupal, which has a separate team to monitor their active practice and implementation. Drupal celebrates Pride Month every year by changing its logo on social media platforms. This is observed in order to thank the members of Drupal for welcoming and supporting LGBTQ+ people in their community.

logo of Drupal Association with drop shaped icon on left

When the world was once again reminded, through the George Floyd incident, that the 21st century still witnesses the violent incidents of racism, the Drupal Community joined hands in raising its voice against such brutal attacks. The statement given by Drupal is as follows:

(We stand with people across the globe in condemning racism, racist behavior and all abuses of power. We grieve for the black community, which has endured another unspeakable tragedy in a long history of injustice)

Drupal believes in the ideology of getting better quality results or performance out of diverse working groups in the community. The Drupal Diversity and Inclusion Contribution Team aims at increasing the contributions to the Drupal projects by the people who are underrepresented or devalued in the Drupal community.

logo of Drupal Diversity and Inclusion contribution team in the shape of a droplet in blue

Drupal handled the diversity problem in a very smart way by introducing conferences and workshops. These platforms gave underrepresented groups exposure to speak and open up their views, perceptions and ideas to a larger audience, increasing their confidence. A speaker training workshop with Jill Binder was hosted on September 21 and 28, 2019 by the Drupal Diversity and Inclusion Group to inspire people around the world.

The Community Working Group (CWG) also conducts workshops for community leaders to provide them the necessary tools, resources and knowledge to build a friendly and flourishing community. To get a better idea of conducting successful conferences, the CWG follows feedback from past workshops, like the Teamwork and Leadership workshop conducted at DrupalCon Nashville. Such workshops last two days. The first day was spent discussing the needs and challenges faced by community members, followed by a discussion on utilizing nudges appropriately and building a positive environment within the community. On the second day, they talked about emotional intelligence and ways to resolve conflicts. There was also a case study challenge, where groups were assigned the task of resolving conflicts seen in Drupal or other open-source communities. These workshops proved to be beneficial for the community members.

Drupal encourages healthy conversations to maintain a positive ambience within the community. Even though people try to maintain a peaceful environment, sometimes due to differences in opinions people tend to hurt each other’s feelings. To resolve this issue, an idea was discussed at a Community Working Group (CWG) workshop at DrupalCon Seattle. For the Drupal community, the CWG Community Health team has been working on a communication initiative which comprises a series of de-escalation templates labeled as “Nudges”.

Five icons attached to semicircle representing diversity and inclusion in Drupal


There are five nudges that community members can use when they come across such uncomfortable circumstances within the Drupal community. Every nudge gives clarity about why a certain comment towards a member can be harmful, and provides relevant links such as the Code of Conduct and Values and Principles.

Below are the nudges. Take a look:

  • Inclusive language (gendered terms): Use of gendered language is prohibited. Such language impacts the community negatively, as it encourages gender inequality within the community.
  • Inclusive language (ableist terms): Use of ableist language can hurt the sentiments of people with disabilities. Therefore, members are to refrain from using such language within the community.
  • Gatekeeping knowledge: This nudge can be used when a community member expects a new member to know everything about the project without giving the required guidance, and questions their contributions and ideas. The new contributor should be supported by helping them learn the necessary concepts and topics.
  • Cultural differences: Members from different backgrounds who speak a culturally specific language can be undervalued for their contributions because they are unable to speak the global language common to all. One also has to be very careful while translating, as expressing exactly the same thing in a different language might at times sound rude and uncomfortable.
  • Escalating emotions: Every community demands mutual understanding and proper communication in order to build a healthy environment for all. So, while working together, every member should be given an equal amount of respect and dignity without any discrimination. The Drupal community further takes care by providing resources to members at times of conflict.

Take a look at what Dries Buytaert, the founder of Drupal, shares about the gender and geographic diversity statistics of recent years:

Gender diversity was closely observed by Dries Buytaert, as it is one of the biggest challenges of the open-source community. Slight progress was seen in terms of contributions, but still not enough to be celebrated as a victory for gender equality. Let's take a look below:

Bar graph with blue, green, and black vertical bars showing statistics on contributions by different genders in the Drupal community | Source: Dries Buytaert's Blog

Here we have the top 20 countries from which contributions are made in Drupal. The below diagram says it all:

Bar graph with blue, green, and black vertical bars showing statistics on contributions to Drupal by different countries | Source: Dries Buytaert's Blog

The above figures show that efforts are being made to improve gender and geographic diversity, but they aren't enough. Better practices and strategies are needed to reach the desired results. Learn more about Drupal's role in encouraging diversity and inclusion here.

2. Red Hat

Red Hat is one of the leading open source companies that actively takes the initiative in building an open source community full of innovation and productive technology. They believe in the collective contribution of every skilled participant, irrespective of gender, race, class or nationality. One of the initiatives it took to encourage diversity was the Women in Open Source Award, introduced in 2015 to appreciate and honor women for their outstanding contributions to the open source community.

Red Hat also observed a very sensitive issue: the use of inappropriate language by software programmers, which at times hurt some participants. The use of terminology like 'master' or 'slave' was the major concern. Chris Wright, chief technology officer at Red Hat, confirmed that Red Hat is building a team to examine its documentation, code and content, find improper language, and replace it. Some of the changes are as follows:

  • The master branch will be renamed 'main branch'
  • Whitelist will be renamed 'allowlist'
  • Blacklist will be changed to 'denylist'

3. Mozilla

Mozilla is a community that is open and easily accessible to everyone looking to make meaningful contributions to the vast open source ecosystem, and it is constantly seeking growth. Diversity has long been one of its interests, and it has always taken the necessary steps towards it.

In 2018, the community made its code review process equal for all, without any gender bias. To improve diversity within their staff, they published a blog post in 2019. Here is the progress they made:

  • The share of women in technical roles in the community increased from 13.2 percent to 17.4 percent.
  • Among people managers, the representation of women increased from 36.0 percent to 39.1 percent, and in executive leadership roles it rose from 33.3 percent to 41.2 percent as of December 2018.
  • The representation of minorities rose from 6.9 percent to 7.9 percent in 2018, though the target of 8.9 percent wasn't achieved.
  • Mozilla hired 12.4 percent of its people from underrepresented minority groups, and the share of people of colour rose from 35.2 percent (2017) to 36.2 percent (2018).

In cooperation with Kubernetes and companies like Red Hat, Mozilla gives importance to the enforcement of codes of conduct so that proper communication and professional behaviour can be maintained among people of diverse backgrounds. Most importantly, it raises funds for open source projects.

4. The Linux Foundation

The Linux Foundation focuses on broadening the practice of diversity and inclusion, building a more welcoming space for people from diverse backgrounds and areas of expertise. The foundation commits to constructing a bias-free environment through a few initiatives, as follows:

  • Initiative of Inclusive Naming
  • Advancing diversity and inclusion in Software Engineering
  • Availability of free online courses
  • Diversity and Inclusion in Events
  • Live Mentorship Series
  • LiFT Scholarships

Of all these, advancing diversity and inclusion in software engineering is the one that catches the eye. The Linux Foundation announced the Software Developer Diversity and Inclusion (SDDI) project on 26th October 2020. Through SDDI, the exploration and use of research best practices can increase diversity and inclusion in software engineering.

5. The Apache Software Foundation

The Apache Software Foundation started a project named Apache Diversity and Inclusion, with the mission of building a community that values diversity and inclusion and gives exposure to a wide group of people seeking career and professional advancement.

homepage of Apache diversity showing textual information on their mission and vision for encouraging diversity and inclusion in their project development works


6. The Academy Software Foundation

The Academy Software Foundation stands against the injustice and inequality happening in the open-source community. It aims to remove all the barriers that create hurdles in the growth and development of potential contributors all over the world.

To set your basics right and make your open source project more diverse and inclusive, Open Source Diversity is a good place to start. From identifying projects which support underrepresented groups like WikiProject Women in Red (for increasing the women representation in Wikipedia) to finding mentorship programs like Write/Speak/Code (visibility for women and non-binary coders through thought leadership), Open Source Diversity has it all!

Conclusion

Completely abolishing the challenges to diversity, equity and inclusion in the open-source community is not easy, but there is no end to consistent effort and endeavour. It is important to be fully aware of the situation and work towards the collective goal as a team around the world. Let's never forget: 'Diversity leads to Prosperity'.

May 27 2021

The process of migrating data into a Drupal database from a CSV file can be accomplished through Drupal's built-in Migrate API and three extra contributed modules (Migrate Source CSV, Migrate Plus and Migrate Tools).

This is known as the ETL (Extract - Transform - Load) process, in which data is fetched from one source in the first step, transformed in the second step, and finally loaded to its destination on the Drupal database in the third step. 

This tutorial will explain the creation of 12 book nodes for a library database. Keep reading to learn how!
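Before starting, it helps to picture the source data. A minimal books.csv could look like the following (the file name and column headers here are illustrative, not taken from the tutorial; the important part is that one column uniquely identifies each row):

```
id,title,author,price
1,The Hobbit,J. R. R. Tolkien,9.99
2,Dune,Frank Herbert,12.50
3,Foundation,Isaac Asimov,10.25
```

The `id` column will later serve as the source key that the Migrate API uses to track which rows have already been imported.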

Migrate Data from a CSV File in Drupal 8/9

Step # 1 - Install Drush and the Required Modules

To execute migrations in Drupal, we need Drush. Drush is not a Drupal module, but a command-line interface to execute Drupal commands. To install the latest version of Drush,  

  • Open the terminal application of your system
  • Place the cursor outside the /web directory.
  • Type:  composer require drush/drush

This will install Drush in your project's vendor directory, with the executable at vendor/bin/drush. However, it is cumbersome to type vendor/bin/drush instead of drush each time you want to execute a Drush command.

Drush launcher makes it possible to execute the specific Drush version of each Drupal installation on a per-project basis. 

Why does this make sense?

Each project has different requirements, and specific Drush versions help to avoid dependency issues. Some contrib modules may not work properly with the latest version of Drush.

The specific instructions for OSX and Windows systems can be found here: https://github.com/drush-ops/drush-launcher#installation---phar

For a Linux system:

  • Type the following to download the file named drush.phar from GitHub:

wget -O drush.phar https://github.com/drush-ops/drush-launcher/releases/latest/download/drush.phar

  • Type the following to make the file executable:  chmod +x drush.phar
  • Type the following to move the .phar file to your $PATH and run Drush as a global command: sudo mv drush.phar /usr/local/bin/drush

It is now time to install the required contributed modules to perform the migration.

  • Type the following:

composer require drupal/migrate_tools
composer require drupal/migrate_source_csv

Migrate Data from a CSV File in Drupal 8/9

Once Composer has finished downloading the modules,

  • Open the Drupal backend in your browser
  • Click Extend
  • Enable Migrate, Migrate Plus, Migrate Tools and Migrate Source CSV
  • Click Install

Migrate Data from a CSV File in Drupal 8/9

Step # 2 - More About the ETL Process

The process of Extracting, Transforming, and Loading data can be achieved by defining the migration in a .yml file, and then executing it with a Drush command, so the Drupal database can correctly be populated. 

There are some important facts to notice:

  1. Each one of the steps is performed through Drupal plugins.
  2. You are only allowed to use one plugin in the first step (source definition, i.e. Extract) and one plugin in the last step (destination definition, i.e. Load) of the process. 
    • In other words, you may only fetch data from one source (CSV file, JSON feed, etc.) and store it in Drupal under a single destination, for example a particular content type such as Article, Page, or a custom content type, a user, or a configuration entity.
  3. You are allowed to use as many plugins as necessary to model the data so that it matches the format expected by Drupal.
  4. Drupal has by default a list of source/process/destination plugins that can be used within the definition file.
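These three steps correspond to the three top-level sections of a migration definition file. As a bare skeleton (the plugin names below are placeholders; the real ones are filled in later in this tutorial):

```yaml
id: example_migration
label: Example skeleton
source:
  plugin: some_source_plugin        # Extract: exactly one source plugin
process:
  field_name: source_column         # Transform: any number of process plugins
destination:
  plugin: some_destination_plugin   # Load: exactly one destination plugin
```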

To see a list of all source plugins, 

  • Open your terminal window.
  • Type the following:

drush ev "print_r(array_keys(\Drupal::service('plugin.manager.migrate.source')->getDefinitions()));"

Notice: the csv plugin appears in that list because we have already enabled the Migrate Source CSV module.

To see a list of all destination plugins,

  • Type the following:

drush ev "print_r(array_keys(\Drupal::service('plugin.manager.migrate.destination')->getDefinitions()));"

To see a list of all process plugins,

  • Type the following:

drush ev "print_r(array_keys(\Drupal::service('plugin.manager.migrate.process')->getDefinitions()));"

That list is a little bit longer; remember that you can use as many plugins as you need in the process step.


Step # 3 - Create the Content Type

  • Click Structure > Content types > Add Content type
  • Create the ‘Book’ content type
  • Click Save and manage fields
  • Use the title row of the CSV file to create the fields

 I am going to concatenate the values of the columns edition_number and editor, so I need only one field in the database for this purpose. 

Notice: the field names (machine names) do not have to match the column names of the CSV file exactly, yet it makes sense to at least relate them with similar words - that eases the field mapping in the process step.


 The title field is mandatory for every Drupal node, so there is no need to create a title field. You can leave the body field untouched or you can delete it, it depends on what you are planning to do with your particular content type.

Step # 4 - The Migration Definition File

Source Step

  • Open your preferred code editor
  • Type the following:

id: my_first_migration
label: Migrate books from a CSV source
source:
  plugin: csv
  path: public://csv/library.csv
  header_row_count: 1
  ids: [id]
  delimiter: ';'
  enclosure: "'"

The id of the migration matches the name of the .yml file. 

We are using the csv plugin of the contrib module Migrate Source CSV in the source section. 

The value 1 for the header_row_count option indicates that the column titles are all placed on the first row.

The ids definition provides the unique identifier for each record on the CSV file, in this case, the id column, which is of type integer. Don’t forget the brackets, since the module is expecting an array here.

delimiter and enclosure refer to the structure of the CSV file, in my particular case, it is delimited by “;” characters, whereas strings are enclosed between “‘“ single quotation marks.
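Before writing the definition file, it can help to verify - outside Drupal - that the delimiter and enclosure you declare actually match your file. A quick sketch in Python (the sample rows here are hypothetical, not the actual contents of library.csv):

```python
import csv
import io

# Hypothetical sample mirroring the structure of library.csv:
# ';' as delimiter, single quotes as enclosure.
sample = (
    "id;title;author;editor;edition_number;location;availability\n"
    "1;'The Trial';'Franz Kafka';'Schocken';'2';'Aisle 3';'yes'\n"
)

# DictReader maps each row to the column titles on the first row,
# much like header_row_count: 1 does for the migration.
reader = csv.DictReader(io.StringIO(sample), delimiter=';', quotechar="'")
rows = list(reader)
print(rows[0]['title'])   # The Trial
print(rows[0]['id'])      # 1
```

If the values print with stray quote characters, the delimiter or enclosure you chose does not match the file.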


Notice also the definition of a path. That is where you have to place the CSV file, so Drupal will be able to read the data from it. 

  • Open your terminal application.
  • Type: 

mkdir web/sites/default/files/csv
cp /home/path/to/library.csv web/sites/default/files/csv/
chmod -R 777 web/sites/default/files/csv/

This will:

  1. create a directory called csv inside the public folder of your Drupal installation.
  2. place a copy of the CSV file inside that directory.
  3. make the file accessible to everyone in the system (that includes the source plugin). 

Process Step

This is where we map each one of the columns of the CSV file with the fields in the content type:

process:
  title: title
  field_id: id
  field_author: author
  field_location: location
  field_availability: availability
  field_editor:
    plugin: concat
    source:
      - editor
      - edition_number
    delimiter: ' '
  type:
    plugin: default_value
    default_value: book

The first 5 key/value pairs, in the form drupal_machine_name: csv_column_name, map the CSV records to the database fields without performing any changes.

The field_editor field will be the result of concatenating 2 strings (the values inside the editor and edition_number columns).

The delimiter option makes it possible to set a delimiter between both strings, in this case, a blank space.
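As an illustration only (the real concat plugin is implemented in PHP inside Drupal core), its behavior can be sketched like this:

```python
# Rough mental model of the 'concat' process plugin:
# join the listed source values with the configured delimiter.
def concat(values, delimiter=''):
    return delimiter.join(str(v) for v in values)

# Hypothetical values for the editor and edition_number columns:
print(concat(['Penguin', 3], delimiter=' '))   # Penguin 3
```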

The default_value plugin helps us to define an entity type since this information is not available in the source data.

Destination Step

The final part of this process is the destination step.

destination:
  plugin: entity:node

We are migrating content and each record will be a node.
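If you were migrating, say, taxonomy terms or users instead (as mentioned in the ETL facts above), only this plugin line would change - for example:

```yaml
destination:
  plugin: entity:taxonomy_term   # or entity:user
```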

Step # 5 - Execute the Migration 

  • Click Configuration > Configuration synchronization > Import > Single item
  • Select Migration from the dropdown
  • Paste the code from the .yml file into the textarea
  • Click Import


  • Click Confirm to synchronize the configuration. You will get the message “The configuration was imported successfully”
  • Change to the terminal application.
  • Type the following:

drush migrate:import my_first_migration



 You can now check the content on your site.


 You have learned the basic principles of migrating data from a CSV file to Drupal 8/9. 

As you have already seen, the migration process requires attention to detail, so make sure that you work on a staging server first, because one little mistake could break the whole site. I hope you liked this tutorial. 

Thanks for reading!


About the author

Jorge lived in Ecuador and Germany. Now he is back to his homeland Colombia. He spends his time translating from English and German to Spanish. He enjoys playing with Drupal and other Open Source Content Management Systems and technologies.
May 27 2021

Drupal 10 is targeted for release in June 2022, a little over a year from now. Here’s an update on the current state of development, what Drupal 10 means for your business, and how you can get involved.

As Dries highlighted in the Driesnote at DrupalCon North America 2021, the Drupal 10 initiative is important to ensure that Drupal uses the latest and greatest components that we depend on; ensuring stability and security for all. Dries stressed the importance of maintaining the activity and momentum to push Drupal to be ready in time for the targeted release next year.


What will change?


Some of the third-party dependencies and system requirements will be upgraded to, ideally, their latest stable releases. Code deprecated throughout the Drupal 9 lifecycle will also be removed in the first release of Drupal 10.

  • CKEditor 4 will be replaced with CKEditor 5
  • Symfony 4 will be upgraded to Symfony 5 or Symfony 6
  • jQuery UI will be removed
  • PHP 8.0 will be required (up from PHP 7.3)
  • PostgreSQL 12 with pg_trgm will be required (up from PostgreSQL 10)

The Drupal stock themes “Seven” and “Bartik” will be replaced with “Claro” and “Olivero” respectively. A much needed and welcome change is that Drupal core is dropping support for Internet Explorer 11 in Drupal 10. A high-level overview of all changes coming to Drupal 10 is listed on the drupal.org website.

We already know many of the deprecated APIs, some dating back to Drupal 8, which means module maintainers can start to update their code to rely on the newer APIs.


Current Tooling


To ease the process of upgrading from Drupal 9 to Drupal 10, there are a few tools in place. These tools were already developed to help aid in the upgrade of Drupal 8 to Drupal 9, so might be familiar to many.

  • The Drupal 10 Deprecation Status page shows Drupal 10 compatibility based on all currently confirmed deprecated API uses in Drupal 9 extensions.
  • The Upgrade Rector and the underlying drupal-rector project will be used to automate updates of deprecated code.
  • The Upgrade Status UI or CLI will check your Drupal 10 readiness on your Drupal 9 site (as much as already defined).

The best place to help with the upgrade path right now is updating drupal-rector. The rector 0.10 version update currently underway enables the tool to run on a modern setup. After that, adding support for Drupal 9 is key. Once that is in place, work on covering the top deprecated APIs can start.

The Drupal Project Update Bot also relies on Upgrade Status and drupal-rector to automatically post compatibility fixes to community projects. Automating as much as possible will streamline the upgrade process not only for Drupal core but for all contributed modules.


Get Involved


If you’re interested in helping to get Drupal ready for version 10, feel free to join the Drupal 10 Readiness initiative meetings, held every other Monday. The meetings are always open to all and held on the #d10readiness channel on Drupal Slack. The meetings are text-only, and transcripts of past meetings are available in the meeting archive.

While June 2022 might seem a way off and many website owners are still preparing for the shift to Drupal 9, version 10 will be here before you know it, so plan ahead and let our web development team help you make the move - get in touch with us today.
