Jun 22 2021

Lynette has been part of the Drupal community since Drupalcon Brussels in 2006. She comes from a technical support background, from front-line to developer liaison, giving her a strong understanding of the user experience. She took the next step by writing the majority of Drupal's Building Blocks, focused on some of the most popular Drupal modules at the time. From there, she moved on to working as a professional technical writer, spending seven years at Acquia, working with nearly every product offering. As a writer, her mantra is "Make your documentation so good your users never need to call you."

Lynette lives in San Jose, California where she is a knitter, occasionally a brewer, a newly-minted 3D printing enthusiast, and has too many other hobbies. She also homeschools her two children, and has three house cats, two porch cats, and two rabbits.

Jun 22 2021

https://www.drupal.org/project/xmlsitemap

Warning: We’ve had some trouble getting the XML Sitemap module to work on some websites. In those cases, we’ve used the Simple XML Sitemap module, which works great but lacks some of the XML Sitemap module’s robustness: https://www.drupal.org/project/simple_sitemap.

About the XML Sitemap module

The XML Sitemap module creates an XML sitemap of your content that you can submit to the search engines. An XML sitemap is a specially-formatted summary of each piece of content on your website. You can read more at https://www.sitemaps.org/.

Tip: If you’re running an eCommerce website, this module is of particular importance. We’ve seen catalogs with extensive product listings increase traffic by thousands of visitors per day with a proper XML sitemap.

drupal xml sitemap admin page screenshot

Having an XML sitemap helps your SEO by giving Google a list of the pages that you want them to crawl. While Google can crawl your site without an XML sitemap, bigger and more complex sites can confuse the crawler, so it could miss pages or even whole sections. Without a sitemap, you would have to manually submit every single page of your site to Google, which is ridiculously time-consuming.
 

Install and Enable the XML Sitemap Module

  1. Install the XML Sitemap module on your server. (See this section for more instructions on installing modules, or use the command-line alternative sketched after this list.)

    drupal xml sitemap module installation
     

  2. Go to the Extend page: Click Manage > Extend (Coffee: “extend”) or visit https://yourDrupalSite.dev/admin/modules.
     
  3. Select the checkbox next to XML sitemap, XML sitemap custom, and XML sitemap engines and click the Install button at the bottom of the page.
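
If you prefer the command line, the module can typically be added with Composer and the three submodules enabled with Drush. This is a sketch; the submodule machine names are my assumption based on the module names shown on the Extend page:

composer require drupal/xmlsitemap
drush en xmlsitemap xmlsitemap_custom xmlsitemap_engines -y
drush cr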
     

Permissions

If necessary, give yourself permissions to use the XML Sitemap module.

  1. Click Manage > People > Permissions (Coffee: “perm”) or visit https://yourDrupalSite.dev/admin/people/permissions.

    drupal xml sitemap module permissions screenshot
     

  2. Select the appropriate checkbox for “Administer XML sitemap settings”.
     
  3. OPTIONAL: If you wish for your XML sitemap to include user information, select the appropriate checkbox for “User > View user information”, otherwise go on to the next step.

    drupal xml sitemap view user info screenshot
     

  4. Click the Save permissions button at the bottom of the page.
     

Configure the XML Sitemap module

  1. Click Manage > Configuration > Search and metadata > XML Sitemap (Coffee: “xml”), then click the Sitemap Entities tab or visit https://yourdrupalsite.com/admin/config/search/xmlsitemap/entities/settings

    drupal custom entities settings screenshot
     

  2. Select the checkbox next to each entity you want to show up in Google. You will likely select your Content Types and Taxonomies but you may or may not want to select Comments, User, or other items. If in doubt, include them, as they’re often good content for SEO purposes.
     
  3. Click the Save configuration button at the bottom of the page. After saving, stay on the Sitemap Entities tab and complete the steps in the Configuring Individual Content Type Sitemap Settings section below.
     

Configuring Individual Content Type Sitemap Settings

For each content type you selected on the Sitemap Entities tab, you’ll want to enable their inclusion in the XML Sitemap and weight the content. While not difficult, you’ll want to weight your content differently based upon the type of content.

  1. Click the Configure button next to the first content type. This will display the XML sitemap settings page for that content type.

    drupal article xml sitemap settings screenshot
     

  2. From the Inclusion drop down list, select “Included”. A new set of fields will display.

    drupal article xml sitemap settings screenshot 2
     

  3. Set the Default Priority and Default change frequency drop down lists to the settings specified in the table below for each standard content type.

    Table: Standard Content Type XML Sitemap Settings
     

    Content Type                  Default priority    Default change frequency
    Article                       0.8                 always
    Blog                          0.5                 always
    Basic page (if used)          0.8                 always
    Tags (all taxonomy terms)     0.5                 always
    All other content             0.5                 always
     
  4. Click the Save Configuration button at the bottom of the page.
     
  5. Once you have finished configuring each content type, go to the Settings tab. Here you will see the different content types divided into tabs.

    Note: Frontpage is automatically set to a Priority of 1.0 (highest) - you’ll want to leave this as it is.

    drupal xml sitemap weightings
     

  6. Make sure that the Minimum sitemap lifetime is set to “No minimum”.
     
  7. Make sure that the check box next to Include a stylesheet in the sitemaps for humans is selected.
     
  8. Click the Save configuration button at the bottom to save your changes.
     

Building Your XML Sitemap for the First Time

  1. Select the Rebuild tab or go to https://yourDrupalsite.com/admin/config/search/xmlsitemap/rebuild.

    drupal xml sitemap rebuilt screenshot
     

  2. Within the Select which link types you would like to rebuild block, select all items.
     
  3. Make sure the checkbox next to Save and restore any custom inclusions and priority links is selected.
     
  4. Click the Save configuration button. This will generate your sitemap for the first time. Go to https://www.yourwebsite.com/sitemap.xml to see it.

The XML sitemap is automatically updated when Cron runs. That makes it unnecessary to rebuild your sitemap again unless you run into problems.


Jun 22 2021

Like many other things in life, our code needs some discipline too. I’m pretty sure no developer in the world wants to write unclean code. However, unclean code still exists. It can arise for various reasons: business pressure, lack of documentation, lack of interaction among team members, or incompetence of the developer. Action needs to be taken to make the code cleaner and to avoid the problems it may cause in the future. Fortunately, we can tackle these issues with a disciplined technique for restructuring code called refactoring.

Refactoring is a technique for improving the internal structure of existing source code while maintaining its external behavior. It is a step-by-step procedure to improve code that would otherwise cost us a lot of time and effort.

Code Refactoring

Disadvantages of Unclean Code

Expanding the source code by adding new code to it for a long time without refactoring makes the code messy and unclean, and it makes maintenance and further development of the project very difficult. Unclean code comes with tons of disadvantages:

  1. It increases the maintenance cost of the project.
  2. Adding a new feature takes a lot of time, and sometimes it becomes impossible to add.
  3. It slows down the introduction of new people to the project.
  4. It contains duplicate code.
  5. It does not pass all tests.

There are many other disadvantages, but these problems cost a lot of money and time to the organization.

Advantages of Clean Code

Clean and disciplined code has its own advantages:

  1. It does not contain duplicate code.
  2. It passes all tests.
  3. It makes the code more readable and easier to comprehend.
  4. It makes the code easier to maintain and less expensive.

The advantages of clean code are many. The process of turning messy, unclean code into more maintainable, clean code is called refactoring.

Refactoring Process

Refactoring should be done as a series of small changes, each of which slightly improves the existing code and allows the program to continue to run without breaking it. After refactoring, the code should be cleaner than before. If it remains unclean, there is no point in refactoring; it is just a loss of time and effort. No new code should be added during refactoring to create new functionality. The code should pass all the tests after refactoring.

When to Refactor

Refactoring should be performed when:

  • Adding duplicate code to the project. Duplicate code is hard to maintain, and a change in one place may require updates in many other places.
  • Adding a feature. Refactoring makes it easier to add new functionality.
  • Fixing a bug. Cleaning up the code first often makes the bug much easier to find.
  • During code review. This is the last chance to tidy up the code before the changes are deployed to production.

Dealing with Code Smells

Problems in the code are sometimes referred to as code smells. These are the problems addressed during refactoring, and they are usually easy to find and fix.

For example:

  • Large methods and classes which are very hard to work with.
  • Incomplete or incorrect usage of object-oriented programming concepts.
  • Code that makes it hard to add any changes.
  • Duplicate and deprecated code in the project.
  • Highly coupled code.

Refactoring Techniques

Refactoring techniques are the steps taken to refactor the code. Some of the refactoring techniques are:

1. The Extract Method

// Problem

function printInvoice() {
    $this->printHeader();
  
    // Print details.
    print("name:  " . $this->name);
    print("amount " . $this->getPrice());
}

// Solution

function printInvoice() {
    $this->printHeader();
    $this->printDetails($this->getPrice());
}

function printDetails($price) {
    print("name:  " . $this->name);
    print("amount " . $price);
}

If you have a code fragment that can be grouped together, extract it into a new method and replace the old code with a call to that method. It makes the code more isolated and removes duplication.

2. Extract Variable

// Problem

if (($student->getMarksInMath() > 60) &&
    ($student->getMarksInPhysics() > 60) &&
    ($student->getMarksInChemistry() > 60) && $this->pass)
{
  // do something
}

// Solution

$mathResult = $student->getMarksInMath() > 60;
$physicsResult = $student->getMarksInPhysics() > 60;
$chemistryResult = $student->getMarksInChemistry() > 60;
$hasPassed = $this->pass;

if ($mathResult && $physicsResult && $chemistryResult && $hasPassed) {
  // do something
}

For large expressions like the one in the problem, which are very hard to understand, a separate variable can be created for each sub-expression. It makes the code more readable, but this technique should be applied with caution: in the refactored version every expression is evaluated even if an earlier one is false, whereas the original condition short-circuits.

3. The Inline Method

// Problem

function printResult() {
    return ($this->getResult()) ? 'Pass' : 'Fail';
}
function getResult() {
    return $this->totalMarks > 300;
}

// Solution

function printResult() {
    return ($this->totalMarks > 300) ? 'Pass' : 'Fail';
}

When the method body is more obvious than the method itself, use this technique. Replace calls to the method with the method's content and delete the method itself. It minimizes the number of unneeded methods.

4. Inline Temp

// Problem

$mathMark = $student->getMathResult();
return $mathMark > 60;

// Solution

return $student->getMathResult() > 60;

If there is an unwanted temporary variable which just holds the result of an expression, replace the variable with the expression itself. It helps in getting rid of unnecessary variables.

5. Replace array with object

// Problem

$row = [];
$row[0] = "Virat Kohli";
$row[1] = 70;

// Solution

$row = new Player();
$row->setName("Virat Kohli");
$row->setNumberofCentury(70);

If there is an array holding various types of data, replace it with an object, because fields of a class are easier to document and maintain than an array with mixed data.

6. Parameterized Method

// Problem

function fivePercentRaise() {
}

function tenPercentRaise() {   
}

// Solution

function raise($percent) {
}

If multiple methods perform a similar action on different data, replace them with one method and pass the data as argument(s). It removes duplicate and redundant methods.

7. Separate query from modifier

// Problem

function getInterest() {
    $this->interestAmount = $this->principal * 10 / 100;

    return $this->interestAmount;
}

// Solution

function setInterest() {
    $this->interestAmount = $this->principal * 10 / 100;
}

function getInterest() {
    return $this->interestAmount;
}

If a method returns a value and also changes the object, split it into two methods: one for the modification and another to return the result. It removes the chance of unintentionally modifying an object.

Jun 21 2021

How could we help companies improve their sales funnel and project proposals? Would more case studies help? Would promoted case studies help?

What additional support and training could be provided to help organizations learn and use the Webform module? Would an organization want to sponsor monthly Webform training videos? Would sponsoring and organizing a monthly Webform lunch-n-learn be of interest to anyone?

Are there any additional services that could be offered around the Webform module? Would recommending or troubleshooting Webform add-on modules help organizations succeed?

One thing that I struggle with is determining the right level of transparency needed to make the Webform module’s Open Collective sustainable. For example, I am separately engaging with organizations to do project planning as a consultant, and these engagements rightfully need to remain private and confidential.

A related question is, should I directly be engaging, possibly cold calling, or directly emailing organizations and companies trying to get them to sponsor the Webform module's Open Collective? I am hesitant to do this, but asking someone directly for help can make a huge difference.

Jun 21 2021

Watch Part 3 of our CiviCRM Entity training series.

[embedded content]

CiviCRM Entity is a Drupal module that provides enhanced integration with Drupal, allowing developers, site builders, and content creators to utilize powerful Drupal modules including Views, Layout Builder, Media, and Rules. Drupal fields can be added to CiviCRM entity types. It provides Drupal pages and edit forms for all CiviCRM data such as contacts, allowing usage of familiar Drupal content creation tools and workflows.

During this training we cover:

  • CiviCRM Entity Leaflet Address module
  • Views Proximity Filter
  • Map markers and Marker Clustering
  • Drupal Calendar Views for Activities

Valuable Tools for Better Site Building

  • Entity Browser Module: Provides an extremely configurable selection, entity creation, and edit form display field widget for referencing/creating entities with entity reference fields
  • Bootstrap Layout Builder: Enhances Drupal core's Layout Builder module, adding Bootstrap column layouts and integration with Bootstrap Styles so that site builders can control typography, borders, padding, margin, background color, background video, and much more
  • Bootstrap Styles Module: Provides configurable styles to be used with Bootstrap Layout Builder
  • Bootstrap Library Module: Provides any version of Bootstrap css/js framework for themes that are not bootstrap based
  • Layout Builder Restrictions Module: Allows site builders to limit what blocks are available to add to layouts
  • Inline Entity Form Module: Another entity creation / edit form display field widget for referencing/creating entities with entity reference fields. Can be used in conjunction with Entity Browser
  • Drupal Console: A command line tool to generate boilerplate code for many Drupal features, including Field formatters and Field widgets
  • Field Group Module: Enables grouping fields into fieldsets, collapsible "Details" elements, and vertical and horizontal tabs
  • Chaos Tools Module: Many "goodies" that haven't been included in Drupal core. In our demo it allows us to place entity view mode renderings that use field groups


Jun 21 2021

Below is an extract from my book Drupal 9 Module Development from the early Chapter 5 (out of 18) on Menu Links. I introduce the menu system from a rather theoretical point of view, and catalogue the different types of menu links we have in Drupal.

Before we get our hands dirty with menus and menu links, let's talk a bit about the general architecture behind the menu system. To this end, we will look at its main components, some of its key players, and the classes you should be looking at. As always, no great developer has ever relied solely on a book or documentation to figure out complex systems.

Menus

Menus are configuration entities represented by the following class: Drupal\system\Entity\Menu. I previously mentioned that we have something called configuration entities in Drupal, which we explore in detail later in this book. However, for now, it's enough to understand that menus can be created through the UI and become an exportable configuration. Additionally, this exported configuration can also be included inside a module so that it gets imported when the module is first installed. This way, a module can ship with its own menus. We will see how this latter aspect works when we talk about the different kinds of storage in Drupal. For now, we will work with the menus that come with Drupal core.

Each menu can have multiple menu links, structured hierarchically in a tree with a maximum depth of 9 links. The ordering of the menu links can be done easily through the UI or via the weighting of the menu links, if defined in code.

Menu links

At their most basic level, menu links are YAML-based plugins. To this end, regular menu links are defined inside a module_name.links.menu.yml file and can be altered by other modules by implementing hook_menu_links_discovered_alter(). When I say regular, I mean those links that go into menus. We will see shortly that there are also a few other types.
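
For example, a minimal sketch of such a definition could look like this (the module, route, and menu names here are made up for illustration):

my_module.reports:
  title: 'Reports'
  description: 'Overview of the reports provided by my_module.'
  route_name: my_module.reports
  menu_name: main
  weight: 5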

There are a number of important classes you should check out in this architecture though: MenuLinkManager (the plugin manager) and MenuLinkBase (the menu link plugins base class that implements MenuLinkInterface).

Menu links can, however, also be content entities. The links created via the UI are stored as entities because they are considered content. The way this works is that for each created MenuLinkContent entity, a plugin derivative is created. We are getting dangerously close to advanced topics that are too early to cover. But in a nutshell, via these derivatives, it's as if a new menu link plugin is created for each MenuLinkContent entity, making the latter behave as any other menu link plugin. This is a very powerful system in Drupal.

Menu links have a number of properties, among which is a path or route. When created via the UI, the path can be external or internal or can reference an existing resource (such as a user or piece of content). When created programmatically, you'll typically use a route.

Multiple types of menu links

The menu links we've been talking about so far are the links that show up in menus. There are also a few different kinds of links that show up elsewhere but are still considered menu links and work similarly.

Local tasks

Local tasks, otherwise known as tabs, are grouped links that usually show up above the main content of a page (depending on the region where the tabs block is placed). They are usually used to group together related links that have to deal with the current page. For example, on an entity page, such as the node detail page, you can have two tabs—one for viewing the node and one for editing it (and maybe one for deleting it); in other words, local tasks:

Local tasks take access rules into account, so if the current user does not have access to the route of a given tab, the link is not rendered. Moreover, if that means only one link in the set remains accessible, that link doesn't get rendered as there is no point. So, for tabs, a minimum of two links are needed for them to show up.

Modules can define local task links inside a module_name.links.task.yml file, whereas other modules can alter them by implementing hook_menu_local_tasks_alter().
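
A minimal sketch of two tabs sharing the same base route could look like this (module and route names are made up for illustration):

my_module.overview:
  route_name: my_module.overview
  title: 'Overview'
  base_route: my_module.overview
my_module.settings:
  route_name: my_module.settings
  title: 'Settings'
  base_route: my_module.overview
  weight: 10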

Local actions

Local actions are links that relate to a given route and are typically used for operations. For example, on a list page, you might have a local action link to create a new list item, which will take you to the relevant form page.

Modules can define local action links inside a module_name.links.action.yml file, whereas other modules can alter them by implementing hook_menu_local_actions_alter().
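
A minimal sketch, again with made-up module and route names, could look like this:

my_module.item_add:
  route_name: my_module.item_add
  title: 'Add item'
  appears_on:
    - my_module.item_list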

Contextual links

Contextual links are used by the Contextual module to provide handy links next to a given component (a render array). You probably encountered this when hovering over a block, for example, and getting that little icon with a dropdown that has the Configure block link.

Contextual links are tied to render arrays. In fact, any render array can show a group of contextual links that have previously been defined.

Modules can define contextual links inside a module_name.links.contextual.yml file, whereas other modules can alter them by implementing hook_contextual_links_alter().

For more on the menu system and to see how the twist unfolds, do check out my book, Drupal 9 Module Development.

Thanks for the support.

Jun 21 2021

In this short article I want to introduce you to a new module we recently released on Drupal.org, namely Multi-value form element.

This small module provides a form element that allows you to easily define multi-value elements in your custom forms. Much like what field widgets provide with the Add another item Ajax button.

So how does it work? Easy, really. All you have to do is define a form element with '#type' => 'multivalue' and one or more children, defined like you normally would. For example:

$form['names'] = [
  '#type' => 'multivalue',
  '#title' => $this->t('Names'),
  'name' => [
    '#type' => 'textfield',
    '#title' => $this->t('Name'),
  ],
];

Would give you this:

drupal multi value form elements example

And you can also use multiple form element children if you want:

$form['contacts'] = [
  '#type' => 'multivalue',
  '#title' => $this->t('Contacts'),
  'name' => [
    '#type' => 'textfield',
    '#title' => $this->t('Name'),
  ],
  'mail' => [
    '#type' => 'email',
    '#title' => $this->t('E-mail'),
  ],
];

So as you can see, it's no big deal to use. All the complex Ajax logic of adding extra values is now out of your hands, and you can easily build nice forms.

Check out some more examples of how to use this element, and the options it has, in the documentation above the Drupal\multivalue_form_element\Element\MultiValue class.

This module is sponsored by the European Commission as part of the OpenEuropa initiative and all the work my colleagues and myself are doing there.

Jun 21 2021

Maybe you have banged your head against the wall trying to figure out why, when you add an Ajax button (or any other element) inside a table, it just doesn’t work. I have.

I was building a complex form that needed to render some table rows, nicely formatted and have some operations buttons to the right to edit/delete the rows. All this via Ajax. You know when you estimate things and you go like: yeah, simple form, we render table, add buttons, Ajax, replace with text fields, Save, done. Right? Wrong. You render the table, put the Ajax buttons in the last column and BAM! Hours later, you wanna punch someone. When Drupal renders tables, it doesn’t process the #ajax definition if you pass an element in the column data key.

Well, here’s a neat little trick to help you out in this case: #pre_render.

What we can do is add our buttons outside the table and use a #pre_render callback to move the buttons back into the table where we want them. Because by that time, the form is processed and Drupal doesn’t really care where the buttons are. As long as everything else is correct as well.

So here’s what a very basic buildForm() method can look like. Remember, it doesn’t do anything useful; it just ensures we can get our Ajax callback triggered.

/**
 * {@inheritdoc}
 */
public function buildForm(array $form, FormStateInterface $form_state) {
  $form['#id'] = $form['#id'] ?? Html::getId('test');

  $rows = [];

  $row = [
    $this->t('Row label'),
    []
  ];

  $rows[] = $row;

  $form['buttons'] = [
    [
      '#type' => 'button',
      '#value' => $this->t('Edit'),
      '#submit' => [
        [$this, 'editButtonSubmit'],
      ],
      '#executes_submit_callback' => TRUE,
      // Hardcoding for now as we have only one row.
      '#edit' => 0,
      '#ajax' => [
        'callback' => [$this, 'ajaxCallback'],
        'wrapper' => $form['#id'],
      ]
    ],
  ];

  $form['table'] = [
    '#type' => 'table',
    '#rows' => $rows,
    '#header' => [$this->t('Title'), $this->t('Operations')],
  ];

  $form['#pre_render'] = [
    [$this, 'preRenderForm'],
  ];

  return $form;
}

First, we ensure we have an ID on our form so we have something to replace via Ajax. Then we create a row with two columns: a simple text and an empty column (where the button should go, in fact).

Outside the table, we create a series of buttons (one in this case), literally matching the rows in the table. So here I hardcode the crap out of things, but you’d probably use the same loop as for generating the rows. On top of the regular Ajax shizzle, we also add a submit callback just so we can properly capture which button gets pressed. This is so that on form rebuild, we can do something with it (up to you to do that).

Finally, we have the table element and a general form pre_render callback defined.

And here are the two referenced callback methods:

/**
 * {@inheritdoc}
 */
public function editButtonSubmit(array &$form, FormStateInterface $form_state) {
  $element = $form_state->getTriggeringElement();
  $form_state->set('edit', $element['#edit']);
  $form_state->setRebuild();
}

/**
 * Prerender callback for the form.
 *
 * Moves the buttons into the table.
 *
 * @param array $form
 *   The form.
 *
 * @return array
 *   The form.
 */
public function preRenderForm(array $form) {
  foreach (Element::children($form['buttons']) as $child) {
    // The 1 is the cell number where we insert the button.
    $form['table']['#rows'][$child][1] = [
      'data' => $form['buttons'][$child]
    ];
    unset($form['buttons'][$child]);
  }

  return $form;
}

First we have the submit callback which stores information about the button that was pressed, as well as rebuilds the form. This allows us to manipulate the form however we want in the rebuild. And second, we have a very simple loop of the declared buttons which we move into the table. And that’s it.

Of course, our form should implement Drupal\Core\Security\TrustedCallbackInterface and its method trustedCallbacks() so Drupal knows our pre_render callback is secure:

/**
 * {@inheritdoc}
 */
public static function trustedCallbacks() {
  return ['preRenderForm'];
}
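
For completeness, the snippets above assume something like the following use statements at the top of the form class (the FormBase parent is my assumption; any form class would do):

use Drupal\Component\Utility\Html;
use Drupal\Core\Form\FormBase;
use Drupal\Core\Form\FormStateInterface;
use Drupal\Core\Render\Element;
use Drupal\Core\Security\TrustedCallbackInterface;

// The form class would then be declared along the lines of:
// class MyTableForm extends FormBase implements TrustedCallbackInterface {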

And that’s pretty much it. Now the Edit button will trigger the Ajax, rebuild the form and you are able to repurpose the row to show something else: perhaps a textfield to change the hardcoded label we did? Up to you.

Hope this helps.

Jun 21 2021

Today I want to introduce a new contrib module called Composite Reference. Why the name? Because it’s meant to be used for strengthening the “bond” between entities that are meant to live and die together.

In many cases we use entity references to entities that are not reusable (or are not meant to be). They are just more complex storage vehicles for data that belongs to another entity (for the sake of explanation, we can call this the parent). So when the parent is deleted, it stands to reason the referenced entity (the child) is also deleted because it is not supposed to exist outside of the context of its parent. And this is a type of composite relation: the two belong together as a unit. Granted, not all parent-child relations are or have to be composite. But some can and I simply used them as an example.

So what does the module do? Apart from the fancy name, it does nothing more than make an entity reference (or entity reference revisions) field configurable to become composite. And when the relation is composite, the referenced entity gets deleted when the referencing one is deleted. And to prevent all sorts of chaos and misuse, the deletion is prevented if the referenced entity is referenced by yet another entity (making it by definition NOT composite). This should not happen though as you would mark relations as composite only in the cases in which the referenced entities are not reusable.

And that is pretty much it. You can read the project README for more info on how to use the module.

This project was written and is maintained as part of the OpenEuropa Initiative of the European Commission.

Jun 21 2021

In order to really understand how entity data is modelled, we need to understand the TypedData API. Unfortunately, this API still remains quite a mystery for many. But you're in luck because, in this section, we're going to get to the bottom of it.

Why TypedData?

It helps to understand things better if we first talk about why there was a need for this API. It all has to do with the fact that PHP, compared to other languages, is loosely typed. This means that in PHP it is very difficult to use native language constructs to rely on the type of certain data or to understand more about that data.

The difference between the string "1" and integer 1 is a very common example. We are often afraid of using the === sign to compare them because we never know what they actually come back as from the database or wherever. So, we either use == (which is not really good) or forcefully cast them to the same type and hope PHP will be able to get it right.

In PHP 7, we have type hinting for scalar values in function parameters which is good, but still not enough. Scalar values alone are not going to cut it if you think of the difference between 1495875076 and 2495877076. The first is a timestamp while the second is an integer. Even more importantly, the first has meaning while the second one does not. At least seemingly. Maybe I want it to have some meaning because it is the specific formatting for the IDs in my package tracking app.

Drupal was not exempt from the problems this loosely typed nature of PHP can create. Drupal 7 developers know very well what it meant to deal with field values in this way. But not anymore because we now have the TypedData API in Drupal.

What is TypedData?

The TypedData API is a low-level and generic API that essentially does two things from which a lot of power and flexibility is derived.

First, it wraps "values" of any kind of complexity and, more importantly, gives form to those "values". These can range from a simple scalar value to a multidimensional map of related values of different types that together are considered one value. Let's take, for example, a New York license plate: 405-307. This is a simple string but we "wrap" it with TypedData to give it meaning. In other words, we know programmatically that it is a license plate and not just a random PHP string. But wait, that plate number can be found in other states as well (possibly, I have no idea). So, in order to better define a plate, we also need a state code: NY. This is another simple string wrapped with TypedData to give it meaning—a state code. Together, they can become a slightly more complex piece of TypedData: US license plate, which has its own meaning.

Second, as you can probably infer, it gives meaning to the data that it wraps. If we continue our previous example, the US license plate TypedData now has plenty of meaning. So, we can programmatically ask it what it is and all sorts of other things about it, such as what is the state code for that plate. And the API facilitates this interaction with the data.
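
To make the license plate example a bit more concrete, here is a rough sketch using the typed data manager (in practice you would define a proper data type plugin; the property names here are made up):

use Drupal\Core\TypedData\DataDefinition;
use Drupal\Core\TypedData\MapDataDefinition;

// Define a "US license plate" as a map of two string properties.
$definition = MapDataDefinition::create()
  ->setPropertyDefinition('number', DataDefinition::create('string')->setLabel('Plate number'))
  ->setPropertyDefinition('state', DataDefinition::create('string')->setLabel('State code'));

// Wrap a concrete value so it carries meaning.
$plate = \Drupal::typedDataManager()->create($definition, [
  'number' => '405-307',
  'state' => 'NY',
]);

// We can now ask the wrapped data about itself.
$state_code = $plate->get('state')->getValue();
$violations = $plate->validate();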

As I mentioned, from this flexibility, a lot of power can be built on top. Things like data validation are very important in Drupal and rely on TypedData. As we will see later in this chapter, validation happens at the TypedData level using constraints on the underlying data.

Check out the book to get a deeper understanding of how this API is used to model the entity system in Drupal.

Jun 21 2021

Automated testing is a process by which we rely on special software to continuously run pre-defined tests that verify the integrity of our application. To this end, automated tests are a collection of steps that cover the functionality of an application and compare triggered outcomes to expected ones.

Manual testing is a great way to ensure that a piece of written functionality works as expected. The main problem encountered by most adopters of this strategy, especially those who use it exclusively, is regression. Once a piece of functionality is tested, the only way they can guarantee regressions (or bugs) were not introduced by another piece of functionality is by retesting it. And as the application grows, this becomes impossible to handle. This is where automated tests come in.

Automated testing uses special software that has an API that allows us to automate the steps involved in testing the functionality. This means that we can rely on machines to run these tests as many times as we want, and the only thing stopping us from having a fully-working application is the lack of proper test coverage with well-defined tests.

There's a lot of different software available for performing such tests and it's usually geared toward specific types of testing. For example, Behat is a powerful PHP-based open source behavior testing framework that allows the scripting of tests that mirror quite closely what a manual tester would do—interact with the application through the browser and test its behavior. There are other testing frameworks that go much lower in the level of their testing target. For example, the PHP industry standard tool, PHPUnit, is widely used for performing unit tests. This type of testing focuses on the actual code at the lowest possible level; it tests that class methods work properly by verifying their output after providing them with different input. A strong argument in favor of this kind of testing is that it encourages better code architecture, which can be (partly) measured by the ease with which unit testing can be written for it.

We also have functional or integration tests which fall somewhere in between the two examples. These go higher than code level and enlist application subsystems in order to test more comprehensive sets of functionality, without necessarily considering browser behavior and user interaction.

It is not difficult to agree that a well-tested application features a combination of the different testing methodologies. For example, testing the individual architectural units of an application does not guarantee that the entire subsystem works, just as testing only the subsystem does not guarantee that its individual components will work properly under all circumstances. Also, the same is true for certain subsystems that depend on user interaction—these require test coverage as well.

Testing methodologies in Drupal 8

Like many other development aspects, automated testing has been greatly improved in Drupal 8. In the previous version, the testing framework was a custom one built specifically for testing Drupal applications—Simpletest. Its main testing capability focused on functional testing with a strong emphasis on user interaction with a pseudo-browser. However, it was quite strong and allowed a wide range of functionality to be tested.

Drupal 8 development started with Simpletest as well. However, with the adoption of PHPUnit, Drupal is moving away from it and is in the process of deprecating it. To replace it, there is a host of different types of tests—all run by PHPUnit—that can cover more testing methodologies. So let's see what these are.

Drupal 8 comes with the following types of testing:

  • Simpletest: exists for legacy reasons but is no longer used to create new tests. It will be removed in Drupal 9.
  • Unit: low-level class testing with minimal dependencies (usually mocked).
  • Kernel: functional testing with the kernel bootstrapped, access to the database and only a few loaded modules.
  • Functional: functional testing with a bootstrapped Drupal instance, a few installed modules and using a Mink-based browser emulator (Goutte driver).
  • FunctionalJavaScript: functional testing like the previous one, using the Selenium driver for Mink, which allows testing JavaScript-powered functionality.

Apart from Simpletest, all of these test suites are built on top of PHPUnit and are, consequently, run by it. Based on the namespace the test classes reside in, as well as the directory placement, Drupal can discover these tests and know what type they are.

PHPUnit

Drupal 8 uses PHPUnit as the testing framework for all types of tests. In this section, we will see how we can work with it to run tests.

On your development environment (or wherever you want to run the tests), make sure you have the composer dependencies installed with the --dev flag. This will include PHPUnit. Keep in mind not to ever do this on your production environment as you can compromise the security of your application.

Although Drupal has a UI for running tests, PHPUnit is not well integrated with this. So, it's recommended that we run them using the command line instead. Actually, it's very easy to do so. To run the entire test suite (of a certain type), we have to navigate to the Drupal core folder:

cd core

And run the following command:

../vendor/bin/phpunit --testsuite=unit

This command goes back a folder through the vendor directory and uses the installed phpunit executable (make sure the command finds its way to the vendor folder in your installation). As an option, in the previous example, we specified that we only want to run unit tests. Omitting that would run all types of tests. However, for most of the others, there will be some configuration needed, as we will see in the respective sections.

If we want to run a specific test, we can pass it as an argument to the phpunit command (the path to the file):

../vendor/bin/phpunit tests/Drupal/Tests/Core/Routing/UrlGeneratorTest.php

In this example, we run a Drupal core test that tests the UrlGenerator class.

Alternatively, we can run multiple tests that belong to the same group (we will see how tests are added to a group soon):

../vendor/bin/phpunit --group=Routing

This runs all the tests from the Routing group which actually contains the UrlGeneratorTest we saw earlier. We can run tests from multiple groups if we separate them by a comma.

Also, to check what the available groups are, we can run the following command:

../vendor/bin/phpunit --list-groups

This will list all the groups that have been registered with PHPUnit.

Finally, we can also run a specific method found inside a test by using the --filter argument:

../vendor/bin/phpunit --filter=testAliasGenerationUsingInterfaceConstants

This is one of the test methods from the same UrlGeneratorTest we saw before and is the only one that would run.

Registering tests

There are certain commonalities between the various test suite types regarding what we need to do in order for Drupal (and PHPUnit) to be able to discover and run them.

First, we have the directory placement where the test classes should go. The pattern is this: tests/src/suite_type, where suite_type is the name of the test suite type this test belongs to. And it can be one of the following:

  • Unit
  • Kernel
  • Functional
  • FunctionalJavascript

So, for example, unit tests would go inside the tests/src/Unit folder of our module. Second, the test classes need to respect a namespace structure as well:

namespace Drupal\Tests\[module_name]\[suite_type]

This is also pretty straightforward to understand.

Third, there is certain metadata that we need to have in the test class PHPDoc. Every class must have a summary line describing what the test class is for. Only classes that use the @coversDefaultClass attribute can omit the summary line. Moreover, all test classes must have the @group PHPDoc annotation indicating the group they are part of. This is how PHPUnit can run tests that belong to certain groups only.
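
For illustration, a minimal unit test for a hypothetical my_module would live in my_module/tests/src/Unit and could look like this:

namespace Drupal\Tests\my_module\Unit;

use Drupal\Tests\UnitTestCase;

/**
 * Tests something trivial in my_module.
 *
 * @group my_module
 */
class ExampleTest extends UnitTestCase {

  /**
   * Tests a simple assertion.
   */
  public function testAddition() {
    $this->assertEquals(4, 2 + 2);
  }

}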

So now that we know how to register and run tests, let's order the book and start by looking at unit tests first.

Jun 21 2021

In this short article I wanted to draw your attention to a neat little feature introduced in Drupal 8.8 related to migrations. And I mean the ability to force entity validation whenever Migrate saves destination entities.

As we already know, entities can be validated using their typed data wrappers like so:

$violations = $entity->validate();

This calls the general validation over the entity and all its fields. Very handy.

Something that you may or may not know is that when we create and save an entity programmatically, this validation is not run by default. So, for example:

$entity = Node::create([
  'type' => 'page',
  'title' => 'My title',
]);

$entity->save();

If we had some validation on the title field, it would not run and potentially bad data would be saved in the field. Sometimes this is fine, other times it’s bad, and in many cases it’s critical, as things can break spectacularly. So it's always good to run validation.

When it comes to the Migrate entity destination, there was no validation being run. But with Drupal 8.8, we have a destination plugin option that indicates we want to run the validation when saving the entity. Like so:

destination:
  plugin: 'entity:node'
  validate: true

This will ensure the nodes are validated before being saved by the migration.

Moreover, apart from the configuration on the migration plugin, you can also control this from the entity level if the entity type is defined by you. You can do this by implementing the Drupal\Core\Entity\FieldableEntityInterface::isValidationRequired() method in the entity class. Do note, however, that most entity types do not implement it, nor is this method checked before doing regular entity saves. I expect its use will be extended but so far it is only used within the migration context.
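
For illustration, a minimal sketch of such an override in a custom entity class could look like this (the entity class name is hypothetical):

use Drupal\Core\Entity\ContentEntityBase;

class MyCustomEntity extends ContentEntityBase {

  /**
   * {@inheritdoc}
   */
  public function isValidationRequired() {
    // Always require validation for this entity type wherever the flag is
    // checked (currently the Migrate entity destination).
    return TRUE;
  }

}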

Hope this helps.

Jun 21 2021

Using migrate in Drupal is a very powerful way to bring data into a Drupal application. I talked and wrote extensively on this matter here and elsewhere. Most of my examples use the CSV source plugin to illustrate migrations from CSV-formatted data. And if you are familiar with this source plugin, you know you have to “configure” it by specifying all the file’s column names. Kind of like this (a very simple YAML array):

  column_names:
    0:
      id: 'Unique Id'
    1:
      column one: 'What this column is about'
    2:
      column two: 'Another column'

I wrote many many migrations from CSV files, of various sizes, but it took me years to finally utter the following out loud:

Can’t I just generate these stupid column names automatically instead of manually writing them every time?

As you can imagine, there can be files with 30 columns. And a given migration effort can even contain 20 migration files. So, a pain. I’m not the sharpest tool in the shed, but finally my laziness got the best of me and I decided to write a Drush command I want to share with you today. Also, I have not written anything in such a long time and I feel proper shame.

So what I wanted was simple: a command that I can run, point it to a file and it would print me the column names I just paste into the migration file. No fuss, no muss. Or is it the other way around?

So this is what I came up with.

First, in the module’s composer file, we have to add an extra bit to inform Drush about the services file used for Drush commands. Apparently this will be mandatory in Drush 10.

    "extra": {
        "drush": {
            "services": {
                "drush.services.yml": "^9"
            }
        }
    }

Then, we have the actual drush.services.yml file where we declare the command service:

services:
  my_module.commands:
    class: Drupal\my_module\Commands\MigrationCommands
    tags:
      - { name: drush.command }

It’s a simple tagged service that says that it should be treated by Drush as a command class that can contain multiple commands.

And finally, the interesting bit, the command class:

// Note: the start of this snippet was truncated; the namespace, use
// statements, class declaration and command name are reconstructed.
namespace Drupal\my_module\Commands;

use Drupal\Core\Serialization\Yaml;
use Drush\Commands\DrushCommands;

class MigrationCommands extends DrushCommands {

  /**
   * Prints the CSV header row as YAML for the CSV source plugin.
   *
   * @command migration:csv-headers
   */
  public function csvHeaders($file) {
    $spl = new \SplFileObject($this->getConfig()->cwd() . DIRECTORY_SEPARATOR . $file, 'r');
    $spl->next();
    $headers = $spl->fgetcsv();

    $source_headers = [];
    foreach ($headers as $header) {
      $source_headers[] = [$header => $header];
    }

    $yml = Yaml::encode($source_headers);
    $this->output()->write($yml);
  }

}

What happens here is very simple. We first read the file whose path is the first and only mandatory argument of the command. This path needs to be relative to where the Drush command is called from, because we concatenate it with that location using $this->getConfig()->cwd(). Then we take the values from the first row of the CSV (the header) and build an array in the format expected by the CSV source plugin. Finally, we output a YAML-encoded version of that array.

Do note, however, that the column description is just the column name again since we don’t have data for that. So if you wanna add descriptions, you’ll have to add them manually in the migration file. Run the command, copy and paste and bill your client less.

Hope this helps. Can’t believe I’ve been writing CSV based migrations since like the beginning and I just came up with this thing now.

Jun 21 2021

In this article we are going to explore some of the powers of the Drupal 8 migration system, namely the migration “templates” that allow us to build dynamic migrations. And by templates I don’t mean Twig templates but plugin definitions that get enhanced by a deriver to make individual migrations for each of the things that we need in the application. For example, as we will explore, each language.

The term “template” I inherit from the early days of Drupal 8 when migrations were config entities and core had migration (config) templates in place for Drupal to Drupal migrations. But I like to use this term to represent also the deriver-based migrations because it kinda makes sense. It’s a personal choice so feel free to ignore it if you don’t agree.

Before going into the details of how the dynamic migrations work, let’s cover a few of the more basic things about migrations in Drupal 8.

What is a migration?

The very first thing we should talk about is what actually is a migration. The simple answer to this question is: a plugin. Each migration is a YAML-based plugin that actually brings together all the other plugins the migration system needs to run an actual logical migration. And if you don’t know what a plugin is, they are swappable bits of functionality that are meant to perform a similar task, depending on their type. They are all over core and by now there are plenty of resources to read more about the plugin system, so I won’t go into it here.

Migration plugins, unlike most others such as blocks and field types, are defined in YAML files inside the module’s migrations folder. But just like all other plugin types, they map to a plugin class, in this case Drupal\migrate\Plugin\Migration.

The more important thing to know about migrations, however, is the logical structure they follow. And by this I mean that each migration is made up of a source, multiple processors and a destination. Make sense right? You need to get some data (the source reads and interprets its format), prepare it for its new destination (the processors alter or transform the data) and finally save it in the destination (which has a specific format and behaviour). And to make all this happen, we have plugins again:

  • Source plugins
  • Process plugins
  • Destination plugins

Source plugins are responsible for reading and iterating over the raw data being imported. And this can be in many formats: SQL tables, CSV files, JSON files, URL endpoint, etc. And for each of these we have a Drupal\migrate\Plugin\MigrateSourceInterface plugin. For average migrations, you’ll probably pick an existing source plugin, point it to your data and you are good to go. You can of course create your own if needed.

Destination plugins (Drupal\migrate\Plugin\MigrateDestinationInterface) are closely tied to the site being migrated into. And since we are in Drupal 8, these relate to what we can migrate to: entities, config, things like this. You will very rarely have to implement your own, and typically you will use an entity based destination.

In between these two, we have the process plugins (Drupal\migrate\Plugin\MigrateProcessInterface), which are admittedly the most fun. There are many of them already available in core and contrib, and their role is to take data values and prepare them for the destination. And the cool thing is that they are chainable so you can really get creative with your data. We will see in a bit how these are used in practice.

The migration plugin is therefore a basic definition of how these other 3 kinds of plugins should be used. You get some meta, source, process, destination and dependency information and you are good to go. But how?

That’s where the last main bit comes into play: the Drupal\migrate\MigrateExecutable. This guy is responsible for taking a migration plugin and “running” it. Meaning that it can make it import the data or roll it back. And some other adjacent things that have to do with this process.

Migrate ecosystem

Apart from the Drupal core setup, there are a few notable contrib modules that any site doing migrations will/should use.

One of these is Migrate Plus. This module provides some additional helpful process plugins, the migration group configuration entity type for grouping migrations and a URL-based source plugin which comes with a couple of its own plugin types: Drupal\migrate_plus\DataFetcherPluginInterface (retrieve the data from a given protocol like a URL or file) and Drupal\migrate_plus\DataParserPluginInterface (interpret the retrieved data in various formats like JSON, XML, SOAP, etc). Really powerful stuff over here.

Another one is Migrate Tools. This one essentially provides the Drush commands for running the migrations. To do so, it provides its own migration executable that extends the core one to add all the necessary goodies. So in this respect, it’s a critical module if you wanna actually run migrations. It also makes an attempt at providing a UI but I guess more of that will come in the future.

The last one I will mention is Migrate Source CSV. This one provides a source plugin for CSV files. CSV is quite a popular data source format for migrations so you might end up using this quite a lot.

Going forward we will use all 3 of these modules.

Basic migration

After this admittedly long intro, let’s see what one of these migrations looks like. I will create one in my advanced_migrations module which you can also check out from Github. But first, let’s see the source data we are working with. To keep things simple, I have this CSV file containing product categories:

id,label_en,label_ro
B,Beverages,Bauturi
BA,Alcohols,Alcoolice
BAB,Beers,Beri
BAW,Wines,Vinuri
BJ,Juices,Sucuri
BJF,Fruit juices,Sucuri de fructe
F,Fresh food,Alimente proaspete

And we want to import these as taxonomy terms in the categories vocabulary. For now we will stick with the English label only. We will see after how to get them translated as well with the corresponding Romanian labels.

As mentioned before, the YAML file goes in the migrations folder and can be named advanced_migrations.migration.categories.yml. The naming is pretty straightforward to understand so let’s see the file contents:

id: categories
label: Categories
migration_group: advanced_migrations
source:
  plugin: csv
  path: 'modules/custom/advanced_migrations/data/categories.csv'
  header_row_count: 1
  ids:
    - id
  column_names:
    0:
      id: 'Unique Id'
    1:
      label_en: 'Label EN'
    2:
      label_ro: 'Label RO'
destination:
  plugin: entity:taxonomy_term
process:
  vid:
    plugin: default_value
    default_value: categories
  name: label_en

It’s this simple. We start with some meta information such as the ID and label, as well as the migration group it should belong to. Then we have the definitions for the 3 plugin types we spoke about earlier:

Source

Under the source key we specify the ID of the source plugin to use and any source-specific definition. In this case we point it to our CSV file, and kind of “explain” to it how to understand the CSV file. Do check out the Drupal\migrate_source_csv\Plugin\migrate\source\CSV plugin if you don’t understand the definition.

Destination

Under the destination key we simply tell the migration what to save the data as. Easy peasy.

Process

Under the process key we do the mapping between our data source and the destination specific “fields” (in this case actual Drupal entity fields). And in this mapping we employ process plugins to get the data across and maybe alter it.

In our example we migrate one field (the category name) and for this we use the Drupal\migrate\Plugin\migrate\process\Get process plugin, which is assumed unless one is actually specified. All it does is copy the raw data as it is without making any change. It’s the most basic and simple process plugin. And since we are creating taxonomy terms, we need to specify a vocabulary, which we don’t necessarily have to take from the source. In this case we don’t, because we want to import all the terms into the categories vocabulary. So we can use the Drupal\migrate\Plugin\migrate\process\DefaultValue plugin to specify what value should be saved in that field for each term we create.

And that’s it. Clearing the cache, we can now see our migration using Drush:

drush migrate:status

This will list our one migration and we can run it as well:

drush migrate:import categories

Bingo bango we have categories. Roll them back if you want with:

drush migrate:rollback categories

Dynamic migration

Now that we have the categories imported in English, let’s see how we can import their translations as well. And for this we will use a dynamic migration using a “template” and a plugin deriver. But first, what are plugin derivatives?

Plugin derivatives

The Drupal plugin system is an incredibly powerful way of structuring and leveraging functionality. You have a task in the application that needs to be done and can be done in multiple ways? Bam! Have a plugin type and define one or more plugins to handle that task in the way they see fit within the boundaries of that subsystem.

And although this is powerful, plugin derivatives are what really makes this an awesome thing. Derivatives are essentially instances of the same plugin but with some differences. And the best thing about them is that they are not defined entirely statically but they are “born” dynamically. Meaning that a plugin can be defined to do something and a deriver will make as many derivatives of that plugin as needed. Let’s see some examples from core to better understand the concept.

Menu links:

Menu links are plugins that are defined in YAML files and which map to the Drupal\Core\Menu\MenuLinkDefault class for their behaviour. However, we also have the Menu Link Content module which allows us to define menu links in the UI. So how does that work? Using derivatives.

The menu links created in the UI are actual content entities. And the Drupal\menu_link_content\Plugin\Deriver\MenuLinkContentDeriver creates as many derivatives of the menu link plugin as there are menu link content entities in the system. Each of these derivatives behave almost the same as the ones defined in code but contain some differences specific to what has been defined in the UI by the user. For example the URL (route) of the menu link is not taken from a YAML file definition but from the user-entered value.

Menu blocks:

Keeping with the menu system, another common example of derivatives is menu blocks. Drupal defines a Drupal\system\Plugin\Block\SystemMenuBlock block plugin that renders a menu. But on its own, it doesn’t do much. That’s where the Drupal\system\Plugin\Derivative\SystemMenuBlock deriver comes into play and creates a plugin derivative for each of the menus on the site. In doing so, it augments the plugin definitions with the info about the menu to render. And like this we have a block we can place for each menu on the site.

Migration deriver

Now that we know what plugin derivatives are and how they work, let’s see how we can apply this to our migration to import the category translations. But why would we actually use a deriver for this? We could simply copy the migration into another one and just use the Romanian label as the term name, no? Well yes…but no.

Our data is now in 2 languages. It could be 23 languages. Or it could be 16. Using a deriver we can make a migration derivative for each available language dynamically and simply change the data field to use for each. Let’s see how we can make this happen.

The first thing we need to do is create another migration that will act as the “template”. In other words, the static parts of the migration which will be the same for each derivative. And as such, it will be like the SystemMenuBlock one in that it won’t be useful on its own.

Let’s call it advanced_migrations.migration.category_translations.yml:

id: category_translations
label: Category translations
migration_group: advanced_migrations
deriver: Drupal\advanced_migrations\CategoriesLanguageDeriver
source:
  plugin: csv
  path: 'modules/custom/advanced_migrations/data/categories.csv'
  header_row_count: 1
  ids:
    - id
  column_names:
    0:
      id: 'Unique Id'
    1:
      label_en: 'Label EN'
    2:
      label_ro: 'Label RO'
destination:
  plugin: entity:taxonomy_term
  translations: true
process:
  vid:
    plugin: default_value
    default_value: categories
  tid:
    plugin: migration_lookup
    source: id
    migration: categories
  content_translation_source:
    plugin: default_value
    default_value: 'en'

migration_dependencies:
  required:
    - categories

Much of it is like the previous migration. There are some important changes though:

  • We use the deriver key to define the deriver class. This will be the class that creates the individual derivative definitions.
  • We configure the destination plugin to accept entity translations. This is needed to ensure we are saving translations and not source entities. Check out Drupal\migrate\Plugin\migrate\destination\EntityContentBase for more info.
  • Unlike the previous migration, we also define a process mapping for the taxonomy term ID (tid). And we use the migration_lookup process plugin to map the IDs to the ones from the original migration. We do this to ensure that our migrated entity translations are associated with the correct source entities. Check out Drupal\migrate\Plugin\migrate\process\MigrationLookup for how this plugin works.
  • Specific to the destination type (content entities), we also need to import a default value into the content_translation_source field if we want the resulting entity translation to be correct. And we just default this to English because that was the language the original migration imported in. This is the source language of the translation set.
  • Finally, because we need to lookup in the original migration, we also define a migration dependency on the original migration. So that the original gets run, followed by all the translation ones.

You’ll notice another important difference: the term name is missing from the mapping. That will be handled in the deriver based on the actual language of the derivative because this is not something we can determine statically at this stage. So let’s see that now.

In our main module namespace we can create this very simple deriver (which we referenced in the migration above):

namespace Drupal\advanced_migrations;

use Drupal\Component\Plugin\Derivative\DeriverBase;
use Drupal\Core\Language\LanguageInterface;
use Drupal\Core\Language\LanguageManagerInterface;
use Drupal\Core\Plugin\Discovery\ContainerDeriverInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;

/**
 * Deriver for the category translations.
 */
class CategoriesLanguageDeriver extends DeriverBase implements ContainerDeriverInterface {

  /**
   * @var \Drupal\Core\Language\LanguageManagerInterface
   */
  protected $languageManager;

  /**
   * CategoriesLanguageDeriver constructor.
   *
   * @param \Drupal\Core\Language\LanguageManagerInterface $languageManager
   */
  public function __construct(LanguageManagerInterface $languageManager) {
    $this->languageManager = $languageManager;
  }

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container, $base_plugin_id) {
    return new static(
      $container->get('language_manager')
    );
  }

  /**
   * {@inheritdoc}
   */
  public function getDerivativeDefinitions($base_plugin_definition) {
    $languages = $this->languageManager->getLanguages();
    foreach ($languages as $language) {
      // We skip EN as that is the original language.
      if ($language->getId() === 'en') {
        continue;
      }

      $derivative = $this->getDerivativeValues($base_plugin_definition, $language);
      $this->derivatives[$language->getId()] = $derivative;
    }

    return $this->derivatives;
  }

  /**
   * Creates a derivative definition for each available language.
   *
   * @param array $base_plugin_definition
   * @param LanguageInterface $language
   *
   * @return array
   */
  protected function getDerivativeValues(array $base_plugin_definition, LanguageInterface $language) {
    $base_plugin_definition['process']['name'] = [
      'plugin' => 'skip_on_empty',
      'method' => 'row',
      'source' => 'label_' . $language->getId(),
    ];

    $base_plugin_definition['process']['langcode'] = [
      'plugin' => 'default_value',
      'default_value' => $language->getId(),
    ];

    return $base_plugin_definition;
  }

}

All plugin derivers extend the Drupal\Component\Plugin\Derivative\DeriverBase and have only one method to implement: getDerivativeDefinitions(). And to make our class container aware, we implement the deriver specific ContainerDeriverInterface that provides us with the create() method.

The getDerivativeDefinitions() method receives an array which contains the base plugin definition. So essentially our entire YAML migration file turned into an array. It needs to return an array of derivative definitions keyed by their derivative IDs, and it’s up to us to say what these are. In our case, we simply load all the available languages on the site and create a derivative for each. The definition of each derivative needs to be a “version” of the base one, and we are free to do what we want with it as long as it still remains correct. So for our purposes, we add two process mappings (the ones we need to determine dynamically):

  • The taxonomy term name. But instead of the simple Get plugin, we use the Drupal\migrate\Plugin\migrate\process\SkipOnEmpty one because we don’t want to create a translation at all for this record if the source column label_[langcode] is missing. Makes sense right? Data is never perfect.
  • The translation langcode which defaults to the current derivative language.

And with this we should be ready. We can clear the cache and inspect our migrations again. We should see a new one with the ID category_translations:ro (the base plugin ID + the derivative ID). And we can now run this migration as well and we’ll have our term translations imported.

Other examples

I think dynamic migrations are extremely powerful in certain cases. Importing translations is an extremely common thing to do and this is a nice way of doing it. But there are other examples as well. For instance, importing Commerce products. You’ll create a migration for the products and one for the product variations. But a product can have multiple variations depending on the actual product specification. For example, the product can have 3 prices depending on 3 delivery options. So you can dynamically create the product variation migrations for each of the delivery options. Or whatever the use case may be. A rough sketch of such a deriver follows below.
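
To make that concrete, here is a rough sketch of what such a deriver could look like. Everything in it (the module namespace, the delivery options, and the idea of passing the option to the source plugin) is made up for illustration; it simply mirrors the pattern of the CategoriesLanguageDeriver above.

namespace Drupal\my_commerce_migrations;

use Drupal\Component\Plugin\Derivative\DeriverBase;

/**
 * Hypothetical deriver: one product variation migration per delivery option.
 */
class ProductVariationsDeriver extends DeriverBase {

  /**
   * {@inheritdoc}
   */
  public function getDerivativeDefinitions($base_plugin_definition) {
    // In a real project these would come from configuration or the source data.
    $delivery_options = ['standard', 'express', 'pickup'];
    foreach ($delivery_options as $option) {
      $definition = $base_plugin_definition;
      // Pass the delivery option to the source plugin so each derivative
      // imports only the variations for that option.
      $definition['source']['delivery_option'] = $option;
      $this->derivatives[$option] = $definition;
    }

    return $this->derivatives;
  }

}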

Conclusion

As we saw, the Drupal 8 migration system is extremely powerful and flexible. It allows us to concoct all sorts of creative ways to read, clean and save our external data into Drupal. But the reason this system is so powerful is because it rests on the lower-level plugin API which is meant to be used for building such systems. So migrate is one of them. But there are others. And the good news is that you can build complex applications that leverage something like the plugin API for extremely creative solutions. But for now, you learned how to get your translations imported which is a big necessity.

Jun 21 2021
Jun 21

I’ve been working on a Drupal 7 to 8 migration of content and I encountered in the source body fields a bunch of tables which had a class that styled them in a certain way. One of the requirements was clearly to port the style but improve it using the Bootstrap tables styles and responsiveness. What to do, what to do.

In the source file I was encountering something like this:

...

Which you can argue is not that bad, it only has one class on it and an external style does the job. But obviously it would be better if the stored data didn’t even have that class. So then in the migration I could just kill all the table classes from the body field and apply those stylings externally (to all the tables inside the body field). This is a good first step. But what about Bootstrap?

I needed something like this instead to pick up Bootstrap styles:

...

So to make the tables show up with Bootstrap styles I’d have to step on my earlier point of not storing the table classes in the body field storage. Even if I could somehow alter the CKEditor plugin to apply the classes from the widget. And not to mention that if I wanted responsive tables, I’d have to wrap the table element with a responsive wrapper div on top of that. So even more crap to store. No.

Then it dawned on me: why not just store the clean table elements and then, upon rendering, apply the Bootstrap classes, as well as wrap them into the necessary div? After replying Hold my beer to my self-challenging alter ego, I went and I did. So I came up with this little number (will explain after):

namespace Drupal\my_module\Plugin\Filter;

use Drupal\Component\Utility\Html;
use Drupal\filter\FilterProcessResult;
use Drupal\filter\Plugin\FilterBase;

/**
 * Applies Bootstrap table classes and a responsive wrapper to table elements.
 *
 * The namespace, class name, and plugin ID here are illustrative placeholders.
 *
 * @Filter(
 *   id = "bootstrap_tables",
 *   title = @Translation("Bootstrap tables"),
 *   type = Drupal\filter\Plugin\FilterInterface::TYPE_MARKUP_LANGUAGE
 * )
 */
class BootstrapTablesFilter extends FilterBase {

  /**
   * {@inheritdoc}
   */
  public function process($text, $langcode) {
    // Parse the HTML and collect all table elements.
    $dom = Html::load($text);
    $elements = $dom->getElementsByTagName('table');
    if ($elements->length === 0) {
      return new FilterProcessResult(Html::serialize($dom));
    }

    /** @var \DOMElement $element */
    foreach ($elements as $element) {
      $classes = explode(' ', $element->getAttribute('class'));
      $bootstrap_classes = [
        'table',
        'table-sm',
        'table-striped',
        'table-hover',
      ];

      foreach ($bootstrap_classes as $class) {
        $classes[] = $class;
      }

      // Clone the table and apply the merged class list to the clone.
      $new_element = clone $element;
      $new_element->setAttribute('class', join(' ', array_unique($classes)));

      // Wrap the clone in a responsive div and swap it in for the original table.
      $wrapper = $dom->createElement('div');
      $wrapper->setAttribute('class', 'table-responsive');
      $wrapper->appendChild($new_element);
      $element->parentNode->replaceChild($wrapper, $element);
    }

    return new FilterProcessResult(Html::serialize($dom));
  }

}

So what do we have here? Well, it’s a Filter plugin that you can add to your text format and which processes the text before it’s rendered. And obviously gets cached after.

In the plugin annotation I used the type \Drupal\filter\Plugin\FilterInterface::TYPE_MARKUP_LANGUAGE because this doesn’t seem to be skipped by core anywhere and its purpose is to generate HTML. Then I implement the process() method to achieve my goal. And I do this quite easily with the following steps:

  1. Find all the table DOM elements and return early if none are found
  2. Loop through all the table DOM elements, clone them and apply the classes to the clone
  3. Create a wrapper div DOM element with the Bootstrap responsive class and append the table element clone to it
  4. Replace the initial table DOM element with the new wrapper
  5. Profit

The return of the method needs to be a FilterProcessResult object that contains the HTML in the same format as the method receives it in. So I serialize the DOM object back into an HTML string and use that.

And that’s it. After clearing the cache you can add this to a text format and all the tables found in the content rendered using that format will be Bootstrap ready. And tables are just an example. Imagine all the possibilities you have to turn simple HTML tags into the markup required by your frontend framework of choice. All the while keeping your data clean and not pissing off the developer who will have to migrate that content somewhere else or render it in some other place differently.

Jun 21 2021
Jun 21

Due to globalization and local talent shortages, outsourcing parts of their digital initiatives has become an essential business tactic for many companies. There are various ways and degrees of outsourcing, but the bottom line is that it’s a different kind of project work, with its own unique advantages and drawbacks.

In times of disruption and uncertainty, e.g. ever since the start of the Covid pandemic, however, the benefits of outsourcing become much more pronounced than its drawbacks. In this article, we’ll dive deeper into why outsourcing is a particularly great fit for such periods of time and how businesses that decide to outsource can benefit from it.

Why outsource in times of uncertainty?

Many of the benefits of outsourcing are the same both in normal times and during periods of great uncertainty. The key difference is that, with the latter, these are exactly the features that companies are looking for, and the disadvantageous ones are often already characteristic of the disruption in question (e.g. right now issues with remote collaboration).

Here are 6 factors that make outsourcing strategies particularly well positioned for realizing companies’ digital strategies in times of disruption and uncertainty.

1. Flexibility

At a time of huge business transformations, the ability to be flexible represents a unique competitive advantage. Rather than having to lay off employees if market trends and demands shift, you can instead partner with experts for specific projects and needs, without any organizational overhead beyond that project or need.

Outsourcing allows for exactly that flexibility, whether you’re looking for help in design, marketing, development, or even more specific fields. Especially when partnering with a specialized digital agency, you’re able to get the exact amount of people with the right skills needed, and you’re even able to easily add new members to the team or let certain people off the project, depending on your needs at the time.

2. Speed

Another factor which often pairs with flexibility is speed. If you choose to partner with an agency, you can get a whole team of proven experts in a matter of days, without any need for elaborate vetting or onboarding. The partner agency will have already taken care of that, and their achievements and references will speak for themselves. 

This also enables a faster time to market for the project, which is key to beating out the competition at a time where essentially every competitor company has a well established digital presence, or is just now implementing it. 

Faster time to market will also help in case of sudden market shifts, as you’ll be able to more quickly respond to them and make the most use out of the new trends.

3. Project uncertainty

Rapidly shifting trends entail a higher degree of project uncertainty, with the scope and goals potentially changing significantly throughout the duration of the project. 

As we previously pointed out, outsourcing strategies are particularly well suited for projects whose course is likely to change, thanks to the speed and flexibility offered by this working arrangement. The ability to easily add or remove personnel from the project with no strings attached goes a long way toward alleviating the concerns of scope changes down the line.

4. Cost

Business disruption is often a reflection of a broader, society-wide disruption. In the case of Covid, many companies have had to downsize their staff and/or budget, looking for novel ways to cut costs while maximizing the return on their investment.

With outsourcing, the only cost comes down to the fees for the work done. You’re able to avoid the overhead of hiring, onboarding and managing in-house employees, let alone other expenses such as health insurance, sick days and vacation pay.

When working with an agency, you’re typically also able to get timely replacements in case of an unexpected illness or another emergency. This contributes to the speed and efficiency so needed in times like these, and ultimately results in lower overall cost of the project with greater ROI.

5. High demand for top talent

While we’ve been seeing a rise in unemployment in certain sectors, all fields within the digital sphere are booming. Demand for top talent has never been higher, and digital job-seekers (and even job-holders) have more options and opportunities than ever before.

Acquiring the best in-house personnel thus becomes even more difficult with everyone competing for the top talent. And, even if you manage to hire the perfect person, chances that they switch companies (or decide to start freelancing) are proportionately higher depending on how skilled they are.

This means that it’s easier to find top talent for work on a per-project basis rather than in-house. A lot of digital experts decide to work as freelancers, especially now with the ease of remote work; but don’t forget that there can also be greater risk with freelancers than with an agency that thoroughly vets its employees.

6. Busted myths about remote work

Companies which were reluctant to outsource prior to 2020 due to the necessity of remote and asynchronous communication now no longer have any excuses. Why would you mind collaborating remotely with an outsourced team if you’re already used to communicating remotely with your in-house employees anyway?

The rise of remote work and the appeased concerns of people and companies previously inexperienced in it are making outsourcing a much more compelling approach.

Indeed, the past year and a half has proved that remote and asynchronous collaboration doesn’t come at the expense of productivity and innovation - on the contrary, it may in fact even contribute to the two; which is why a hybrid approach to work will likely become predominant even once we transition out of the pandemic.

So, in most ways, a remote partnership with a digital agency or freelancer will not differ at all from working with a full team of in-house employees - the only points where it will are the benefits we outlined in this article.

Conclusion


While we’re all hoping for the promised end to this tumultuous period, the trends which have arisen during it are unlikely to just be swept away as soon as it comes to an end. The future of work is now, but you still need to make sure you remain future-ready.

Outsourcing is definitely a great approach which will also come with significant long-term benefits if you’re able to find the right partner. If you happen to be looking for a development partner, learn more about the different ways of working with us:

or get in touch with us and we can answer any further or specific questions you may have.

Jun 20 2021
Jun 20

The above stats come from the “8.x, 9.x, 10.x versions.” Meanwhile, 7.x also has 4,906 open issues, although these will only be relevant to Drupal 7 and not carried forward when support for Drupal 7 ends.

The number of open issues at each status has approximately tripled in the past eight years.

Why is the queue growing so fast?

  • We have adopted several large issue queues from contrib. There are 1,660 open core issues against Views/Views UI; 420 are against Migrate and 370 are against Media. Just these three subsystems account for more than 10 percent of the total issues against core.
  • Very few core modules have been removed during the same period. 
  • Even when modules don't have many issues against them specifically, they will be included in cross-core issues (cspell, phpunit deprecations, etc.), adding to the amount of code to be modified and reviewed.
  • Issue queue size itself causes a feedback loop. If you find a bug, it is harder to find among 18,000 issues than 300, so you are more likely to create a duplicate bug report. Every duplicate bug report could have been a review/test of an existing patch instead. If you're looking for an issue to work on, it is harder to know where to start. If you're involved in 100 open issues, it is harder to keep track of them and requires more context switching than if only 10 are open.
  • A lot of people are using core and opening issues against it. We may want to look into statistics for how many issues are opened per year, too.

What is preventing throughput?

The RTBC queue is a significant bottleneck. At many points in the past three to five years, we have reached over 150 RTBC issues. It can then take weeks of sustained effort to get back under 50. The queue very rarely goes under 30 issues. This is despite adding more committers to the team. However, this is not a new situation — there are always several issues RTBCed every day from a very large pool of issues at “needs review,” and the RTBC queue snapshot from 2013 shows 90 issues RTBC.

While there are more committers now than in 2013, there’s also a lot more core committer work that doesn't involve committing patches — minor releases every six months with alpha/beta/rcs, core initiatives, security support for two minor branches at a time plus the LTS branch, and often co-ordinating security releases with upstream projects.

RTBC issues take more time to review since Drupal 8 was released, particularly backward compatibility and backport eligibility decisions. Steps such as assigning contribution credits can double or triple the time it usually takes to commit a trivial issue. Therefore the slower commit rate doesn't necessarily indicate less activity; it also shows more effort spent on each issue.

When the RTBC queue is long, there is a knock-on effect on other contributions. Core contributors will have a threshold of patches they've worked on that are stuck at RTBC, after which other patches may be blocked on those commits, or they may just want to see progress on their RTBC issues before picking up the next ones. This varies for different people, but there is a limit to how many issues anyone can work on simultaneously.

A long RTBC queue also increases the chance that any one individual issue will be hit by either a random test failure or a conflict with another patch that's committed while it's RTBC, artificially taking it out of and returning it to the RTBC queue.

The quality of RTBCs is not always high, meaning one issue can take multiple rounds of feedback from a committer before it can be committed. In many cases, these could have been found by an experienced reviewer earlier in the process.

There are approximately 1,500 commits made to the development branch of core per year, a rate that has been steady since 2017. This is lower than the peaks of 4,300 in 2013-14 (just before the release of Drupal 8) and 2,700 in 2009-10 (just before the release of Drupal 7).

The number of commits doesn’t necessarily indicate the actual throughput of work. For example, code style scoping guidelines encourage a handful of issues with large patches in each, vs. dozens of issues with small patches, which was common in previous years for identical changes to the code base. Additionally, it says nothing about the type of issues being committed — i.e., whether they're critical bug fixes or typos in code comments. However, it does show that the frenzy of commits around Drupal's old big bang major releases has flattened out considerably with the new release cycle.

Possible solutions

  • Add more core committers, especially people who primarily focus on the RTBC queue.
  • Ensure that RTBC queue time is funded for new and existing core committers.
  • Fix or disable random test failures to reduce pointless RTBC queue noise. Some random failures have been around for over a year without resolution.
  • Look at ways to streamline the commit process, although these almost always have trade-offs vs. consistency/quality/issue credit etc.
  • Try to improve the quality of RTBCs by cultivating more experienced reviewers and rewarding more high-quality reviews.

Needs review, needs work, and active

While the RTBC queue is very busy and sometimes congested, the needs review queue is more like a giant car park, with nearly 3,500 issues waiting for review.

When an issue is RTBC, you have a reasonable expectation that a core committer will review the issue within x amount of time and either provide feedback or commit it. This is a small but concrete group of people.

“Needs review” issues can be reviewed by anyone other than the patch author — theoretically a very large number of people—but in practice, a lot fewer than those opening issues or even posting patches.

An issue at “needs review” for more than a week without any feedback is effectively stalled, and 3.5k needs review issues indicate this happens a lot.

The oldest issue in the needs review queue (at the time of this writing) is nine years and nine months old. The most recent patch was posted in 2011, passed 190 SimpleTest tests, and was never retested. The most recent comment was in 2018, suggesting it should be closed as no longer relevant but not actually changing the status. In general, an issue that needs review is either ready to commit, needs more work, or isn’t applicable. The ideal state of the needs review queue, then, would be a shortlist of issues (tens, not hundreds) that have just been marked needs review and then quickly get feedback so they can go back for another round of work or on to commit.

On the other hand, there are many reasons why issues might be “active” or “needs work.” Many issues are legitimately at that state while solutions and patches are being worked on, while many more will be essentially unviable — duplicates, bugs that should have been posted against contrib modules, cannot reproduce, etc.

With a large number of issues against core, many are closely related or duplicates. This often results in split efforts and isolation; for example, people experiencing a bug post a new issue rather than finding an existing issue with a working patch, which they might be able to contribute to reviewing and getting committed. Or two sets of people work on patches in different issues, one ultimately getting discarded after weeks, months, or years when the other is committed.

The Drupal community is extremely reluctant to move issues to won't fix/by design/duplicate/outdated. While marking an issue duplicate usually connects it to another active one, the other statuses are a dead end. On the one hand, this makes the issue queue friendlier, where other projects might immediately reply to a bug report or feature request with “won't fix”; on the other, it makes it hard to know what has a viable route to commit and what doesn't.

For example, there are 549 issues marked “postponed, maintainer needs more info.” The oldest was put into that state 10 years and one month ago.

Possible solutions

  • Retesting of “needs review” issues. Many “needs review” issues haven't had a test run in several years.  
  • Showing related issues via Apache Solr-based related content in addition to the current manual links, and displaying these in a block when viewing issues, may enable duplicates to be identified more quickly.
  • When posting a new issue, we could try to find similar issues based on the contents of the form before it is posted, potentially preventing duplicate issues from being posted in the first place.
  • The above and a few other issues are collected under this meta.
  • Develop more precise criteria for invalid issues to be closed (cannot reproduce, etc.) and deal with them accordingly.
  • Critical and major triage, carried out by core committers, sometimes with subsystem maintainers, effectively reduced the length of both queues (especially as Drupal 8 was nearing release). We could look at reviving this. However, the number of critical issues has remained relatively steady since Drupal 8’s release.
  • Bugsmash has lowered, or at least flattened, the total number of bugs against core, including fixing some very old issues. Could we set up a similar triage initiative for tasks?
     
Jun 19 2021
Jun 19

How do I enable that module for just the live site again?

Kaleem Clarkson

Let’s first start off with understanding what the Configuration Split module does. In simple terms, it allows you to have different site configurations for your different environments. For example, you want to turn off the Views UI module on your live site. Or you want to turn off your Google Analytics module on your local, dev, and test environments but still have it active on your live site.

There are so many great articles on setting up configuration split. Here are a few good ones that I suggest reading to get you caught up to speed.

In other words, say that you set up your config splits a few months ago, but now your client wants to add email support using the SMTP module and you only want this enabled on the live site so you don’t accidentally send out an email from your local site to 1000s of users. PS — I have done it before. Ouch :)

Well in order to add to the live configuration you need to enable the live config split. The natural first step is to go to the config-split UI and disable the develop configuration and enable the live config. Click Click Click. But nothing. Such a simple thing but why in the heck are you not able to enable the live configuration?

Ohhh snap, that settings.php file got me again.

One thing all of these tutorials have in common is that they show you how to set the active and inactive config-splits in your settings.php. For some servers, the code is slightly different but the idea here is that Drupal needs to somehow know what environment you are on in order to import the right configuration.
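
As an illustration, the relevant part of settings.php typically looks something like the sketch below. The environment variable and the split machine names (live and develop) are assumptions here; use whatever your hosting platform and config splits actually provide.

// settings.php (sketch): activate the right config split per environment.
$is_live = getenv('ENVIRONMENT') === 'live';

// Config split status can be overridden like any other configuration.
$config['config_split.config_split.live']['status'] = $is_live;
$config['config_split.config_split.develop']['status'] = !$is_live;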

Yep! That darn settings.php file. The reason you cannot activate the live configuration settings in the UI is that the settings.php file overrides the UI click clicks. So you need to go to your file and either temporarily comment out that code or just reverse the logic. So here is what I do.

  1. Reverse the logic in the settings.php file. For me, I just have to switch out a ! in my code so that my dev and live environments are reversed.
  2. drush cr — Cache rules everything around me — Clear it
  3. Enable my new SMTP module.
  4. Edit the live config split and check the box next to live
  5. drush cex — configuration export
  6. git checkout settings.php so that my file reverts back to my old settings
  7. drush cr — you know the drill, clear that cache
  8. Uninstall the SMTP module
  9. drush cex — to export the uninstalled module settings to your dev config
  10. git commit my changes

I then check my settings by doing a drush csim live. This command imports the settings from the live config. You can also check your config directories manually or with a git status to ensure there are new files in both of your dev and live directories.

Jun 18 2021
Jun 18

Drupal 7's end of life is scheduled for November 28, 2022. Up until then, the Drupal Security Team will continue to provide patches to Drupal 7 core and contributed projects should any security threats arise. After that point, however, the Drupal Security team will no longer support Drupal 7.

Is it Safe to Stay on Drupal 7?

If your organization is currently running Drupal 7, you’re faced with a decision on whether to upgrade to Drupal 9 or not.

Crafting a business case can help with your decision because it contains projections for initial costs and ongoing costs for making the upgrade investment vs. maintaining the status quo, as well as projections for revenue and savings. The business case exercise can further forecast the break-even point for your upgrade investment. However, in the case of future security threats, we can’t be confident of what the future ongoing costs will be, because we can’t predict when such security threats will arise, nor will we know the severity of them. 

What we can do, however, is make organizations that use Drupal aware of the risks of not upgrading. There are three general areas of risk: security, integrations, and functionality.

Security Risks

After Drupal 7 reaches the end of life, whenever security issues are identified in core or contributed modules, there won't be very much support to fix them. Site maintainers could find themselves in the position of having to spend a lot of time searching for security holes and fixing them. This risk gets compounded if there are a lot of contributed modules in your Drupal configuration. 

There will be a few agencies that will offer the service of maintaining your Drupal 7 platform post end-of-life. This will help greatly to secure your site if you’re willing to invest in hiring such an agency. One of their main tasks is to backport fixes for core and contrib issues. These fixes will of course not be included in the D7 upgrade path because there won’t be an upgrade path at all. As a point of reference, after Drupal 6 had reached its end of life, there wasn't a disproportionate number of security fixes needed for its core or contributed modules. Still, the risk is not zero. Every aspect of a Drupal application must be considered to ensure there are no security gaps.

Another aspect of taking this path is that much of the time maintaining a site like this is spent managing and mitigating security risks rather than making improvements or implementing new features. For a good many developers, this is not rewarding work. 

In the history of previous Drupal security fixes, some have been pretty small -- one-line changes that take an hour to review and fix -- while others have taken days or even weeks of development time to analyze and produce a solution for. 

An advantage of choosing to upgrade a Drupal 7 site to Drupal 9 is that you gain all of the advantages of security improvements that were included in Drupal 8 and each subsequent feature upgrade. In this blog post, Peter Wolanin of Acquia details some significant security improvements included in the initial Drupal 8 release. Drupal 9 has additional advantages such as support for PHP 8.0.

Integration Risks

Certainly, security risks will come along, but another risk area in maintaining the status quo is that key integrations will eventually start to fail. For example, your Drupal environment may be integrated with another platform, and a key API on that platform is getting deprecated. Because the Drupal module that connects to it is no longer being actively maintained, you (or an agency you hire) will have to update the module or write a new custom module to keep integration working.

Functionality Risks

As the Drupal community continues to diminish the amount of activity on Drupal 7 core and contributed modules, especially after end-of-life, you basically lose those “free” updates. This is especially so with bug fixes. This forces you to either live with them or to fix them, or again, hire an agency to do it. If you do hire someone, that person won’t be as familiar with the project as one of the maintainers would be, so you’d have to factor in that additional investment. Indeed, some of these risks can be so critical that you end up rewriting large chunks of code to deal with them.

Not only do you miss out on the security improvements of Drupal 8/9 discussed above, not upgrading means you're missing out on many other improvements. Drupal 8 and 9 are built around a modern PHP stack including features such as Composer compatibility, Symfony components, modern OOP coding techniques, and more. While Drupal 7 has served our community well, it is not built upon the latest PHP libraries and development workflows that developers expect. Drupal 8/9+ site owners also have the advantage of further enhancing their security posture by adding the Guardr security distro or module. While Drupal 8 and 9 have good security features, Guardr adds additional community-vetted modules and settings which meet industry security requirements.

Talk To Us

As already mentioned, there are too many future unknowns to create a blanket business case for an upgrade investment. However, we can craft a business case specific to you based on the complexity of your existing Drupal 7 solution. We will factor in the number of modules you’re using, their complexity, the nature of your integrations with external systems, and more. We at Mediacurrent have performed this type of analysis for some of our clients to help them with their technology investment decisions and can do the same for you. Please contact us to learn more!

Jun 17 2021
Jun 17

If you've installed or updated Acquia’s DevDesktop lately, you've seen this message:

(Screenshot: the DevDesktop end-of-life message)

And so you know that DevDesktop is approaching end of life.  

In this video, I'm going to give you two alternatives to Acquia’s DevDesktop For Local Drupal Development. You know I've used this software for years now and introduced literally thousands and thousands of people to Drupal using Acquia’s DevDesktop. It's a shame that it's going away, but we've got alternatives.

Let's dive in.

"Hi, I'm Rod Martin, and this is OSTips from OSTraining.

There are two excellent alternatives to DevDesktop:

  1. MAMP PRO: MAMP and MAMP PRO are available for both Mac and Windows.
  2. DDEV

The first option, MAMP PRO, offers a lot more flexibility, but it's $89 USD, and an upgrade from any previous version is $39.50.  The Windows version is $79 USD. Here's what MAMP PRO looks like when it's up and running.

(Screenshot: MAMP PRO up and running)

If you prefer to have a graphical user interface, also known as a GUI, then MAMP is the one you want to go with. Adding a new site is as simple as clicking the plus icon, just like in DevDesktop. The difference here is that you won't get some of the tools that DevDesktop gives you built in. Composer and Drush both need to be added manually, but there is excellent documentation on doing just that.

MAMP PRO allows you to select:

  1. PHP versions
  2. Apache version 
  3. Nginx

You can have SSL certificates, you can create as many databases as you need, and you can even provide a public URL or transfer the site to the cloud. Another nice thing with MAMP is it very easily allows you to update your php.ini files and set up OPcache, which always shows up as a warning when installing Drupal if it's not present, and the interface just gives you a lot of tools to work with right out of the box. Here it is, Drupal on MAMP:

(Screenshot: Drupal running on MAMP)

This was installed using Composer, but as I mentioned, you'll need to install Composer manually if you're going to use MAMP. There's an excellent tutorial by www.div.digital on how to install Homebrew, Composer, and Drush on your Mac or for Windows.

The second option, DDEV, is absolutely amazing, but it doesn't give you any graphical user interface. It's all command line based. One of the best tutorials I've seen for getting DDEV up and running is the local web development book by Mike Anello that is included with an OSTraining subscription, or you can order it straight from amazon.com.

It is a really terrific book that covers:

  1. The basics of DDEV.
  2. Getting a new site up and running with DDEV, either Drupal or WordPress.
  3. How you can use Drush, Composer, and other command line tools, including Git, right within DDEV.

It's a fantastic book, and I strongly recommend it. There is also terrific documentation at DDEV.com.

As you can see, I have a site up and running with DDEV as well, installed via Composer in just a few minutes. To give you perspective, both of these took me about 40 minutes to get set up and ready using the instructions I found either online or in the book.

(Screenshot: a Drupal site running on DDEV)

All right, so not quite as convenient as DevDesktop, but two alternatives that will give you long-term stability in your localhost Drupal environment. I hope this has been helpful. 

If you'd like additional training using Mike Anello's book, OSTraining launched a new video course titled "DDEV Explained."  Be sure to check that out.

Thanks for tuning in today.  This is OSTips from OSTraining, and I'm Rod Martin."



About the author

Rod holds two masters degrees and has been training people how to do "things" for over 25 years. Originally from Australia, he grew up in Canada and now resides just outside Cincinnati, Ohio.
Jun 16 2021
Jun 16


Migrations from Drupal 7 can be as varied and diverse as humanity itself. Goals, audiences, servers, content models, and more all come together to form a unique fingerprint. No two migrations are ever really the same.

But despite the uniqueness of each, there are some commonalities. You can take steps to ensure your migration will be a success, no matter how complex or simple.

Map out your Source

You need to know where you are coming from. This is how you begin to determine the length of the journey and how many supplies you’ll need along the way. 

Start mapping out your current Drupal 7 website.

Fields and content types

Make a list of all of your current fields and the content types they are attached to. Include the field type, machine name, and the complexity of the data stored in that field. You can come up with your own complexity scale that makes sense to your team. Some examples:

  • Basic - A single-value textfield
  • Intermediate - A formatted textfield. These fields can range in complexity depending upon what is allowed. Some allow only links and basic formatting, which are not complex, but others might allow embedded snippets and images. Move the scale accordingly.
  • High - Multi-value fields with multiple columns of data almost always need extra work to migrate. Field Collections and Paragraphs might fall in this category.
  • File - Sometimes, categorizing complexity by the type of data can be helpful. It depends on your eventual workflow or how many fields of a certain type you may have.

Spend some extra time thinking about your entity reference fields. More on this later.

For a quick start on getting all of your content types and fields listed in a spreadsheet, check out the Migration Planner module. View a sample spreadsheet created by the module.
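
If you would rather pull that inventory straight out of the site, a short Drush php-script along these lines can dump the raw list for you. This is only a sketch using standard Drupal 7 field APIs; pipe the output into a CSV to seed your spreadsheet.

// Drupal 7: list every field instance per content type with its type and cardinality.
// Run with: drush php-script field_inventory.php (file name is illustrative).
foreach (node_type_get_types() as $type) {
  foreach (field_info_instances('node', $type->type) as $field_name => $instance) {
    $field = field_info_field($field_name);
    print implode(',', array($type->type, $field_name, $field['type'], $field['cardinality'])) . "\n";
  }
}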

Taxonomy

Taxonomy vocabularies can also have fields attached, so you’ll want to make a list similar to what you have done with content types and their fields. But taxonomies have their own challenges.

Often, they help organize a Drupal site. By design, they are referenced by other entities and can have all the migration issues that come with that.

Taxonomy terms can also have their own hierarchy. If you have terms that are parents of other terms, find a way to record this relationship. Make a note of the depth.
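
A low-tech way to record that hierarchy is to dump every vocabulary as a tree. The sketch below leans on Drupal 7's taxonomy_get_tree(), which already reports the depth and parent IDs for each term; treat it as a starting point, not a finished script.

// Drupal 7: print each term with its vocabulary, depth, and parent term IDs.
foreach (taxonomy_get_vocabularies() as $vocabulary) {
  foreach (taxonomy_get_tree($vocabulary->vid) as $term) {
    print implode(',', array(
      $vocabulary->machine_name,
      $term->tid,
      $term->name,
      $term->depth,
      implode('|', $term->parents),
    )) . "\n";
  }
}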

Files

Drupal 7 has many ways to manage files, so you need to document exactly what you have. You are likely using the Media and File Entity modules. Document which file types you are using, how many of each file type you have, and get an estimate of the storage space being used. As you did with content types and taxonomies, list the fields for each file type and categorize them.

If you are using native file handling, you will still want to map out what you have. Your developers will thank you.

Views

There is no way to migrate your Views reliably, so each will need to be recreated on your new site. Each one represents some work. Some might rely on extra business logic in custom code or on plugins provided by contrib modules. Do an audit and determine any dependencies.

Architecture

You will start to map out the contours of this as you take stock of your content types, taxonomies, and fields, but other things make up your overall architecture.

  • Menus - Not just the main menu and footer menus. Make sure you pull in contextual menus that might be embedded conditionally or placed via a block.
  • Other entities - Do you have other content entities besides nodes and taxonomies? Don’t forget about them. These could be custom components or something like Paragraphs.
  • Hosting - Detail out your current hosting platform. Resources, apps, integrations, etc. This might be something else you need to upgrade to ensure the smooth running of Drupal 8/9. Or you might be migrating onto a different hosting provider, and if that is the case, you want to at least keep parity with your current solution.

Pay special attention to your entity reference fields that you identified in previous steps. These can mask hidden domain knowledge and act as pillars for your entire site architecture. Dig into them. Make sure you know their purpose.

Contrib modules

Make a list of all Drupal 7 contrib modules you have installed. Contrib modules can represent a lot of effort in a migration, so you’ll want to explore each one in-depth. Is there a Drupal 8/9 equivalent? If so, is the new module stable? How much work is left? 

Alternatively, a module could have been rolled into Drupal core. If so, you’ll want to see if there are any differences so you can take those into account during the migration.

If no current module exists, what is the estimated level of effort to recreate this functionality?

Custom functionality

Make a list of all of your custom modules. Be sure also to make a note of business logic that might be in your custom themes. A lot can be embedded in template files.

You should note the complexity of the functionality and what it is used for. You’ll want to check the Drupal 8/9 contrib ecosystem to see if any modules can replace this custom functionality. This might entail conversations with your stakeholders and team about priorities and goals. A contrib module might replace 80% of your custom module, and you need to know if that is acceptable or not.

If you have limited resources, you might also want to mark custom functionality on a spectrum of “mission-critical” to “nice-to-have.”

These conversations will continue as you map out your destination.

What do you not want to migrate?

Now is a good time to do a site audit. As you are compiling all of this information and start having conversations around your goals and content model, come up with criteria for what shouldn’t be migrated. The less you have to migrate, the less work you’ll need to undertake.

Is there a cutoff date for old articles? A certain taxonomy that is no longer used? Are some fields for cosmetic purposes, and they won’t be relevant for your new design? Do all users need to be migrated over? What about unpublished content? Or revisions? Cutting out revisions will reduce the size of your database drastically, but then editors will not be able to view past changes. 

This is where a little bit of work can pay big dividends later.

Some additional questions to ask

  • How will you handle files? Files can be transferred during the migration itself, but sometimes it’s better to do one big transfer, so your migrations run more quickly. This usually means some cleanup of unused files beforehand. If you have a lot of files not managed by Drupal, moving the filesystem in bulk might be necessary.
  • Will you keep the same paths for your content or change them? If your paths are changing, be sure you have a good 301 redirect solution in place.

Map out your Destination

You need to know where you are going. This can be done in parallel to mapping out your source because what you determine here will help answer some of the questions above.

Start by figuring out the literal destination for your new Drupal 8/9 site: the servers and hosting. Whatever your setup, you’ll need to plan some way for it to access your Drupal 7 site. This can be as simple as copying the database and files over to the new platform.

But if your site is huge, or you want to do a continuous migration until a final cutoff date, you’ll want access to the live production data. Or at least a replica that stays up to date.

You’ll also want to make plans for a development or testing environment that will mimic your final destination server as much as possible.

Evaluate your content model

You can keep everything exactly the way it is. Same content types. Same fields. Same everything. Drupal will do a good job of re-creating these for you on the new site. If you are limited in time and budget or your site is not complex, this might be the best way to go.

Be warned. If you have relied on the body field for creating complex content, you will still have a lot of work to do, even if you don’t plan on reworking your content model. And regardless, you still need to understand the gaps created by your custom functionality and reliance on certain contrib modules.

For these reasons and many others, a migration is a good opportunity to rework your content model to better align with your strategy and the needs of your audiences. Especially editors. Don’t forget them! It’s also a good opportunity to structure your content for the needs of a “publish everywhere” ecosystem.

Run some workshops that clarify your priorities. Document a new content model that takes those priorities into account. Add additional columns to the spreadsheet you created for your source content types and fields. These columns should contain the machine names of your new content types and fields.

Sometimes you’ll find a need to consolidate content types and/or fields. Two content types funnel into one new content type. Or three different fields are cleaned up into one single field. That should be expected. And Drupal’s migration tools make it simple to do this.

Clarifying your new content model will also help answer questions about your Drupal 7 site, like what content you need to leave out of the migration.

WYSIWYG cleanup

Your body fields (and other formatted text fields) might have accumulated a lot of cruft in the years your Drupal 7 site has existed. Different ways of displaying images, iframes, custom embed codes, etc. You’ll have to deal with each of these.

Cleaning up a formatted body field can balloon into an entire project by itself. Be sure you inventory everything that occurs in a body field, document what needs to be migrated as-is, document what needs to be transformed, and document what can be ignored or deleted. If you are changing your text filters and formatting rules, you’ll want to make sure your content meets those new requirements.
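
For a quick first pass at that inventory, you can query the body field table directly for the markup you care about. Below is a rough Drupal 7 example that counts nodes embedding iframes; the table and pattern assume a standard body field, so adjust both to match your own fields and embed codes.

// Drupal 7: count nodes whose body contains an <iframe>, as one example of
// markup worth inventorying before the migration.
$count = db_query(
  "SELECT COUNT(DISTINCT entity_id) FROM {field_data_body} WHERE body_value LIKE :pattern",
  array(':pattern' => '%<iframe%')
)->fetchField();
print $count . " nodes contain an <iframe> in the body field.\n";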

The SQueaLer module helps you find these issues, among other things. This is still in development and might need some tweaking to work with your particular Drupal site.

Again, having conversations around your new content model will pay heavy dividends here. You can come out of this migration with not just a new website but a better website that better accomplishes your goals.

And a website that your editors actually like to use. 

WYSIWYG cleanup goes a long way. Don’t limit cleanup to code, either. Sometimes, manual cleanup makes more sense.

Development Workflow

Once you have mapped your source and destination, you’ll be able to start estimating the level of effort involved. 

Team planning

A lot of migration work can happen in parallel, but the schedule will depend upon your budget and the team you have in place.

We have seen migrations be successful with one backend developer, and we have seen migrations be successful with four back-end developers. There is a point of diminishing returns. You don’t want too many cooks in the kitchen. If you have experienced developers on your team, they should be able to help you determine when you might reach that point.

Splitting up migration work based on field types has worked well. We have found it helpful to start with more basic fields to get momentum. You get the overall processes hashed out without worrying about complex data sets and transformations.

An example way of breaking up responsibilities between developers might look like the following:

  • Basic fields and simple formatted fields
  • File/image fields
  • Paragraphs, field collections, and other entity references
  • WYSIWYG cleanup
  • Contrib module updates and replacements
  • Custom functionality

There will be some overlap and bleeding across these boundaries, but these are a good starting point in terms of spheres of initial responsibility. Keep in mind that the complexity of your content model can have big ramifications for your team planning.

Solution preferences

When configuring a particular migration, you want a clear order of preference for the solutions you use. This will help save you from extra work and unnecessary technical debt.

The simplest migration entails mapping one field to another in a configuration file. Even basic transformations can be accomplished with this. These will usually invoke plugins that are included in Drupal core or contrib modules. Start here first.

If core and contrib fail you, move to writing your own source, process, or destination plugins. This should cover most of your use cases.
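
A custom process plugin is usually the first of the three you will reach for. The skeleton below is only a sketch (the module namespace, plugin ID, and the transformation itself are placeholders) showing the general shape of a Drupal 8/9 process plugin:

namespace Drupal\my_module\Plugin\migrate\process;

use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;

/**
 * Example transformation for a single source value.
 *
 * @MigrateProcessPlugin(
 *   id = "my_module_example_transform"
 * )
 */
class ExampleTransform extends ProcessPluginBase {

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    // Placeholder transformation: trim and lowercase the incoming value.
    return is_string($value) ? strtolower(trim($value)) : $value;
  }

}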

For certain edge cases, you can invoke hooks and react to events at different stages of the process.

QA and testing

Migrations require QA and testing, so be sure to budget time for that. Having a good development server or environment builder like Tugboat will allow migrations to be run and issues surfaced. 

Stakeholders can check migrated content to make sure everything shows up as expected. Other developers can validate results and look at the underlying data structure. 

This is also where you’ll want to grab any logs generated by the process.

Logging and exceptions

In our experience, migrations create a lot of noise. Warnings, errors, notices, etc. Some of these can be safely ignored, but to figure that out, you’ll want to pay attention to the logs. This is more important if you have several developers working in parallel or you are migrating several sites, each of which might have different edge cases.

But even if you have just one backend developer on the task, you’ll want to make a habit of going over the logs regularly.

  • Some can be ignored. Ignore them.
  • Some need greater clarification from a developer. Make those clarifications and see if a new ticket needs to be created.
  • Some need the attention of a stakeholder. Circulate these and discuss them in your status updates. If you aren’t sure how to handle that rogue iframe, ask.
  • Some need fixing. Though some might also be a low priority. If so, make a ticket, put it in the backlog, and keep moving.

When logging issues, be sure to record the current row id and the relevant migration id. This practice can help you find edge cases. Core and contrib migration plugins will provide logs, but if you end up writing custom plugins, be sure to add logging with clear messaging wherever issues might happen. Write custom plugins defensively.
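
Inside a custom process plugin, one straightforward way to do this is to save a message against the migration whenever something looks off, including the source IDs of the row being processed. A hedged fragment of a defensive transform() body might look like this:

// Inside a process plugin's transform() method.
if (empty($value)) {
  // Record the row's source IDs so the edge case can be traced later.
  $migrate_executable->saveMessage(sprintf(
    'Empty value for %s on source row %s.',
    $destination_property,
    implode('/', $row->getSourceIdValues())
  ));
  return NULL;
}
return $value;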

Nested fields (Paragraphs and Field Collections)

If you are dealing with nested fields of structured data, pay special attention to how they are structured and how you will deal with them on the new site.

Paragraphs and Field Collections are the most common, but you might also have a custom solution built with entity references and content types. There are many directions you can go, and each has its challenges. The right choice also depends on how your editors like to work.

Paragraphs to Paragraphs? If so, are you changing the structure?

Field Collections to Paragraphs?

Paragraphs or Field Collections to embedded entities in the body field?

Paragraphs or Field Collections to a custom structure implemented via entity references?

Or maybe you have embedded entities and want to migrate them to Paragraphs?

This is why evaluating your content model is so important. Each path has implications. You don’t want to simply choose the default path. You want to choose a path with intention, with your eyes open, understanding the challenges you need to overcome to get to the other side.

Summary

Planning properly can help you budget properly for both team size and timing. You can get a bigger picture and map out the potential minefields before you even start on your journey.

  • Map your source
  • Map your destination
  • Pay attention to WYSIWYG cleanup
  • Think about your development workflow to maximize resources

If you want an experienced partner that can help you through every stage of the migration process, reach out. We would love to help.

Jun 16 2021
Jun 16

In November 2022, the Drupal community and the Drupal Security Team will end their support for Drupal 7. By that time, all Drupal websites will need to be on Drupal 8 or later to continue receiving updates and security fixes from the community. The jump from Drupal 7 to 8 is a tricky migration. It often requires complex transformations to move content stuck in old systems into Drupal’s new paradigm. If you are new to Drupal migrations, you can read the official Drupal Migrate API documentation, follow Mauricio Dinarte’s 31 Days of Drupal Migrations starter series, or watch Redfin Solutions’ own Chris Wells give a crash course training session. This blog series covers more advanced topics such as niche migration tools, content restructuring, and various custom code solutions. To catch up, read the previous blog posts Custom Migration Cron Job, Migration Custom Source Plugin, and Connecting a Transact SQL Database to Drupal.

Migrating from Drupal 7 to Drupal 8 often requires restructuring your content, like transforming an unlimited text field into paragraphs or a list text field into a taxonomy reference. The tricky part is that the migration pipeline wants each source entity to go to one destination entity, but each paragraph or taxonomy term is a new entity and a single node can reference several of these.

So how do you break up data from one entity and migrate it into multiple entities?

Manual entry

If there’s a small set of old content or you’re already manually adding new content, then manual entry is a viable solution, but it shouldn’t be the default for large migrations. Going this route, you want to set up content editors for easy success. If possible, reduce the number of actions needed for a repeated task. With some clever string concatenation in your query results, you can create exact links to all the node edit pages that need updating. This is much easier than giving someone a node id or page title and asking them to fix that page. Just because it’s not an automatic migration, doesn’t mean we can’t automate aspects of it.

The CSV Importer module is useful for simple data that already exists in a CSV file or can be quickly exported as a CSV file. For example, a spreadsheet with hundreds of country names could easily be imported as taxonomy terms with this tool. Or a list of emails and names could be imported as Users. Once you’ve migrated your data, you can reference them in other migrations using the static_map plugin or a custom process plugin to look up the correct entity reference. Be careful not to abuse the static_map plugin with hundreds of mappings. In the country example, if the source data contains the name of the country that you want to reference in the destination, you could write a process plugin that gets the taxonomy term ID from the name. Remember that once entities are migrated you can use the full power of Drupal to find their IDs in later migrations.
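To make that last idea concrete, here is a minimal sketch of such a lookup plugin. The module name, plugin ID, and the "countries" vocabulary are assumptions for the example; in production code you would normally inject the entity type manager instead of calling \Drupal statically.

<?php

namespace Drupal\example_migrate\Plugin\migrate\process;

use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;

/**
 * Maps a country name from the source to an existing taxonomy term ID.
 *
 * @MigrateProcessPlugin(
 *   id = "example_country_lookup"
 * )
 */
class ExampleCountryLookup extends ProcessPluginBase {

  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    // Find the term that was created by the earlier CSV import.
    $terms = \Drupal::entityTypeManager()
      ->getStorage('taxonomy_term')
      ->loadByProperties([
        'vid' => 'countries',
        'name' => trim((string) $value),
      ]);
    $term = reset($terms);
    // Returning NULL simply leaves the reference field empty.
    return $term ? $term->id() : NULL;
  }

}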

Generate entities during migration

Use the entity_generate plugin or a custom plugin to create the entities during the migration process. This gives more control over how the data is transformed, but there’s no way to roll back or update the generated entities through the migration API. This shouldn’t be the default, but it can be necessary for more complicated cases such as breaking down a dense WYSIWYG field into separate paragraphs (see Benji Fisher’s custom process plugin).
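For a sense of what generating entities mid-migration looks like, the fragment below would sit inside a custom process plugin's transform() method: it creates a Paragraph and returns the pair of IDs a paragraph reference field expects. The "text_block" paragraph type and field_body field are assumptions, and the Paragraphs module must be installed.

// Inside a custom process plugin's transform() method (sketch).
$paragraph = \Drupal\paragraphs\Entity\Paragraph::create([
  'type' => 'text_block',
  'field_body' => [
    'value' => $value,
    'format' => 'full_html',
  ],
]);
$paragraph->save();

// Paragraph reference fields need both the entity ID and the revision ID.
return [
  'target_id' => $paragraph->id(),
  'target_revision_id' => $paragraph->getRevisionId(),
];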

Migrate entities separately with a custom source plugin

See our earlier blog post for a step-by-step guide on this. Drupal core provides lots of useful source plugins, but sometimes you need a custom query to migrate specific source data into entities. This approach gives you that flexibility within Drupal’s migration workflow. Unlike the previous option, you can still rollback and update entities and leverage all the other migration tools.
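As a rough sketch, the source plugin below queries a Drupal 7 field data table directly so that each field delta becomes its own source row, and therefore its own destination entity. The module name, plugin ID, table, and column names are hypothetical; adjust them to your legacy schema.

<?php

namespace Drupal\example_migrate\Plugin\migrate\source;

use Drupal\migrate\Plugin\migrate\source\SqlBase;

/**
 * Returns one source row per legacy field delta.
 *
 * @MigrateSource(
 *   id = "example_legacy_links",
 *   source_module = "node"
 * )
 */
class ExampleLegacyLinks extends SqlBase {

  public function query() {
    // Each delta of the old multi-value field becomes a separate row.
    return $this->select('field_data_field_links', 'l')
      ->fields('l', ['entity_id', 'delta', 'field_links_url', 'field_links_title'])
      ->condition('l.entity_type', 'node');
  }

  public function fields() {
    return [
      'entity_id' => $this->t('Host node ID'),
      'delta' => $this->t('Field delta'),
      'field_links_url' => $this->t('Link URL'),
      'field_links_title' => $this->t('Link title'),
    ];
  }

  public function getIds() {
    return [
      'entity_id' => ['type' => 'integer'],
      'delta' => ['type' => 'integer'],
    ];
  }

}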

How you perform a data transformation like this is largely contextual, but these are powerful tools that can be used in many cases. Contact us with any questions regarding complex Drupal migrations, or if you are looking for a Drupal agency to help with your next website project.

Jun 16 2021
Jun 16
easy hreflang module install and configuration

https://www.drupal.org/project/hreflang

Credit and Thanks

Thank you to Mark Burdett for creating and maintaining this module.

About the Hreflang module

The Hreflang module automatically adds hreflang tags to your pages. Search engines reference the alternate hreflang tag to serve the correct language or regional URL in search results which is important for multilingual websites.

In Drupal, the Core Content Translation module does add hreflang tags to translated entity pages. However, hreflang tags should be added to all pages, even untranslated ones. This module manages this for you.

Install and Enable the Hreflang module

  1. Install the Hreflang module on your server. (See this page for more instructions on installing modules.)
     
  2. Go to the Extend page: Click Manage > Extend (Coffee: “extend”) or visit https://yourDrupalSite.dev/admin/modules.
     installing the hreflang module
  3. Select the checkbox next to Hreflang and click the Install button at the bottom of the page.
     

There are no permissions to set or further settings to change.

an example of what hreflang looks like in the code

As you can see from the above screenshot in the example website, the hreflang for each different language version of the site has been set by the Hreflang module.

Did you like this walkthrough? Please tell your friends about it!


Jun 16 2021
Jun 16

Enterprise agility is one of the most commonly adopted transformation approaches, and it comes with plenty of challenges. Companies need to reshape organizational structures, change operational models, and replace old ways of working. Because agile transformation involves a major shift in organizational culture, many organizations hesitate over it or even neglect it. Eventually, though, organizations realize its importance, apply agile transformation techniques, and reap benefits that help them evolve and move closer to their goals and aspirations. This article will guide you through the right approach to adopting agile transformation in your organization.

To successfully adopt an agile transformation, you need a plan

To succeed with agile transformation, you need to clearly understand why you are investing the time and effort to adopt it and what exactly you want to gain from it. It is important to have clarity about the changes you will have to make so that your business can achieve the desired outcomes. Let’s start by understanding the importance of preparing a proper business case for adopting agile transformation in your organization; a strong business case is what convinces decision-makers of the significance of bringing agility into the work culture. Before that, go through what agile development methodology actually means.

Making a business case for agile transformation

A business case explains the organization’s main objectives with regard to agile transformation. Adopting agile generally leads to the desired business outcomes, but only if it is approached in the right manner. It is therefore essential to set a few important goals that will guide the company’s overall growth with agility. Here is an insight into the recommended goals.

Illustration showing multiple arrows forming a circle to explain agile transformation

The first and foremost goal is to meet customer commitments on time, which builds trust between customers and the company and leads to customer satisfaction. Second, it is essential to maintain high-quality products and services, since companies sometimes fail to deliver the services they promised; doing so also builds a good brand reputation. Third, one of the aims of adopting agility in an enterprise is to reduce costs and maximize profits. Lastly, companies expect an earlier return on investment with agility, because traditional ways of working saddle them with long delivery cycles that delay that return. Read this complete guide on agile transformation to know more.

Now, we will get an overview of the transformation hypothesis to help us approach agility in the right way.

A step closer to agility with Transformation Hypothesis

A transformation hypothesis describes the real purpose behind choosing agile transformation. Along with accepting agility, companies have to be flexible enough to embrace change in their ways of working. That change will not always be comfortable for employees who are accustomed to the same traditional techniques. So, where employees lack confidence or run into challenges, the company should proactively help them overcome those difficulties and welcome change. Below are some of the concerns that need to be resolved to practice agility strongly in your organization.

Picture showing multiple circles to describe the concerns that are faced while practicing enterprise agility

Culture change isn’t the only solution 

Adopting agility brings a big shift in the work culture of an organization, so it is tempting to assume that culture change alone will take care of everything required to implement the transformation successfully. In reality, it isn’t so. There is also a need for proper guidance in forming cross-functional teams that bring together different areas of expertise to increase innovation in products and services. To succeed with agility, the company has to look at several factors rather than treating culture as the only lever for improving agility.

Process training alone cannot bring agility

Employees are often given training from coaches to learn new methods and techniques, with the expectation that this alone makes them capable of facing any challenge they encounter while practicing agility. The problem is that employees can rarely handle technical, governance, and organizational issues with only the process education obtained in training sessions. Such issues need to be resolved by giving employees essential support to tackle those hard situations.

Need for the right ecosystem

To reach the desired level of agility, there is a need for an ecosystem that facilitates continuous improvement to achieve a company’s agility goals and objectives. If a company fails to build the right ecosystem, it will be challenging to sustain agility in an organization for better adaptability and resilience.

Strategize plans according to the size of the organization

You have to strategize depending on the size of the organization to sustain agility. For example, the strategy you use for transforming a single team will certainly differ from the strategy you plan for a large company with 500 or more employees. In the same manner, if you are leading a group of five to seven people, sending them to training sessions might be sufficient. But if you lead 1,000 employees, the planning and arrangement must be executed on a much different level. The size of the organization has to be considered while adopting agility.

Overcome the challenge of dependency 

Dependencies can put hurdles in the way of successfully attaining agility. When teams are small, dependencies are easy to manage, but when multiple teams work toward the same goal, inter-team communication and collaboration become much harder to handle. Removing dependencies should therefore be one of the primary tasks. For smooth delivery of products and services, it is essential to plan how you will overcome dependencies and develop agility in the organization.

Benefits achieved by adopting agile transformation

Many organizations have benefited from practicing agile transformation in recent years, but they need the right approach to attain the desired business outcomes. Enterprise agility gives them the flexibility to adopt new organizational practices and techniques, leading to the maximization of business value.

Moving forward, we will discuss some of the benefits which are attained by companies adopting agile transformation in their business. 

Picture formed by various circles describing the benefits of agile transformation

Maximise customer satisfaction

With agility, companies focus on adding value to the customer experience by understanding customer requirements and delivering products and services early. Customer satisfaction grows because customer feedback is prioritized and product quality improves in line with expectations. Enterprise agility lets employees provide services with expertise, proper collaboration among teams, and transparency, all of which increase customer satisfaction.

Here is an example of an Asia-Pacific telco that adopted an agile operating model to meet customer needs and succeeded in increasing customer satisfaction by bringing new ideas and techniques into its work process. The diagram below shows the transformation shifting in a positive direction and leading to a great customer experience.

Illustration showing triangles formed by small circles to explain benefits of agile transformation. Source: McKinsey & Company

Increase employee engagement

Adopting agility gives employees the freedom to use their creativity to produce better work and better results. It gives them a sense of ownership over the decisions that improve their productivity and helps them feel valued in the workplace. That flexibility increases employee engagement considerably and empowers companies to reach their goals and ambitions. For instance, read how you can build a diverse and inclusive team by leveraging agile techniques.

Raising operational performance

Agility gives organizations access to a variety of business models, which helps improve operational performance in line with expectations. Agile transformation also provides approaches that speed up company decision making and product development. Agile companies see their target achievement rate improve remarkably, which is one of the major achievements of a progressing enterprise.

Growing competence towards changing priorities 

With agility, a behavioral transformation takes place among employees, helping them reach their highest potential in embracing change and innovation. They learn to handle changing priorities within the organization, for example when resources are reallocated to a team that needs support to survive the challenges that come along with agility. They get comfortable with changes to their work processes and techniques, accepting change for the sake of better company growth.

Enhance project visibility

Project visibility provides a clear vision of a project performance which includes allocating resources, potential risks, and proper distribution of responsibilities. Increased visibility ensures everyone involved in the project understands the objective of the project and their role in meeting the business goals and aspirations. It gives clarity to stakeholders regarding the real-time status of the project. Agility helps in changing any project plan or initiative following customer or stakeholder needs and requirements for better project performance. For instance, read how imbibing agile documentation processes helps improve project management.

Improving Business and IT alignment

Business IT alignment can be regarded as a business strategy that helps in achieving the business objectives leading to improved financial performance. This alignment is necessary to adapt to the constant change in the company and environment due to agility. Therefore, both business agility and business-IT alignment should go hand in hand to maintain company growth and development. For instance, read how the inclusion of agile processes to the testing phases of software development can be immensely beneficial.

Lastly, the most important benefit we witnessed recently from adopting agile transformation is the flexibility to work at our convenience during the COVID-19 pandemic. The pandemic made organizations feel the need for agile transformation rather than sticking to old traditional techniques that created hurdles in the proper functioning of their business. According to McKinsey’s research with Harvard Business School during COVID-19, agile companies achieved better results than non-agile companies.

Graphical representation of agile companies' improving performance after the COVID-19 crisis. Source: McKinsey & Company

Companies sharing their successful journeys with agile transformation

With agility, many organizations have achieved immense success leading their business towards their set goals. Here are some of the companies sharing their success stories which can act as a motivation for everyone to move towards agile transformation. 

Ericsson

Ericsson aimed at improving the delivery of products within the stipulated time leading to an increase in customer satisfaction. To achieve this target, they adopted agility in 2008. They implemented cross-functional teams which could focus on specific projects along with building effective communication across several teams. Instead of individual targets, each team worked towards both organizational and group goals to receive desired results. After making such changes with the help of agile transformation, Ericsson could successfully achieve speedy development, faster customer feedback, and generate higher revenue according to desired company standards. 

Bank of America

The agile transformation of global markets at Bank of America began in late 2012. The director of global markets technology at Bank of America Merrill Lynch explained that the main aims were to improve the time to deliver better solutions and to reduce key-person dependencies across his technology team. The team adopted Scrum (a specific Agile methodology) and created an environment where employees could experiment and take risks to produce exceptional results. Cross-functional team formation was also encouraged, turning business ideas into working products that met company targets. After a year of consistent effort, they succeeded in meeting their business goals with agility.

LEGO 

LEGO attained success by adopting agile transformation in early 2018, starting with two large digital departments. After the adoption, they witnessed improvements in several areas such as market engagement, digitalization, and reduced project delivery time, which in turn brought a sense of motivation and satisfaction among employees. With this transformation, LEGO set itself on a successful journey of embracing change.

To get more insight into a company's smooth agile transformation, you can go through the book “Agile Transformation: A Brief Story of How an Entertainment Company Developed New Capabilities and Unlocked Business Agility to Thrive in an Era of Rapid Change”, which describes how a company in the entertainment industry got excellent results by adopting agility in its work culture. It is a good read.

Here is a video presented by Scrum Alliance about IBM’s wonderful experience of learning, implementing, and overcoming challenges with agile transformation. Without any further wait, take a look at their exciting agile transformation journey.

[embedded content]

Final thoughts

Agility is an approach that drives performance and provides endless innovation to organizations. Adopting this transformation can break away from old traditional working methods and enable tremendous growth and advancement in business. Organizations will have to step out of their comfort zone and strive for something new that can deliver exceptional business outcomes.

Jun 16 2021
Jun 16

What’s new in Drupal 9.2.0?

The second feature release of Drupal 9 helps keep your site even more secure, and comes with increased visitor privacy protection, improved migration tools from Drupal 7, enhancements to the Olivero frontend theme and early support for the WebP image format.

Download Drupal 9.2.0

Security and privacy improvements

Critical security advisories and public service announcements will now be displayed on the status report page and certain administration pages for the site's administrators. This helps prepare site owners to apply security fixes in a timely manner. For increased privacy protection of your site visitors, Drupal 9.2.0 now blocks Google Federated Learning of Cohorts (FLoC) cookie-less user tracking by default.

Better building blocks out of the box

The Olivero theme, soon to be Drupal's new default frontend theme, has dozens of major improvements in this release, including a new form design and various accessibility fixes. The built-in Umami demo is now also more flexible with a built-in editor role and more versatile Layout Builder demonstration.

On the way to Drupal 10

In preparation for Drupal 10, all Symfony 5 and several Symfony 6 compatibility issues have been resolved. As part of modernizing the frontend of Drupal 9, core's Tour feature now uses ShepherdJS instead of jQuery Joyride. This significantly improves accessibility of tours and removes one more reliance on jQuery.

Other improvements

The already stable migration path from Drupal 7 is now expanded with migrations for user settings, node/user reference fields and other previously missing pieces.

Drupal's GD toolkit integration, and, therefore image styles, can now manage WebP images. There is more to do for complete WebP support. Stay tuned for improvements in future releases.

Sneak peek at future core features

The upcoming core CKEditor 5 upgrade is being worked on in a contributed project. Progress has been made on various aspects of the roadmap, and the project is near to completing all issues identified as requirements for tagging a beta release. Core inclusion is expected in Drupal 9.3.0, but contributed projects are requested to build compatibility ahead of that.

The Automated Updates Initiative has been very active in the repositories under https://github.com/php-tuf building a PHP implementation of The Update Framework (TUF) with TYPO3 and Joomla developers to provide signing and verification for secure PHP application updates. Results will be included with later Drupal releases.

Check out the initiative keynotes from DrupalCon North America 2021 on what else is in the works.

What does this mean for me?

Drupal 9 site owners

Drupal 9.0.x is now out of security coverage. Update at least to 9.1.x to continue to receive security support.

Drupal 8 site owners

Update to at least 8.9.x to continue receiving bug fixes until Drupal 8's end of life in November 2021. The next bug-fix release (8.9.17) is scheduled for July 7, 2021. (See the release schedule overview for more information.) Versions of Drupal 8 before 8.9.x no longer receive security coverage.

With only five months left until the end of life of Drupal 8, we suggest that you upgrade from Drupal 8 to Drupal 9 as soon as possible. Upgrading is supported directly from 8.8.x and 8.9.x. Of the top 1000 most used drupal.org projects, 94% are updated for Drupal 9, so the modules and themes you rely on are most likely compatible.

Drupal 7 site owners

Drupal 7 is supported until November 28, 2022, and will continue to receive bug and security fixes throughout this time. From November 2022 until at least November 2025, the Drupal 7 Vendor Extended Support program will be offered by vendors.

On the other hand, the migration path for Drupal 7 sites to Drupal 9 is stable. Read more about the migration to Drupal 9.

Translation, module, and theme contributors

Minor releases like Drupal 9.2.0 include backwards-compatible API additions for developers as well as new features.

Since minor releases are backwards-compatible, modules, themes, and translations that supported Drupal 9.1.x and earlier will be compatible with 9.2.x as well. However, the new version does include some changes to strings, user interfaces, internal APIs and API deprecations. This means that some small updates may be required for your translations, modules, and themes. Read the 9.2.0 release notes for a full list of changes that may affect your modules and themes.

This release has further advanced the Drupal project and represents the efforts of hundreds of volunteers and contributors from various organizations. Thank you to everyone who contributed to Drupal 9.2.0!


Jun 15 2021
Jun 15

Drupal 7 will reach end-of-life (EOL) in November of 2022, which means that at least a half million webmasters & site owners have some decisions to make. What’s the next step for your organization’s website? What sorts of costs might you be looking at for this upgrade? What timeline can you plan on for this change? All good questions.

If you’re interested in this topic, take a moment to register for our Director of Engineering, Joel’s free webinar coming next week. He'll be covering aspects of these options in greater detail.

Webinar, June 23rd:
Options for upgrading your Drupal 7 or Drupal 8 site

Register Now

What is EOL and what does it mean to me?

The Drupal ecosystem of core and contributed modules is protected against hackers, data miners, automated exploits and other malicious actors by the Drupal Security Team — more than thirty developers working across three continents in almost a dozen countries to keep Drupal websites safe. The security team responds to reports of potential weaknesses in the Drupal core or contributed code and coordinates efforts to release new versions of the software that address those vulnerabilities.

The more than a million Drupal developers worldwide going about their day-to-day development tasks act as a passive network of quality control agents. Developers who discover security vulnerabilities while working with the code can confidentially report them so that the security team can go about fixing the problem before knowledge of the vulnerability is widely available. A million worldwide developers backing a thirty-something strong team of elite developers spells security — for your website, your data, and your organization.

In November of 2022, that all comes to an end for Drupal 7. That’s when the security team will officially retire from the Drupal 7 project in favor of modern versions of the platform. As new vulnerabilities in the code are discovered (and made public) you won’t have anyone in your corner to fight back with new security releases.

After EOL you can expect what’s left of the Drupal 7 community to move on, too. That means no new modules, new themes, or other new features built for the platform. It also means the pool of developers specializing in Drupal 7 starts to shrink very fast. If you’re still on Drupal 7 in late 2022, you’re out in the cold.

The good news is that there are options. Here are my top three picks:

Drupal 9: Drupal is dead, long live Drupal

Drupal 9 is the most modern iteration of the Drupal framework and CMS. It introduces a completely reimagined architecture and a rebuilt API more in line with modern development standards.

  • Cost: High
  • Build Time: High
  • Longevity: High
  • Support: High

Pros

Upgrading your site to the latest version of Drupal (9.1.10 at the time of this article) is, in most cases, the right move. Modern Drupal has grown to support a wide array of innovative features in core. Improved WYSIWYG content editing, a feature-rich Media library, advanced publishing workflows, and a rich JSON:API are all available right out of the box. Couple that with Drupal’s new, highly modern architecture built on Symfony, the adoption of the Twig templating engine, and dependency management via Composer, and you really do “build the best of the web” in terms of technology, support, and longevity.

When you make the move to Drupal 9 you can count on Drupal’s huge and thriving community of developers (and security team) making the move right along with you. Existing modules from previous versions of Drupal are either — in almost all cases — already available, now packaged into core, or making their way to Drupal 9 at this moment. It’s very likely the agency you’re already working with has or is building a Drupal 9 proficiency, and it’s guaranteed that hundreds of other shops can pick up the slack in the odd case that your provider isn’t on board yet.

Finally, let’s not forget Drupal’s commitment to easy future upgrades which promises continuity in architecture that should facilitate easy upgrades to Drupal 10 and beyond. Gone are the days (probably) of “rebuild” style upgrades like those of Drupal 6 to Drupal 7, or Drupal 7 to Drupal 8/9.

Cons

Speaking of “rebuild” style upgrades... upgrading from Drupal 7 to Drupal 9 is one of them. While it could be your last major upgrade if you stick with Drupal for the long haul, moving from Drupal 7 to the more modern 8/9 and beyond architecture is a very heavy lift. For most organizations the move to Drupal 9 is the longest term, most feature-rich, most supported, and most modern option, but it generally entails a complete rebuild of your site’s backend and theme which basically means starting from scratch. Take a look at Joel’s post about the upgrade from Drupal 7 to Drupal 8 for more information — the process is comparable to a Drupal 7 to Drupal 9 upgrade.

Backdrop CMS: The same, but different

Backdrop CMS is a lightweight, flexible, and modernized platform built on Drupal 7 architecture with notable improvements. The software is a fork of the Drupal 7 code and boasts a simple, straightforward upgrade path.

  • Cost: Low / Medium
  • Build Time: Low / Medium
  • Longevity: Medium / High
  • Support: Medium

Pros

Drupal 7 sites moving to any other platform — including Drupal 8 / 9 — must be rebuilt. That’s not the case for Backdrop CMS, which gives you the option of protecting your investment in an existing Drupal 7 website and reaping the benefits of modern features like configuration management and advanced layout control. Backdrop CMS will prioritize backward compatibility with Drupal 7 until at least 2024, meaning that even fully custom Drupal 7 code — with very minor modifications — should work in the Backdrop CMS ecosystem. A large selection of widely used Drupal 7 modules are already available for Backdrop CMS, and more are on the way. And while backwards compatibility is a major selling point for Backdrop CMS, its architecture is forward thinking with the introduction of classed entities and an object-oriented approach to all of its new components and features.

While the Backdrop CMS developer community isn’t particularly large, Drupal 7 and Backdrop CMS development skill sets are virtually interchangeable for the time being due to the almost identical API. There’s also considerable overlap on the Drupal 8/9 front due to Backdrop’s preference for object-oriented code in all of its newly added features. That means your existing Drupal developers can help you make the switch, and while the upgrade process isn’t exactly seamless it’s definitely a far cry from a complete rebuild.

Backdrop CMS also has its own security team, which — for now at least — works closely with the Drupal security team. Active development for the current version of Backdrop CMS is planned through 2024 according to their Roadmap, with the next version of Backdrop CMS promising an even easier upgrade path compared to the Drupal 7 to Backdrop CMS upgrade.

Cons

Backdrop CMS implements both Drupal 7 style procedural coding and Drupal 8/9 style object-oriented coding, which in theory means that it caters to a wide range of developers. In practice it’s hard to predict the future of any up-and-coming development community. That makes the outlook for long term support a little opaque, in that it’s hard to say just how many developers will be supporting Backdrop CMS and building new features for it down the road a couple of years.

Also, while Backdrop CMS absolutely prioritizes backwards compatibility with Drupal 7, a greater number of contributed and custom modules in your existing site could complicate the upgrade process. Simpler Drupal 7 sites with fewer contributed and custom modules would probably encounter a low effort to complete the upgrade, while a greater number of contributed and custom modules are likely to see a medium effort as some of those modules may need to be converted.

Drupal 7 Vendor Extended Support: Don’t move a muscle

Drupal 7 ES is the “do nothing for now” option. A small collection of approved and vetted vendors will provide security updates and / or critical patches for Drupal 7 core and contributed modules under a variety of vendor-specific plans.

  • Cost: Low
  • Build Time: None
  • Longevity: Low / Medium
  • Support: Medium

Pros

The biggest plus here is simple: No further action is required at this time. If you’re planning to work with a vendor that provides extended support for Drupal 7, you won’t need to take any action to protect your website from aging software until Drupal 7 EOL in 2022. At that point, you’ll need to plan on a flat or adjustable monthly fee through the end of 2025 — or possibly beyond. This could mean avoiding major financial decisions regarding your digital strategy for at least a few years, and all at a cost that (depending on the size of your organization / website) is probably a fraction of the cost of a software upgrade.

With the Drupal 7 EOL recently extended until November of 2022, many of these Vendor Extended Support plans haven’t fully materialized — so details are still forthcoming. Agencies like Tag1Quo or MyDropWizard advertise services from a surprising $15 / month to $1250 / month for a range of beyond-EOL Drupal support plans. Acquia and Lullabot are also named by Drupal.org as ES vendors — but without any specifics about pricing or support levels. While the picture isn’t entirely clear yet, availability of an affordable ES plan is virtually guaranteed by 2022.

Cons

Drupal 7 Vendor Extended Support may protect you against vulnerabilities and exploits discovered after Drupal 7 EOL, but community support will be all but dead by that time. That means no new features or modules will be released and the pool of Drupal 7 developers will be rapidly drying up. Unless you have an in-house development team, you can plan on your website coming to a standstill in terms of new features.

The number of contributed and custom modules your site implements has an impact here, too. The greater the number of custom and contributed modules, the greater you can expect the effort to be in supporting those modules beyond EOL.

Another concern with Drupal 7 ES is PHP 7 end-of-life. Once PHP 7 (the PHP version most Drupal 7 sites run on) is no longer supported toward the end of 2022, you can expect the security of your Drupal 7 site to degrade rapidly. Updating Drupal 7 to be compatible with the newer, more secure PHP 8 is doable — but could be a difficult process.

Finally, it’s likely that you will still need to consider upgrading to a supported platform in the future if your website will need to change and adapt in the coming years. You can expect this process to become more challenging as time goes on and the gap between your existing website and modern platforms grows ever larger.

Taking the first step

There are other options, too. Moving from Drupal 7 to another platform entirely (WordPress?) could make the most sense depending on the complexities of your website. Moving to a less robust CMS could be nominally more cost effective than an upgrade to Drupal 9, but it also bakes in some hard limitations to what your website will be able to do.

If you haven’t already, take a minute to register for Joel’s free webinar coming June 23rd. He'll be walking through a few of these options and more.

Webinar, June 23rd:
Options for upgrading your Drupal 7 or Drupal 8 site

Register Now

A lot of organizations are beginning to evaluate options for their Drupal 7 sites. The best way forward depends largely on your goals as an organization, your ambitions for your digital presence, and the amount of time and effort you’re willing to invest. We’d love to consider your questions or learn more about the specific challenges you’re facing as you sort through your options. Get in touch today with your questions about upgrade paths from Drupal 7.

Jun 15 2021
Jun 15
[embedded content]

Don’t forget to subscribe to our YouTube channel to stay up-to-date.

When someone tweets a link from your website, Twitter can use Twitter Cards to attach rich photos, videos and media to Tweets.

By doing some minimal configuration changes on your Drupal site using the Metatag Module and the Twitter Cards submodule, users can see a “Card” added below the tweet that contains neatly formatted information coming from your website, as shown in Image 1 below.

Image 1 shows an example of a “Card”.

The cards are generated using HTML markup in the HEAD region of your Drupal site; that’s why the Metatag module is used.

Twitter will scrape your site and generate the card using the HTML meta tags.


Twitter Cards

There are four variations of Twitter cards. They are:

  1. Summary Card – Displays Title, description, and thumbnail
  2. Summary Card with Large Image – As the name suggests, similar to Summary Card but with a larger image
  3. App Card – A Card with a direct download to a mobile app. Use this Card to drive app downloads
  4. Player Card – Displays video/audio/media.

Image 1 above shows a “Card” of type Summary with Large Image.

In this tutorial, we will look at the steps involved in setting up the “Summary Card with Large Image” Twitter Card.

Getting Started

The Metatag module has a dependency on the Token module. However, if you download and enable the Drupal module using Composer and Drush, the dependency is automatically taken care of as we will show you now.

Use composer to download the module:

composer require drupal/metatag

Once the Metatag module is downloaded using composer, the Token module, which is a dependency, will be downloaded automatically.

Then enable the “Metatag: Twitter Card” submodule:

drush en metatag_twitter_cards -y

The above Drush command will automatically enable the Metatag: Twitter Card submodule, Metatag module and Token module.

Finally, it is always a good idea to clear the cache after enabling Drupal modules:

drush cr

Configure Twitter Cards

By default, Twitter Cards can be added to any content type. We will now configure the Twitter Cards for the Article Content type.

1. Go to Configuration > Metatag (admin/config/search/metatag) and click on “Add default meta tags”.

2. On the next page, select “Article” (or whatever content type you want to configure) from the Type dropdown.

3. Then click on Save. This is required for the correct tokens to appear in the “Browse available tokens” window.

4. Edit the “Content: Article” configuration from the Metatag page.

5. Click on “Twitter cards” to expand the field set and then select “Summary Card with large image” from the Twitter card type dropdown.

6. Now, we have to add tokens into the correct fields. Click “Browse available tokens.” then click on Nodes.

NOTE: If you can’t see “Nodes”, this means you need to save the “default meta tag” option first then edit it again.

Fill in the following fields:

  • Description: [node:summary]
  • Site’s Twitter account: Add your twitter account, i.e., @webwashnet
  • Title: [node:title]
  • Page URL: [node:url]
  • Image URL: [node:field_image] (adjust the field name accordingly)
  • Image alternative text: [node:field_image:alt] (adjust the field name accordingly)

Find Image Field Token

For this type of Twitter card, an image field must exist in your content type. We will show you how to use Token to grab that image data. Click on “Browse available tokens”.

Then drill down by going to Nodes -> Image. This assumes you’re using the Image (field_image) field on the Article content type.

The token should be [node:field_image].

Once you have found the image entity URL, make sure your mouse focus is in the empty Image URL Twitter Card meta tag field, and then click on the image entity URL token value. This will copy/paste the token value into the Image URL field.

Find Image Field Token on Media Asset

If you’re using a media field instead of an image field for handling assets, then use the following token, [node:field_media:entity:thumbnail] (change the field_media name accordingly).

7. Configure any extra fields as needed, then scroll down and click on Save.

8. Once you have filled out the other Twitter Card fields with their respective token values, you should validate the end result markup using the Twitter Card Validator tool. We will now show you how to validate your Twitter card.

As you can see, Twitter successfully recognised our “Summary with large image” card and displayed the information correctly.

NOTE: You’ll need to make sure your website is publicly accessible for the validator tool to work.

View HTML Source

If you want to see the generated markup, view the HTML source on your Drupal site and look for the “twitter:*” meta tags.

Summary

Twitter can display a neatly formatted version of your website’s content whenever someone tweets a link to your content. There are various types of Twitter cards depending on your needs.

We have shown how you can use the Metatag module and Twitter Cards submodule to configure Drupal 8 to correctly send your website’s content to Twitter and how to validate your markup to ensure Twitter correctly parses your website content.

FAQ

Q: I changed the default meta tag configuration, but the tags are not changing?

Try clearing the site cache. Go to Configuration > Performance and click on “Clear all caches”.

Editorial Team

About Editorial Team

Web development experts producing the best tutorials on the web. Want to join our team? Get paid to write tutorials.

Jun 15 2021
Jun 15

Imagine you created something, and that something is a piece of software. You wanted your creation to be used by as many people as possible; you wanted to make it universally accessible. So you did just that: you made the software’s source code accessible so that anyone could inspect it, modify it and enhance its capabilities.

This is the scenario that makes an open source software what it is; a publicly accessible tool that is all for the community. It honours open exchanges, collaborations, transparency and perpetual development that is community-centric. These principles have made open source software become immensely popular today. And here is proof of that. 

The percentage of active public repositories that use OSS is shown through a graph. Source: GitHub

Many active public repositories, across ecosystems like PHP, Java and .NET, use open source software in heavy numbers. If we look at the revenue open source software is generating, the numbers are again quite impressive.

The projected revenue of open source services can be seen in a graph. Source: Statista

All these numbers speak volumes about the effectiveness of open source software. However, if there is one aspect of open source software that needs some kind of assurance, I’d say it’s open source security. The reason is probably the fact that OSS is completely open to everyone, so it is assumed that something with this level of openness cannot be secure.

In this blog, we’ll try to find an answer to the question, ‘what is open source security’ and see whether it is actually secure or not.

What Is Open Source Security?

Today, businesses try to leverage many kinds of software in their efforts to move forward in technology, and open source is omnipresent in these efforts, even if only through its code.

A graph shows an increase in the use of open source components per app. Source: Synopsys

The reasons for this elevated usage of open source components are plenty:

  • The fact that you get to try the software before you buy it;
  • The fact that support is free;
  • The fact that there would be fewer bugs to deal with and faster fixes;
  • The fact that software security would improve.

To know more about the power of open source, read about the perks of being an open source contributor, leadership in open source, why are large enterprises investing in open source, why is open source recession-free, impact of open source during Covid-19 pandemic, and the significance of diversity, equity and inclusion in open source.

All of these factors make open source software quite pleasing to the eye. The last point that I mentioned may be the most pleasing factor of them all. But why? What is open source security? Is open source insecure? Let’s understand just that.

Like any other software out there, OSS goes through two main stages: development and production. Open source security works in both of them, managing and securing the OSS at all times with certain tools and processes, usually through automation.

Talking about the software development lifecycle, open source security has three main responsibilities:

  • It identifies open source dependencies in your applications; 
  • It provides critical versioning and usage information; 
  • And it detects and warns about any policy violations and its consequent risks. 

Moving on to the production phase, open source security continues to work diligently. Its main duty at this point is to deal with any and all open source vulnerabilities. It does so by:

  • Monitoring vulnerability attacks; 
  • Blocking vulnerability attacks, if possible; 
  • And most importantly, alerting you for the same, thus making you ready to take action against them.

Be it a community driven open source or a commercial one, open source security works in much the same way. 

Delving a little deeper into open source security: is there an initiative or a body that is accountable for it? This was one question I found myself asking while researching this piece. And there is: the Open Source Security Foundation. It helps organisations relying on open source software to understand their responsibilities in terms of user and organisational security and verify it.

The initiative focuses on aspects like vulnerability disclosure, security tooling and best practices, identification of threats and even digital identity attestation. All of these only aid in securing your projects, critical and otherwise, in a much better and efficient manner.

Is Open Source Good for Security?

The answer to the question ‘Is open source good for security?’ is not a simple one. But if I had to answer it, I’d say open source security works nothing like a closed, proprietary model such as Microsoft’s, and that distinction should provide a lot of clarity and instill a sense of faith in OSS.

According to Snyk’s The State of Open Source Security 2020 report:

  • Open source ecosystems have expanded by a third in 2019;
  • Open source security culture is focusing on shared responsibility;
  • Open source vulnerabilities have reduced by a fifth.

A survey analysis is shown that depicts who the respondents thought was responsible for open source security.Source: Snyk

On top of this, the most commonly reported vulnerabilities found in open source did not have a high impact on software projects.

These facts were enough for me to believe in the capability of open source security. However, for you, I am going to provide four more reasons.

Security that is transparent

The main benefit of open source security is that it is transparent. What I mean by transparent is that its source code is open. You can get information about the code base and potential bugs.

People can sift through the source code of any open source project and improve any imperfections, which would not have been possible if the source code wasn’t open. This further means that there won’t be any surprises as the chances of any malicious functionality would be quite slim with this level of scrutiny. 

Security that is reliable 

This advantage is related to the previous one. The openness of OSS makes it possible for its code to be continually tested.

The online community responsible for developing the code is behind these tests, making the software more reliable and trusted. Software developed under that level of scrutiny is far less likely to crash or fail.

Security that provides quick catches and fixes 

After transparency and reliability comes the benefit of quick fixes. The open source community is again to be thanked for this. The many contributors to open source make it possible to detect bugs and flaws and quickly patch and fix them, without any prolonged downtime for your applications.

Security that is sustainable 

Open source software isn’t going anywhere, and neither will open source security become antiquated. The reason is its growing community, which will continue to expand indefinitely. The platform will therefore keep improving, and you have the assurance of better security measures as time moves ahead.

At the heart of every benefit of open source security is its openness and community. Is open source a security risk? Not really. Is it a foolproof solution? Again, not really. Open source security cannot guarantee being foolproof at all times, but the fact that it at least provides a better chance of being secure is enough to make it advantageous; after all, are there really any guarantees in life?

Are There Challenges That You Need To Overcome?

Moving on from the pretty picture of open source security, let’s focus on the dark side of the concept. Open source security isn’t always full of the joys of spring, there are certainly challenges that need to be overcome. Since open source has become prevalent in every business sector, so have the open source security vulnerabilities. 

Open source vulnerabilities by business sectors | Source: Synopsys

Ironically, most of the challenges coincide with the openness of an OSS, so the benefits become the drawbacks. Let’s take a look at them.

The openness isn’t without vulnerabilities

Much like any software out there, open source also comes with some vulnerabilities. Yes, the open source community aids in the remediation of these flaws, but the steady stream of new flaws tends to widen the gap between open source safety and open source attacks.

Vulnerabilities reported in OSS | Source: WhiteSource Software

Yes, open source comes with its fair share of vulnerabilities, from XSS to information exposure, and the mix keeps changing year after year.

However, there is a silver lining in this challenge and that is the impact of these vulnerabilities. 

Open source vulnerabilities and the number of businesses they have impacted | Source: Snyk

XSS is one of the most reported vulnerabilities; however, it only impacts a small number of projects. This can be considered a positive outcome of this particular challenge.

The openness lures attackers

OSS code is open to everyone, and so are its vulnerabilities; and we certainly know that everyone includes people with malicious intent as well. So, open source vulnerabilities become an easy target for attackers.

The National Vulnerability Database, a platform that publicly provides information about open source vulnerabilities, isn’t helping this challenge much either. Don’t get me wrong, such platforms are indeed helpful in identifying problems, but because they are public and open, attackers also get their arsenal for the next target.

You may think that the known vulnerabilities should get fixed before attackers are lured in by them. But that is easier said than done. The problem here is that open source vulnerabilities are published on multiple platforms, so tracking them becomes difficult. Even once they have been located, updating, patching or fixing can take some time, and during that phase you’d be at risk.

The openness might overlook quality 

There are a number of people who contribute to open source security, and you cannot be sure that all of them are security experts. Not everyone in the community will have the same level of skills and expertise, so the way they create a piece of code will differ. This makes quality assurance a task that could be almost impossible to take on. Furthermore, the fact that there are no set standards for the quality of open source code makes it all the easier to overlook quality.
 
All of this means that the quality might be overlooked and even compromised. The fact that only 8% of the WhiteSource survey respondents were concerned about the quality is a testament to this challenge.

The openness comes with licensing risks 

OSS may be free to use, but it does come with a number of licenses that need compliance; 110 licenses to be exact, according to the Open Source Initiative. These act as the guidelines for how OSS code may be used.

With this many licenses, there is bound to be a compatibility risk. Some licenses are compatible, which means you can use them together. Others aren’t, which means that using them together would put you at risk; the Apache 2.0 and GPL v2 licenses are one such incompatible pair.

What’s more, if you do not comply with open source licensing guidelines, you’d be opening yourself up to a lawsuit. While I know this isn’t the kind of security concern we've been talking about so far, it is a security concern all the same.

Can You Overcome the Security Challenges?

The major challenge in open source security is the vulnerabilities. Detecting and resolving them has to be the priority if you want to overcome the challenges. Given that open source vulnerabilities rose in 2020, you need to be sure that you are not at an elevated level of risk.

Open source security vulnerabilities reported from 2009 to 2020 | Source: WhiteSource Software

Let’s see how these vulnerabilities can be caught in time by implementing some open source security best practices, so that they do not affect your business.

Prioritising security, always 

The first part of overcoming open source vulnerabilities is to always prioritise security. This starts with the choice itself: whenever you choose an open source component to work with, security has to be one of the considerations.

Usually, functionality is the main reason for choosing an OSS component. However, focusing only on that can put you at a disadvantage. Think of it this way: an open source component that does not require deep integration with your codebase removes a whole class of security risks, along with reducing the complexity of your source code.

Prioritising automation as a means to detect and monitor vulnerabilities 

Next comes the detection of security vulnerabilities, and automation comes in quite handy here. Organisations, especially large ones, have a pretty massive codebase, and going through it would be a mammoth task if not automated. Detecting susceptibilities is already quite a lengthy process, even with automation.

You have to identify which packages are being used; 
You have to pinpoint the vulnerable functionality in your code; 
You have to map out how that particular vulnerability impacts your application; 
And then you have to work on rectifying the findings.

Such a process may only include four steps; however, it isn’t a trivial task.

One of the problems in overcoming the vulnerability challenge is that organisations sometimes have no clue that they are actually susceptible. The fact that the open source community produces an extensive amount of data means that vulnerability information is spread across that vast expanse. Running automated scans to identify vulnerabilities ensures that they do not go unnoticed.

Automation tools not only help you get to the problem areas faster, but also keep doing it continuously. When you set up automated tools to continually monitor for security problems, you come closer to protecting your project and taking control of the open source components you are using.
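To make the idea of automated scans concrete, here is a minimal sketch of a dependency check that could run in a CI job for a Composer-managed PHP project. It assumes Composer 2.4 or later (which introduced the audit command), and the script name and output handling are illustrative rather than part of any particular tool.

<?php

// check_advisories.php: a minimal CI sketch (hypothetical script name).
// Assumes Composer 2.4+ is installed and the script runs from the project
// root after "composer install".

// Ask Composer for known security advisories in machine-readable form.
exec('composer audit --format=json 2>/dev/null', $outputLines, $exitCode);

$report = json_decode(implode("\n", $outputLines), TRUE) ?: [];

// The JSON report groups advisories by package name.
foreach ($report['advisories'] ?? [] as $package => $advisories) {
  foreach ($advisories as $advisory) {
    // Print enough context for a human to triage the finding.
    printf(
      "%s: %s (%s)\n",
      $package,
      $advisory['title'] ?? 'Unknown advisory',
      $advisory['cve'] ?? 'no CVE assigned'
    );
  }
}

// "composer audit" exits non-zero when advisories are found, so passing the
// exit code through makes the CI job fail until the packages are updated.
exit($exitCode);

Running something like this on every build keeps the monitoring continuous rather than a one-off exercise.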

Prioritising the involvement of the team in security

The last point to cover in order to overcome the open source challenges involves your team. There is a high likelihood that your developers would not be experts in security. And the people you may have in security would be lost in the developers’ realm. Since open source vulnerabilities require you to be efficient at both development and security, there has to be some training involved.

How respondents track open source dependencies | Source: Snyk

Such responses for tracking open source dependencies are not ideal. So, aim to cross-train your staff: developers should be able to at least identify certain security vulnerabilities, and the security team should have some understanding of the development process.

If you think that isn’t a possibility, you can hire outside help to assist you in overcoming the challenges posed by the open source components. 

The Verdict 

OSS is on the rise and it will continue to grow in the future; there isn’t any doubt about it. Along with that, open source security will also continue to improve. Yes, there are issues that surround open source security; it isn’t perfect. I think that’s a good thing, because perfection cannot be improved upon, and that means open source security still has strides to make.

Open source security operates on visibility and openness, and it also teaches the organisations that adopt it to practise the same. Aiming for visibility in your source code will always keep you ahead of the vulnerabilities you might have. It will also provide you with knowledge of your dependencies and a clear understanding of your code.

So, in that sense, open source software would be a great low-cost addition to your project, and open source security isn’t something that would ever hold you back. With the number of open source security tools available today, that’s almost a guarantee.

Jun 15 2021
Jun 15

We’re back with a recap of our favorite Drupal blog posts from last month. We hope you enjoy our selection for May!

Drupal 7 to 9 Migration Planning Guide

First off, we have a guide for planning Drupal 7 to 9 migrations shared by Anne Stefanyk of Kanopi Studios. With the end-of-life date for Drupal 8 in 2021, one year earlier than for Drupal 7, it makes the most sense for Drupal 7 site owners to skip Drupal 8 and migrate straight to 9.

Anne’s post includes a step-by-step planning and migration process, as well as some of the main considerations and challenges, such as making sure to have the website use the latest version of Drupal 7 to facilitate the migration, and taking into account the changes in certain practices in newer Drupal versions.

Read the whole migration planning guide

Write Better Code with Typed Entity

In the next post from May, Mateu Aguiló Bosch of Lullabot presents Typed Entity, a module that he built which can help you write cleaner, more maintainable code at a lower cost.

With Typed Entity, you create a plugin serving as a typed repository and containing the business logic rather than that of a specific entity. This can then be used to extend the class and create specific instances of the wrapped entities.

Mateu demonstrates this approach more in depth and shows examples of the module and how it works. He also includes a video version of the article for those who prefer watching.

Read more about Typed Entity

Streamlining Drupal Multisite Maintenance with Config

Next up, Byron Duvall and Bec White of Palantir.net show how to facilitate the maintenance of a Drupal multisite architecture with shared configuration rather than by using different install profiles. 

As opposed to the latter, shared configuration management is a much quicker and more straightforward process. It enables you to make updates only once (e.g. installing and setting up a module) and then roll them out to all the sites with the shared config.

The set-up process as well as adding new sites and making updates are all simple as well; Byron and Bec also provide a step-by-step guide for each.

Read more about shared config management in Drupal

Migrating into Layout Builder

Moving on, we have a post by Chris Wells of Redfin Solutions recapping his session from the recent DrupalCon North America about migrating a Drupal 7 website to Drupal 8 with Layout Builder enabled on the new website.

The main challenge here was migrating the content which was created with WYSIWYG templates in Drupal 7 to pages created with the new paradigm of Layout Builder using overrides. For this, they used Drupal’s Migrate API.

Importing the blocks and importing the nodes are both fairly straightforward parts of the migration; the key change is transforming the data into a format that Layout Builder can read, for which they use their custom process plugin.

Read more about migrating content to Drupal 8

Is Drupal Right for Universities?

Another enjoyable post from May was this next one by Mediacurrent which breaks down why Drupal is a top choice for higher education websites. Indeed, Drupal is particularly well suited to the digital needs of universities, with its high commitment to accessibility, multilingual capabilities, mobile optimizations and personalization features.

On top of that, Drupal also provides great multisite capabilities with easy and efficient management. It can help with brand visibility with its out-of-the-box SEO features, and its focus on security and privacy is now even more important with stricter privacy laws. It’s no wonder that over 70% of leading universities use Drupal to power their digital experiences.

Read more about Drupal in higher education

Books/ 31 Days of Drupal Migrations

We continue with David Rodríguez’s review of the book 31 Days of Drupal Migrations written by Mauricio Dinarte. The book is based on a series of Mauricio’s articles from a few years ago and is the most comprehensive resource for up-to-date information on the Drupal Migrate API.

David provides an overview of the book’s key information and its most important points. He highly recommends every development team should have a copy at their disposal; he has nothing but high praise for the book, in terms of both style and content usefulness, and gives it a 5 out of 5 in every category he looks at.

Read more about 31 Days of Drupal Migrations

Drupal 9: Setting Up Multilingual Content Views

Nearing the end of our selection for May, we have a blog post by Philip Norton on #! code explaining how to set up multilingual content views in Drupal 9. The issue with Views is that the wizard does not support multilingual content and this ends up creating lists of content with duplicated items.

As Philip shows, though, the solution is very straightforward. You need to first add the Default translation filter, then select a different rendering language, both of which can be done through the admin interface. This will also ensure that a content view will fall back to the original language if there’s no translation for it.

Read more about setting up multilingual content views in Drupal 9

Makers, takers and altruism

Topping off this month’s list, we have a post by Morpht’s co-founder and managing director Murray Woodman, who shares his thoughts and ideas regarding a recent post by Dries Buytaert on balancing makers and takers in open source.

Murray further elaborates on some of Dries’s key points with a simulation of “green beard altruism” by Justin Helps, which highlights the importance of encouraging altruistic behavior to ensure the sustainability of a project.

In the second part of the article, he explores how these lessons can be applied to the Drupal community, coming to the realization that we’re already taking a lot of the right steps, e.g. with the financial sponsorships of the Drupal Association and the contribution credit system.

Read more about altruism and sustainability in open source

This concludes our recap for May. We hope you enjoyed (re)discovering the articles! 

Jun 14 2021
Jun 14

What are Layout Paragraphs?

According to their project page, Layout Paragraphs provides an intuitive drag-and-drop experience for building flexible layouts with paragraphs. Paragraphs are the preferred method of dealing with complex content structures inside of Drupal, and Layout Paragraphs are for dealing with complex layout structures in those paragraphs. 

Layout paragraphs are meant to help separate the content and structural needs of site builders and content editors. The concept of Layout Paragraphs is not really a new idea. It builds upon a long history of layout systems including Panels, Display Suite, blocks and Panelizer, but utilizes core APIs like the Entity API and the Layout Builder API.

When using paragraphs in the past, extra paragraphs, or multiple versions of the same paragraphs, were needed in order to provide different layouts and styles for content creators. With a very plain editing experience, it could be very difficult for content creators to get an idea of how their page will look on the front end. Layout paragraphs are here to help simplify what content paragraphs are needed, and provide a better idea of what the page will look like when editing.

With some practice and refining of the steps below, it is possible to create and replicate a powerful layout and content system that your clients will love using. So let's get to it.

Setup steps

  1. Install modules and themes.
    • Modules required: paragraphs and layout_paragraphs (Layout Paragraphs depends on the Paragraphs module).
    • Optional:
      • gin (recommended admin theme. Claro and some others work well too.)
      • gin_toolbar (recommended if using gin for your admin theme.)
  2. Setup the main settings for layouts and paragraphs.
    • Visit the "Layout Paragraph Labels" settings page - admin/config/content/layout_paragraphs/labels.
      • Select Show paragraph labels.
      • Deselect Show layout labels (I find this not needed, if layouts are easily distinguishable from each other).
  3. Setup a paragraph to use as the Layout source.

    This is known as a layout paragraph.

    • Call it Layout (or whatever helps you know this is layout related). You can have multiple paragraphs that handle layouts (like one for large complex layouts and one for simple layouts inside of a layout), but it's best to keep it simple.
    • In Behaviors, set it to Layout paragraph.
    • Select the layouts that you want to have available. You will have to come back to this as you create/update/delete layouts.
      • Default core layouts are available.
      • Easy to create custom layouts using the core Layout API (a rough sketch of a custom layout plugin follows these setup steps).
    • No other fields are required on this paragraph, and nothing needed in the form display or display settings.
      • Text fields for headings and select fields with options for setting classes are things you can try to work in here, but really no other fields are needed.
  4. Setup some paragraph types that will be used in the layouts.

    These are known as content paragraphs. This part is up to you and your needs for the site.

    • A good recommendation for content paragraphs is to keep them simple. Some examples include:
      • One with just a WYSIWYG body field. 
      • One with an unlimited link field.
      • One with an unlimited media field to post single images or multiple image slideshows.
  5. Setup a layout field on a content type of your choice.

    For this I prefer to keep it simple. I will have just the title field, the layout/content field, and then rename the body field to something like "teaser/summary" and set that up to only be used in teasers and meta tag descriptions.

    • Create a field of paragraph type. I usually call it field_content or field_paragraphs.
      • Set the field settings to Unlimited.
      • Include/exclude the paragraphs that you want available in your layouts. This is something you will probably have to come back to as you add new paragraph types to the site.
      • Must include the paragraph that is acting as the Layout paragraph.
    • Set the Form display.
      • Set Widget to Layout Paragraphs.
      • Set view mode to Preview. 
      • Set the nesting depth to 0 to prevent layouts in layouts (I like to keep it simple, but it will come down to your needs and desired user experience).
      • Set require paragraphs to be added inside a layout (a layout must be chosen first before a content paragraph can be set).
    • Set the Display.
      • Set the label to Hidden.
      • Set the format to Layout paragraphs.
      • Set Rendered as to Default.
  6. Place stuff in your new layouts.

    • Create a new page of the content type with your layout field.
    • When you hover over the field, an add icon will appear. Click on it and you will be able to add a layout type paragraph.
    • When you add a layout, a modal will popup where you can set the desired layout style as well as any other options that are available (this is based on what contrib you have installed or if you have any fields on your layout paragraphs).
    • Select a layout style, like One Column, and select accept.
    • You will see a "block" representing your new layout. Hover over and you will see an add icon appear at the top and bottom of the area.
    • Click on one of those icons, and you will have the option to set one of the Content paragraphs you've made available. 
    • The paragraph fields will display in a popup. Add your content accordingly and press accept.
    • Repeat with a couple more content paragraphs.
    • Make sure to add the node title, and press save.
    • Review your new node. Things may look slightly off (or wildly off). It's based on how the CSS is working in your theme. But rest assured, all the pieces of the puzzle are in place for you to work your magic.

You can change the layout and move content around.
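As mentioned in step 3 above, custom layouts come from the core Layout API. They are most often declared in a module's *.layouts.yml file, but they can also be provided as annotated Layout plugin classes. The following is a rough sketch of the plugin-class variant; the module name, plugin ID, regions and template path are all invented for illustration, and the module would still need to supply the matching Twig template.

<?php

namespace Drupal\mymodule\Plugin\Layout;

use Drupal\Core\Layout\LayoutDefault;

/**
 * A hypothetical two-column layout that Layout Paragraphs could offer.
 *
 * The module would also need a layout--mymodule-two-column.html.twig
 * template that prints {{ content.main }} and {{ content.sidebar }}.
 *
 * @Layout(
 *   id = "mymodule_two_column",
 *   label = @Translation("Two column (70/30)"),
 *   category = @Translation("My module layouts"),
 *   template = "templates/layout--mymodule-two-column",
 *   default_region = "main",
 *   regions = {
 *     "main" = {
 *       "label" = @Translation("Main content")
 *     },
 *     "sidebar" = {
 *       "label" = @Translation("Sidebar")
 *     }
 *   }
 * )
 */
class TwoColumnLayout extends LayoutDefault {
  // LayoutDefault already renders the regions; override build() or the
  // configuration methods only if the layout needs its own settings.
}

Once the module providing it is enabled, a layout like this should show up in the layout paragraph's behavior settings alongside the core layouts, ready to be enabled for your Layout paragraph type.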

Conclusion

Many of these steps are customizable based on your needs and your Drupal development skills. I've used select fields and view mode selection fields inside of the layout paragraphs and content paragraphs to allow content creators to get creative and apply different prebuilt styles. I've played around with preprocessing, twig templates, and extra fields to build complex displays, but still allowed for a simple user interface. The possibilities are up to you.

Jun 14 2021
Jun 14

I'm a fan of configuring things for display through Drupal's admin UI. It gives site builders confidence and power. What if you want to place blocks or views listings in amongst fields on pages of content? For example, to display:

  • A listing (view) of related content, such as accessories for a product
  • A standard contact block, advert, or some other calls to action in the middle of the content, exactly where the user is best 'caught' in their journey, rather than having to stick those in sidebars or after all the content fields.
  • Some specific value(s) pulled from fields on some indirectly related entity, through a token, such as details from a taxonomy term representing the 'section' that a page is in.
  • Consistent relevant links on user profiles to take people to common destinations

Drupal's usual blocks system allows you to put these in sidebars or above/below the usual node fields, but not between them. You could use a 'heavyweight' system like Layout Builder, Panels, or Display Suite, but those tend to entirely change the way you configure or edit your content. You could get a developer to override twig templates or write custom PHP. But is there a middle ground?

Well, of course! You might have noticed some modules already allow their page additions to be moved around amongst the usual fields on content. See these rows without widget or format settings in this following screenshot, which aren't for ordinary fields at all? Wouldn't it be great to be able to add your own?

Profile display settings with 'extra fields' highlighted

This is where the Entity Extra Field module (entity_extra_field) comes in. It supports embedding blocks, views or values to be replaced via tokens. So a site builder can set these up to be managed just like ordinary fields on the page (whether it's a node, term, paragraph, or any other type of content). Each one would act as a sort of 'pseudo-field', rendered as part of a display mode amongst the ordinary fields. It also works for form modes - so you can display useful content beside existing field widgets, perhaps displaying relevant related data to editors in the places that they would be entering content about that data.

Entity Extra Field supports visibility conditions (just like blocks, but for views & tokens too) and passing & selecting contexts for blocks. These give it quite a lot of power - for example, to conditionally hide a field rather than just using Drupal's ordinary display settings for it. So I believe this module does a better job than the older EVA module (for views), and my own similar EBA module (for blocks) did. In fact, I recommend that anyone using my EBA module in D7 should use Entity Extra Field in its place when moving to Drupal 9. Here are some screenshots of its interface - first for selecting a view to add to content:

Entity extra field administrative interface for selecting a view to add to an entity display.

And for choosing a block to display - in this case, a custom one that requires a context:

Entity extra field administrative interface for selecting a context-aware block to add to an entity display.

Each 'Extra field' gets shown on all entities of the type/bundle they are configured on. So there's no need to constantly remember to add a common block or view every time you create/edit a page. If you do want to have different ones on different pages, then you should use Views Reference or Block Field. These modules provide true fields for editors to choose which view/block to display on each individual page.

The code inside Entity Extra Field uses hook_entity_extra_field_info(), which acts just like its Drupal 7 predecessor, hook_field_extra_fields(), which I've written about before. So you could write code using that to add your own page additions too - but given that blocks, views, and content accessible via tokens are possibly the most common things to embed, that suddenly feels unnecessary. Even as a developer, I'm glad to avoid writing code that would need maintaining anyway.
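For anyone curious what that hook-based approach looks like, here is a minimal sketch written as it might appear in a hypothetical mymodule.module file; the pseudo-field name and markup are placeholders, and the Entity Extra Field module itself does considerably more than this (visibility conditions, block contexts, a configuration UI).

<?php

use Drupal\Core\Entity\Display\EntityViewDisplayInterface;
use Drupal\node\NodeInterface;

/**
 * Implements hook_entity_extra_field_info().
 *
 * Registers a pseudo-field so it appears on the "Manage display" form for
 * article nodes, where site builders can reorder or hide it like any field.
 */
function mymodule_entity_extra_field_info() {
  $extra = [];
  $extra['node']['article']['display']['mymodule_related_content'] = [
    'label' => t('Related content'),
    'description' => t('Example pseudo-field rendered amongst the real fields.'),
    'weight' => 100,
    'visible' => TRUE,
  ];
  return $extra;
}

/**
 * Implements hook_ENTITY_TYPE_view() for node entities.
 *
 * Supplies the render array for the pseudo-field whenever the current view
 * mode has it enabled.
 */
function mymodule_node_view(array &$build, NodeInterface $node, EntityViewDisplayInterface $display, $view_mode) {
  if ($display->getComponent('mymodule_related_content')) {
    $build['mymodule_related_content'] = [
      '#markup' => t('Related content for %title would be rendered here.', ['%title' => $node->label()]),
    ];
  }
}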

I've been privileged to be able to contribute fixes & functionality to the Entity Extra Field project, resulting in a recent new release. My time for that was essentially sponsored by ComputerMinds and one of our clients who would use this site-building capability, especially around block contexts. So thanks to them! And of course a big thank you goes to Travis Tomka (droath) for making the module, and accepting my many issues & patches!

Photo by Y. Peyankov on Unsplash

Jun 11 2021
Jun 11

After quite some Google searching I found some answers that came close, but nothing that worked straight away. So to prevent you from puzzling and searching, here is how to get the image field URL from a referenced node in a Twig node template.

Use this code, for example, in node--blog--teaser.html.twig:

{{ file_url(node.field_REFERENCE.entity.field_IMAGE.entity.uri.value) }}

Replace 'field_REFERENCE' with your node reference field system name, and replace 'field_IMAGE'  with the system name of your image field.

And by the way: more on Twig file naming conventions in Drupal here.

Drupal theming | Planet Drupal | Written by Joris Snoek

Jun 11 2021
Jun 11

Is it possible for us to finish everything we start? Is it possible for us to achieve every milestone that we set for ourselves and stick to every new year’s resolution we make? In a perfect world it would be, but sadly we do not live in a perfect world. 

And it’s not necessarily a bad thing to take a step back from a project you know you won’t be able to finish. I started painting my room when the pandemic began as a way to ward off the boredom, and halfway through I realised painting wasn’t for me. It was too exhausting, I wasn’t even good at it, and most importantly it made me lose focus on my actual paying job. You can’t write much when you have paint all over your work desk, trust me.

So, these unfinished projects have to be taken on by someone else, right? You can’t leave the room half painted, that would be a look the 21st century isn’t ready for. So what happens? Do you hand over the paint and the brush to the person taking over and forget about it? Not exactly. 

There are a whole bunch of things that you have to relay during the handover and keep a diligent eye on the new person to ensure that he is taking the project into the desired direction. You have to have the room painted as you had initially planned, you can’t expect a subtle lavender theme to turn out to be a neon orange at the end; that’d be a catastrophe of the highest order.

Now, we won’t be talking about painting rooms throughout this article. No, what we will be talking about is the way project managers hand over business projects that are works in progress. What are the aspects they focus on during the transition of duties, so that they do not affect the project’s completion? And does the transfer actually become fruitful for the project? Finally, I’ll share some instances from OpenSense Labs, wherein our project manager had to hand over a project. So, let’s start.

The Handover Begins With Knowing the Company

If we look at project handovers, there are two scenarios that basically decide how much work it is going to take from the project manager himself. 

  • One of them is when a project is being transferred to a PM who is already a part of the organisation. 
  • And second is when a new project manager is hired within the organisation to take over an already in-progress project.

The first step we are going to discuss isn’t really necessary for the first scenario, but quite crucial for the second. And that is the knowledge of the company, its mission, its way-of-conduct and its overall cultural dimension.

Someone who has been a part of the organisation, even if only for a little while, would already be familiar with it; however, someone new would have to go through an acclimation process, and that is what this step of the project ownership transfer is all about.

Why is this acclimation important?

Because it provides perspective 

Being familiar with the organisation’s vision gives a perspective on things for the PM that he otherwise may not get. This perspective is important for things to sail smoothly throughout the remaining life of the project since it’d provide you with a purpose along with an overview.

Because it helps in communication 

Every organisation has its own culture. At OpenSense Labs, we follow the opposite of a traditional work culture with stringent rules and regulations that limit the scope of projects and employees. Liberty, openness and equality are some adjectives that would describe OSL’s office regime. This culture is directly related to how communications go down the hierarchy. Being familiar with it helps new PMs to fit in with the team and take things forward in a way it is used to.

Because it helps in decision making 

When you study the company you are going to take a project from, it would help you make better and more informed decisions without any disruptions.

A company's local processes compared to global processes | Source: Toptal

The above image talks about some of the daily decisions a PM has to make, and knowing how to make them would make his/her work a lot easier.

Familiarising Yourself With the Nuances of the Actual Project Is Next

Now, you know the company, but do you know the project and what place it has in the company’s revenue stream? Knowing that is the next stage of project ownership transfer. This is also referred to as the knowledge transfer or at least its beginning.

Your organisation is going to have a number of projects running at all times, and they could be about anything, such as helping small entrepreneurs become more successful. However, all of these projects cannot be at an equal level of prominence: some would be high priority and some would be low. Identifying the significance of your project is what you need to do first.

Once you have done that, you can start looking at your own project with a fine tooth comb. You would need to know everything about it to ensure that the outcome is what is expected. Start with the generic nature of the project. 

What is the project type, in-house or external?
What does the target audience look like?
What is the marketing strategy?
What are the competitors providing?

An answer to all of these questions will help you get a better understanding of the project. When you have that, then you have to dig deeper into the transition and learn about the change, everything preceding it and everything that has to follow.

What kind of progress has been made in the project?
Which aspects of the project are outstanding?
What tools and processes were being followed?
What are the restrictions and blockers holding that project back?

These questions are extremely important to ask as they would help you in knowing the deadlines and reaching them on time. Being familiar with all the issues hindering the project completion, be it about the team or client communication, won’t let you get blindsided, which can happen after a takeover.

Then You Get Acquainted With the Stakeholders and the Team 

In every project transfer, there are people who play a significant role in its completion. These are the people who are essentially responsible for all the work that goes into the project and its consequent success. As the new project manager, you ought to become acquainted with them from the very first day of the ownership transition because acclimating to people is the most difficult task of any process.

The Stakeholders

Starting with the stakeholders, these people are the ones who are going to directly benefit from the success of the in-progress project. It can be the client and his organisation and it can be people within your organisation, if it’s an in-house project affecting them.

Talking about the client, the focus is to make him comfortable with you and to become comfortable with him. During the entire transition, the client has to be kept in the loop. Even if the previous PM has been fired, the client has the right to know.

At OSL, we introduce the new PM to the client in phases. We ensure that once the former introduction has been completed, the new PM is always present in client calls even if he/she is not contributing anything. Even without the contributions, they’d be learning and that is what the transition is all about. After some time, the interaction is made more frequent and the new PM is involved more by making him prepare meeting agendas and answering client questions. During this time, the old PM is always there to handle any mishaps. Once those mishaps are no longer happening, it means the comfort is achieved and the new PM is given the command.

For an in-house project, the stakeholders would be the people using the end product. Because they need it, they’ll become your project’s advocates and in turn yours too. You have to capitalise on that. You should get acquainted with them and gather their feedback on the project you are delivering; testing an early version of the project with them is one option.

Every stakeholder of the project would always want it to be successful and it is up to you to get them involved to improve your chances of success.

The Team 

Then come the people whom you would complete the project with. There are three things you have to be mindful of. 

  • One is the team’s structure and hierarchy, if there is any. You should know how they operate and what the working arrangement is: remote, co-located, or both. 
  • Second is to dig a little history and know about any grievances they might have had with the previous manager or even among themselves.
  • Finally, you need to know whether the team you have is of the right size; you could be understaffed or overstaffed.

These help you become one of them and make everyone feel included by eliminating any kind of friction between you and them. Having the old PM with you during the transition can help the acquaintance process go by faster, because you’d know the kind of authority and system the team is used to, making the transition easier for them. Of course, if the PM has already been fired or there was no PM at all, that might not be possible.

Understand how human psychology works in the project management here.

Knowing Exactly What Is Required of You 

Now comes the part you will play in the project. Of course, you are going to be handling it, but where would you start delivering?

Here the first important thing to know is the reason you are taking over. The previous manager could have left the organisation or he could have been made to leave. The former scenario doesn’t really have any relation to the project itself, but the latter could and you ought to know that. If a PM was removed or fired, there has to be a reason, right? He may not have done the job in the appropriate manner or he may have mismanaged the project and even the team, whatever the case, learn about it and start rectifying from the get go. Trust me that is the first plan of action expected from you.

You can only do that once you know what exactly the role of a PM is in the organisation. By this, I mean a few things.

  • You need to be aware of the way you are going to handle the client and the team;
  • You need to be aware of the extent of your duties and whether they go beyond the scales of the project;
  • You need to be aware of the procurement process as well as vendor selection as you may have to do it at some point;
  • You also need to be aware of the way your performance is going to be evaluated, and how and by whom it will be reviewed.

A knowledge of all these aspects will only help you perform your duties better and get the project completed without any impediments.

Read our blog ‘Feature Prioritisation in Projects: How It's Done Right?’ to know more about project management and the feature prioritisation that goes in it.

The Final Handoff 

The above mentioned project handover necessities actually sum up the entire process and usually most of it is mentioned in the handover plan or document, which the old PM goes over with the new PM in due diligence. 

And it doesn’t happen overnight; it takes anywhere from a couple of weeks to a month, and the gradual nature of the handover is what makes it fruitful for the project. Taking a few steps a day by breaking the transition into pieces that are easy to comprehend at a time is essential. Another aspect that is essential is being shadowed, be it by the old PM or the team; that is what’ll help you learn the ropes faster.

You wouldn’t take the reins at once; it would come in increments of each step we discussed. 
You won’t be expected to answer the client worries from the get go;
You won’t have to deal with the developers from day 1;
You wouldn’t be expected to make a low performing project turn around at once. 

Everything would happen gradually. Once you have the apprehension of the company’s vision, the project itself, the stakeholders and the team along with everything that is expected of you, you’ll be ready to wear the PM hat and take the project on yourself. And the final handoff would be complete.

The Other Side of the Handover: OSL Handover Manual  

OpenSense Labs has successfully completed many projects in its life; however, some of these projects have been the work of more than a single project manager. There isn’t one particular reason for that. Sometimes the project manager had to hand over their work because he was leaving the organisation, and sometimes it was because he was overburdened and couldn’t give his complete attention to the project. 

While researching this blog I talked to two of our project managers, Yash Marwaha and Abhijeet Sinha, to get a better understanding of project handover. Project handovers are a two-way street; up until now we have discussed the side of the PM who will take the project forward, so now let’s look at the other side and delve into the transferring PM’s perspective.

Yash and His Handover Precision 

Yash is all precision and accuracy with a set system to make the handover as smooth as possible. The first thing he does is identify the type of project, which could be a long term engagement or support and maintenance. For him, this identification decides the timeline of the transition.

The steps that he follows usually go like this.

  • Creating a handover document and going over it with the new PM; 
  • Informing the client; 
  • Planning induction sessions with a hands-on walkthrough; 
  • Introducing the new PM to the client; 
  • Being available on calls between the new PM and the client until a comfort level is reached;
  • Finally changing the ownership when that happens.

This is a great system to follow for a handover, yet Yash has had to take back a project even after the handover had been completed. The reason was that the new PM was not comfortable with the client. Even after doing everything by the book, things may still not go as smoothly as you would have wanted. You cannot control all the variables; let’s learn that from Yash.

Abhijeet and His Handover Diligence 

While Abhijeet follows much the same steps as Yash, he doesn’t focus too much on the time, rather he focuses on diligence. What I mean is he doesn’t feel that a handover has to be confined to a specific timeline. A similar project could have been handed over in a week, but that doesn’t mean that the current ownership would go the same way. For him, when you rush things, diligence goes out the window and chaos ensues.

He has two project transfers to prove his point. 

  • He had to hand over a project, redressal of a major tourism website in Kansas City, to Yash. The handover happened within 3-4 days, pretty quick, right? The reason was that Yash was already in contact with the client making the transition as smooth as smooth could be.
  • Then there was Earth Journalism, wherein an all new PM had to be assigned the ownership. He kept her in the loop for the client and the developers. He helped in removing the friction between the new PM and the developers, which happens in every transition, at the same time he ensured that she knew the contextual needs of the project. This transition took about a month. Learn more about the work done on Earth Journalism Network by OpenSense Labs here.

Project transfers can be a tricky business. There are a lot of parties involved and all may not welcome the change. As the project transferer, you have to be patient with everyone. You have to ensure that everyone involved is in favour of the new person, if not the final handover would not be the end of it. 

There is another thing that the OSL team shared with me and that is never ever transfer ownership of a three-month project mid-way. If you are to do it, do it in the beginning itself. There is no point in bringing a new PM after two months, it’s not going to benefit anyone.

Whether you are taking up a new project or getting a handover of an ongoing one, learning through Drupal website projects can be very handy. Learn what are the infinity stones of Drupal development, how to start the Drupal project the right way, how to manage your development workflow for Drupal project, and why product mindset should be preferred over project mindset.

Conclusion

In the end, all I want to say is that people take time to learn things and perform them efficiently. Rome wasn’t built in a day, right? So, what needs to be done during a project handover is to value the learning curve. It’s going to take time and patience to make it fly. The details an ongoing project can have are quite diverse, and helping the new project manager get the hang of them is what matters. And that requires the organisation to give the new PM time, and the new PM’s effort to make that time worth it. 

Jun 10 2021
Jun 10

When you're working on your Drupal SEO, the Metatag module is important to your site because it tells the search engines what your content is all about. That's right, it's pretty much data about data. And while this seems redundant, it is incredibly important for the overall optimization of your site.

A controversy occurred during the dawn of the search engines (circa 1997) when many people abused meta tags by stuffing them full of keywords. While they were invisible to visitors, search engines gave a lot of credence to the meta tags, so it was a viable way to get to the top of the search engines.

These days, search engines use metatags less for rankings and more for determining what your website and pages are about and which keywords they best fit. Making sure these tags clearly state what your site and pages provide to a visitor will go a long way to ranking in the search engines.

There are also specialized tags that can help with local SEO and provide information that can get your products listed in the Shopping sections of the search engines, which is always helpful.

To get step-by-step instructions on how to make sure your Drupal website's metatags are properly set up, download our free Metatag module guide available within our Drupal SEO Guide section.

***

If your website has specific needs (e-commerce, local, etc.),
feel free to sign up for a free no-obligation consultation
to see if Volacci can help you with any SEO problem you might have.

Jun 10 2021
Jun 10

As a company that believes that the best outcomes are achieved when people are able to create and collaborate in open, diverse, and inclusive environments, we’ve spent the last few years strengthening Palantir’s commitment to being an equitable and just organization. We have evolved our compensation, performance, and reporting structures in an attempt to proactively identify and remove systemic barriers to equality, becoming less hierarchical and more agile. To date, these efforts have included:

  • Establishing an equitable compensation structure with defined salary levels that provide equal pay for equal responsibilities.
  • Instead of reporting to a manager, each person now has a P.O.D. (Professional and Organizational Development) team that provides facilitation and coaching in performance, growth, and development.
  • Creating a career grid that’s supported with a role-based structure that demonstrates what opportunities exist for advancement and articulates the skills and expectations for each level so that individuals and P.O.D.s are orienting learning and growth conversations around a standard for promotions and opportunities.

We know, based on our experience with the Drupal open source community, that diverse teams drive innovation and improve quality. As Drupal’s Values and Principles state, “the people who work on the Drupal project should reflect the diversity of people who use and work with the software.” We agree. And while Palantir is already one of the more diverse and representative teams in our industry, we are not yet where we want to be. We are committed to doing the work necessary to build a team that incorporates diverse experiences and strengths where everyone can bring their best selves to their work and make space for others to do so as well.  

Ironically, the tumultuous pandemic year brought tremendous stability to the Palantir team itself. Our internal commitment (which we nicknamed CODENAME Armadillo) ensured that there were no layoffs or salary reductions in 2020, which gave us individually and collectively the space to focus on what we needed to be healthy. We invested in our existing team’s ability to build the resolve and resilience we would need to reimagine and redesign what we wanted our next normal to be. 

Now as we re-emerge, we have begun adding to the Palantir team, allowing us to examine what has and hasn’t worked for us in our hiring process. Adding several positions in succession has given us the chance to experiment with this process as we seek to redesign a more effective, equitable and agile process. 

The Challenges: 

  • Workplace inequality often has its roots in hiring processes that prioritize privilege and connections over potential. Qualified candidates can be passed over if they don’t fit into patterns of what the company or those in a position to hire might have seen work previously.
    How might we create opportunities for truly interested candidates to demonstrate how they would be “culture adds”, rather than just looking for “culture fits”? 
  • In the past, Palantir’s hiring process was most successful if a candidate was already familiar with the company (and was even better if the candidate knew one of us well).  
    How might we create a scalable, equitable environment that allows everyone to experience the advantages of insider access to a Palantiri?
  • Palantir’s hiring process could be long, with three rounds of interviews (screener, team and CEO) all conducted by in-house team members with other primary responsibilities. That long duration created an unintentional slant toward those who already had jobs (as those who didn’t often took other positions while our process was still unfolding). 
    How might we accelerate the process, without compromising our ability to manage hiring in-house and involve the team?

Thinking about these questions, we have spent the last few months conducting a series of experiments designed to reduce bias in our hiring process. Here are some of the approaches we have found most successful in addressing the questions above in the initial gates of the process: the application and the initial screening conversation.

Anonymous Applications

Our efforts begin with the application stage. Our job postings include a salary range and are reviewed for gendered or biased language. When we receive resumes and cover letters, a team member who isn’t involved in the hiring decision anonymizes them, removing names, addresses, and education information that shouldn’t influence our hiring decisions. (There is software that will do this automatically for larger firms, but it doesn’t seem available to small firms and our HRIS system doesn’t yet offer this feature.)

Knowing that our process (and indeed our company!) works a little differently and can ask a lot of our applicants, we want to make sure that we’re making the process informative and valuable for them as well. To that end, we host a webinar that candidates can attend or submit questions for and watch afterward, if they cannot attend. During this webinar, they have the opportunity to learn more about Palantir and the position and may anonymously ask questions. Our hope is that this is a very safe and welcoming environment where they can begin to see what it might be like to work at Palantir.

A Conversation, not a Challenge

After the webinar, if a candidate chooses to continue pursuing the open position and Palantir chooses to invite them, they are invited to a video interview with our Employee Experience Manager and another team member. Prior to that interview, candidates are asked to share something that demonstrates a required skill for the position. 

For technical positions, they are provided with some sample code from a variety of languages and frameworks. The code samples we use are drawn from public code repositories and aren’t written by one of our team members. 

During the technical interview, the candidates are asked to choose one of the code samples and share any observations, experiences, or thoughts they may have about it. Unlike in a coding challenge, we don’t ask candidates to author code on a whiteboard, and there are no planted bugs or tricky logic to uncover in the samples we provide. The questions are scripted in advance and asked in the same order and same way for each candidate. 

The point of this exercise is to demonstrate the candidate's ability to abstract from the code level to talk about functionality, purpose, and risks. As consultants, we often need to talk about our work with both technical and non-technical audiences. 

For non-engineering candidates, we ask that they record a presentation on a topic about which they are knowledgeable and passionate, or that the candidate write, draw, or record a video about which Palantir value resonates for them. As we continue to hire people for additional roles, we will find ways for them to demonstrate their unique skills and point-of-view. 

During the first interview, the candidate can also ask questions and learn more about Palantir. If the candidate and Palantir both choose to move forward, the next interview is a team interview, followed by an interview with one of our CEOs prior to an offer being made. (Those interviews are largely unchanged from the previous hiring process.)

Preliminary Results

Our focus in this round of iteration has been on those candidate gating decision points where we are able to leverage antiracist, bias interruption research. As we engage in this process, we solicit feedback from the candidates about their experience and monitor the data at each stage (how many applications advance, are dropped or self-select out at each stage, etc.). Following each hiring cycle, we get together as a team to reflect on what we’ve learned and what experiments we might try in the next hiring cycle. 

So far, we have increased the number of applicants who were previously unknown to us (and vice versa). At each stage in the process, we have ended up with just about the expected number of candidates and overall candidate quality has been very high. The feedback we’ve received is that it is certainly an unusual process, but one that gives applicants (especially those unfamiliar with Palantir) a much better sense of us and our culture. 

Image "Scrabble - Resume" by Flazingo Photos licensed under CC BY-SA 2.0.

Jun 09 2021
Jun 09

expert metatag module install and config

https://www.drupal.org/project/metatag

Credits & Thanks

Thank you to:

About the Metatag module

The Metatag module allows you to set up Drupal to dynamically provide title tags and structured metadata, aka meta tags, on each page of your site.

what metatags look like in web code

Giving you control over your HTML title tag is the most important thing that the Metatag module does for SEO. That all-important tag is critical to your search engine ranking.

Note: It may be confusing that the Title Tag functionality resides within the Metatag module, but it makes sense from a technical standpoint. Both the HTML title tag and meta tags are placed in the header of a web page. By handling them both in the Metatag module, it requires less code and enables (slightly) faster rendering of your web pages.

Besides handling the title tag, the Metatag module programmatically creates meta tags for your website. Meta tags are snippets of text that tell a search engine about your pages. Meta tags help your SEO by communicating clearly to the search engine and social networks what each page on your website is about and how you want them to describe it in the search results. If you don’t do this, you will have to rely on the search engines to identify and classify your content. While they’re kind of good at this, it’s important enough that you don’t want to leave it to chance.
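To illustrate what "programmatically creates meta tags" means at the code level, here is a minimal hand-rolled sketch using core's page attachment hook. This is not the Metatag module's own code (the module handles this per entity type and bundle, with tokens and per-page overrides), and the module and function names here are purely illustrative.

<?php

/**
 * Implements hook_page_attachments().
 *
 * A hand-rolled sketch only: the Metatag module provides this behaviour
 * through configuration rather than custom code.
 */
function mymodule_page_attachments(array &$attachments) {
  $attachments['#attached']['html_head'][] = [
    // The render array for the tag itself.
    [
      '#tag' => 'meta',
      '#attributes' => [
        'name' => 'description',
        'content' => 'A short, keyword-aware summary of this page.',
      ],
    ],
    // A unique key so other code can alter or replace this tag.
    'mymodule_meta_description',
  ];
}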

Global metatag suggestion screenshot

Because the Metatag module is so important, and there are many nuances to configuration, the documentation on this module is rather extensive. For this reason, we have created a free download that you can use to follow along to install, enable, and configure with the same base settings Volacci uses for all our clients.

Fill out the form below, and we'll send your copy right away!

Need help with your Drupal SEO?
Contact Volacci and we'll set up a no-obligation consultation.

Jun 09 2021
Jun 09

Do you ever stop to think about how many "things" make up the internet? 

Not necessarily websites and social media networks, but instead the individual pieces of information. Every button. Every callout. Every image. Every teeny-tiny item description in your shopping cart.

A long time ago, if you wanted to write about something on the internet, you had to create it and publish it. And if you wanted to write about it again somewhere else, you had to create it again and publish it again. 

But with "create once, publish everywhere" the manual weight of creating multiple things, publishing multiple places, and tweaking little bits of HTML across a landscape of pages is lifted. You get the most out of your content creation efforts.

What is create once, publish everywhere (COPE)?

Simply put, COPE is structured content. Rather than creating content multiple times across multiple pages, you instead create it and manage it in one place, whether you’re publishing it for the first time or the thousandth.

COPE was pioneered by National Public Radio (NPR) in a bold redesign to make it easier to share their multimedia content across devices, social media, email marketing, and more. For the purposes of this discussion, we’re focusing on publishing across your website.

Let’s start with a small example. Pretend you’re opening a new shoe store in your town. You really want people to visit, so you decide you’ll publish your business address and hours on every page of your website. 

But your website has 20 pages. And what if you change your business hours to be open later during the holiday season? That means you have to change that information 20 times. 

Twenty. 

But with COPE, you create a block or panel that holds your address and hours, and you tell your content management system (CMS) that you want that block to appear on every page of your website. Now when your business hours change, you only have to change that one block, and it updates across all the pages of your website with a single click.

And maybe today your website is 20 pages, but it’ll grow with more products, to 100 or 150 pages. That block means it will publish across every new page without having to be written from scratch or manually added, time and time again.

Where you’ve seen COPE

You may not realize it, but you run into this type of content across many websites you engage with, such as:

  • Entertainment websites: Upcoming episodes, shows, or movies that might interest you
  • Health care websites: Related doctors, clinic locations, or blog articles
  • News websites: Related headlines or news 
  • Recipe websites: Related recipes or blog posts
  • Retail websites: Recently viewed products or products that may interest you

COPE allows website administrators to set certain content types and attributes, or fields, which appear to your web visitors at different points in their journey. The same content or product can be seen in search listings, on cart pages, as a featured product on the front page, or in the "customers also bought" section. Different contexts, but the same content.

For example, if you’re looking online for a pair of red shoes, you may want to find something in your size (size 10) and the color you prefer (red). A website that offers these filters on its search page has categorized its content with taxonomy.

So, with these filters selected, you find a pair of retro-style Converse sneakers. You open the page to learn more about the shoes and find a description, price, and reviews. All of those things are part of the structure of that content. The website can display the same shoe on your “size 10, red” search, as well as someone’s “size 7, casual” search. 

That red pair of Converse lives on a product page that’s categorized and built in a way so that it only needs to exist once. Wouldn’t it be terrible if you had to publish and maintain an entirely new “Converse shoe” page for each size? 

COPE separates content and design

COPE doesn’t only mean you have far fewer things to keep track of when a piece of content needs to be updated or maintained, but it also separates content from design. 

Wait, what?

Yes. The process of structuring your content means you’re not reliant on how it looks on the website. It can, in fact, make internal content governance easier because you’re taking the subjective visual “feelings” away from the substance of the words. 

Let's go back to the Converse shoes example. As a website administrator for the shoe store, you're tasked with creating a page for every shoe you offer. In setting up the page, you have to include:

  • Shoe brand
  • Shoe name or model
  • Picture of the shoe
  • Bulleted description of the shoe’s features
  • Price of the shoe
  • Rating of the shoe, with reviews from customers

Whether your shoe is a red Converse or a blue pair of high heels, each product should be able to fill in all of these attributes.

Now enter a stakeholder who says there needs to be a field for the shoe material because some of the nicest shoes are leather. But can a materials field be used across all the shoes you offer? It sure can!
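To make the idea concrete, here's a minimal sketch of that structured shoe record as plain data (field names and values are illustrative and not tied to any particular CMS):

    <?php

    // One structured "shoe" item, created once. Every channel -- product page,
    // search results, email teaser, social post -- pulls only the fields it needs.
    $shoe = [
      'brand'    => 'Converse',
      'model'    => 'Chuck Taylor All Star (retro)',
      'image'    => 'https://example.com/images/red-chucks.jpg',
      'features' => ['Canvas upper', 'Rubber sole', 'Classic high-top cut'],
      'price'    => 64.99,
      'rating'   => 4.7,
      'material' => 'Canvas',   // The new field the stakeholder asked for.
      'sizes'    => [7, 8, 9, 10],
      'color'    => 'Red',
    ];

    // A search teaser and an email teaser reuse the same record, not a copy of it.
    $searchTeaser = [$shoe['brand'], $shoe['model'], $shoe['price']];
    $emailTeaser  = [$shoe['model'], $shoe['image'], $shoe['rating']];

Because every channel reads the same record, updating the price or the new material field in one place updates it everywhere.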

And while historically, things like this always focused around “what it would look like” before “what it would say,” using structured content means the words hold just as much power as the design.

Structured content and a COPE model make it possible for you to grow and expand the content you offer across your site as the needs of your organization or audience change. A consistent experience makes it easy for people to find what they need quickly and the information to help them make their decision (or, in this case, purchase).

COPE makes cross-platform possible

You’ve built your website, and all your pages are tagged with taxonomy and are ready to share with the world. Time to launch.

Great! But your website being a destination for your brand means you want to build roads for people to reach you. Marketing has entered the chat.

With COPE, you don’t only structure content in a way that makes it easy to govern and manage that content in your CMS, but you also create a structured way to share that content across other channels.

Maybe your social posts use the product's short description and a thumbnail image of the shoes. Or maybe your email newsletters tease the top-rated shoes of the week, with a thumbnail and their star rating shown. 

COPE makes it easy to keep all of this information stored in one place. It also makes it easier to share your content across different channels in any way you need.

This marketing approach isn't only less of a headache for your website administrators; it's less of one for your whole organization. It makes your business and brand more agile, and in the long run, more efficient and successful.

Before you start COPE-ing

If you’re jazzed to get started, we’re jazzed for you, too. But before you go rushing off to your web team with this new idea in hand, let’s talk about what you should have in place before you get started. 

  1. Know what you have and what you need. A content inventory is the perfect way to start: understand your current digital structure and presence, both what you have and what you need.
  2. Identify internal (and external) resources. COPE content becomes a piece of cake over time, but it might take a lot of decision-making and creation to give it legs upfront. Know if you have the team ready to take the project on or if you can outsource to an agency or partner with a vendor to get started.
  3. Get stakeholders on board early. You’ll need their input when the project kicks off and gets moving, so have conversations early and show how COPE can eliminate waste, drive sales, and encourage cost-efficiency. Stakeholders and internal experts can also help you start defining your ubiquitous language, which is integral to successful COPE.
  4. Set some high-level goals. Mostly, identify where your analytics are and where you want to go, but don’t make them your only focus. Set a SMART (specific, measurable, achievable, relevant, and timely) goal around your web traffic, conversions, and sales as a starting point.
  5. Document your plan forward. You’re not done with COPE when you hit ‘Publish’ on your website. The web is never done, and neither is your work. If you have a team, document how often you’ll meet to plan social posting, blogs, or other marketing efforts. Identify when and how content will be governed across the site, including how often and with what stakeholders or approvers. Answer these questions and write them down.

Most importantly, these discovery steps will (and should) lead your team or partner toward a domain model and, eventually, content models. Both of these help map the entire ecosystem of your website and plan for what type of pages, layouts, and templates are needed to present the information to the end-user. 

Get more content-first tips for preparing for COPE with the Content Marketing Institute.

Use COPE to master your destiny

It’s a long, winding road. Sometimes you’ll hit dead ends. Sometimes it’ll be a downhill breeze. But COPE is a worthwhile adventure for you and your team to explore as a way to more efficiently and effectively manage your website and user experience. 

If you’re ready to get started with a COPE model and need a digital partner who can lend a hand, reach out to our Lullabot team to get started.

Jun 09 2021
Jun 09

A small leak can sink a great ship. ~ Benjamin Franklin

In the previous blog post, we covered the basic setup and configuration for Mautic plugins that leverage the IntegrationsBundle. The key part of any integration is handling the authentication mechanism.

So in this blog post, we will cover the various authentication types and use one of them in the plugin we built last time, continuing to develop the same plugin.

The IntegrationsBundle in Mautic core supports multiple authentication providers, such as API-based authentication, Basic Auth, OAuth1a, OAuth2, OAuth2 two-legged, and OAuth2 three-legged. It exposes each of these authentication protocols as a ready-to-use Guzzle HTTP client.

In this blog post, we will implement Basic Auth authentication with a third-party service.

The following steps enable our plugin to have the Basic Auth authentication:

  • Have a form with fields for storing the Basic Auth credentials (Form/Type/ConfigAuthType.php).
  • Prepare a “Credentials” class to be used by the Client class.
  • Prepare a “Client” service class to be used by a dedicated APIConsumer class.
  • Use Client service and implement API call-related methods in APIConsumer service class.

Step 1

The plugin depends on third-party APIs for the data it manipulates, and those APIs are gated behind authentication and authorization mechanisms. For the purposes of this post, we have chosen Basic Auth as the authentication method.

Basic Auth needs a username and password to communicate with the API. So we need a form that accepts and stores these credentials, which are required whenever we connect to the API endpoints.

Let's create a form and name it "ConfigAuthType.php" under the "MauticPlugin\HelloWorldBundle\Form\Type" namespace. This class extends Symfony's AbstractType class, and we implement the buildForm() method to add the required fields. The example code should look like this:
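Here is a trimmed sketch of such a form (field names, labels, and options are illustrative; see the full version linked below):

    <?php

    namespace MauticPlugin\HelloWorldBundle\Form\Type;

    use Symfony\Component\Form\AbstractType;
    use Symfony\Component\Form\Extension\Core\Type\PasswordType;
    use Symfony\Component\Form\Extension\Core\Type\TextType;
    use Symfony\Component\Form\FormBuilderInterface;

    /**
     * Collects the Basic Auth credentials on the integration's config form.
     */
    class ConfigAuthType extends AbstractType
    {
        public function buildForm(FormBuilderInterface $builder, array $options)
        {
            // Username half of the Basic Auth credential pair.
            $builder->add('username', TextType::class, [
                'label'    => 'Username',
                'required' => true,
            ]);

            // Password half; rendered as a password input so it is not echoed back.
            $builder->add('password', PasswordType::class, [
                'label'    => 'Password',
                'required' => true,
            ]);
        }
    }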

You can see the full version here.

It's now time to tell Mautic to pick up this form during configuration. To do so, we define an integration service class implementing ConfigFormInterface and ConfigFormAuthInterface. ConfigFormAuthInterface is the interface that lets you specify the configuration form via the getAuthConfigFormName() method.

We name this class "ConfigSupport" and place it under "MauticPlugin\HelloWorldBundle\Integration\Support". Here are the relevant snippets from the ConfigSupport class:
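Here is a trimmed sketch (the other methods required by ConfigFormInterface are omitted, and the interface namespaces are assumed from Mautic's IntegrationsBundle; see the full class linked below):

    <?php

    namespace MauticPlugin\HelloWorldBundle\Integration\Support;

    use Mautic\IntegrationsBundle\Integration\Interfaces\ConfigFormAuthInterface;
    use Mautic\IntegrationsBundle\Integration\Interfaces\ConfigFormInterface;
    use MauticPlugin\HelloWorldBundle\Form\Type\ConfigAuthType;

    class ConfigSupport implements ConfigFormInterface, ConfigFormAuthInterface
    {
        /**
         * Tells the IntegrationsBundle which form renders the "Auth" tab
         * of the integration's configuration.
         */
        public function getAuthConfigFormName(): string
        {
            return ConfigAuthType::class;
        }

        // ...the remaining methods required by ConfigFormInterface live in the
        // full class linked below.
    }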

You can find the complete ConfigSupport class here.

Time to let the IntegrationsBundle know about our "ConfigSupport" class. To do so, register it as an integration service: a service listing tagged with mautic.config_integration. The following is the relevant snippet of Config.php (the plugin configuration file):
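It looks roughly like this (the service key and plugin name shown are illustrative):

    <?php

    // plugins/HelloWorldBundle/Config/config.php (excerpt)
    return [
        'name'     => 'Hello World',
        'services' => [
            'integrations' => [
                // Tagged so the IntegrationsBundle picks the class up as a
                // configurable integration.
                'mautic.integration.helloworld.configuration' => [
                    'class' => \MauticPlugin\HelloWorldBundle\Integration\Support\ConfigSupport::class,
                    'tags'  => ['mautic.config_integration'],
                ],
            ],
        ],
    ];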

Now, at this point, we have the following things ready:

  • A service class to register the configuration support class.
  • A class to provide the configuration.
  • You can view all the code changes for step 1 in this commit.

Step 2

For Basic Auth, the IntegrationsBundle uses the "HttpFactory" class to build the HTTP client. This factory needs a "Credentials" object consisting of all the keys required for authentication.

If you look at the getClient() method of the HttpFactory class under the "Mautic\IntegrationsBundle\Auth\Provider\BasicAuth\" namespace, you'll see it expects an object implementing "AuthCredentialsInterface."

So our next step will be to create a separate class for credentials and create a new custom client to use those credentials.

For that, create a new class called "Credentials" under MauticPlugin\HelloWorldBundle\Connection.

The class should look like the one given below:
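Here is a trimmed sketch, assuming the BasicAuth provider's credentials interface asks for a username and a password (see the full version linked below):

    <?php

    namespace MauticPlugin\HelloWorldBundle\Connection;

    use Mautic\IntegrationsBundle\Auth\Provider\BasicAuth\CredentialsInterface;

    /**
     * Value object holding the Basic Auth credentials used to build the client.
     */
    class Credentials implements CredentialsInterface
    {
        /** @var string */
        private $username;

        /** @var string */
        private $password;

        public function __construct(string $username, string $password)
        {
            $this->username = $username;
            $this->password = $password;
        }

        public function getUsername(): string
        {
            return $this->username;
        }

        public function getPassword(): string
        {
            return $this->password;
        }
    }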

This is a trimmed version of the class, and you can find the full version here.

Now that we have completed the Credentials class, we need to create a client that will make the HTTP requests. Typically, we wouldn't need a separate client class if there were no additional logic to handle; in such cases, we could just call the HttpFactory class and get the client directly, like so:
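Something along these lines (a sketch; variable names are illustrative, and the factory may also accept an optional configuration object):

    // Build the Guzzle client straight from the IntegrationsBundle factory.
    $credentials = new Credentials($username, $password);

    /** @var \GuzzleHttp\ClientInterface $client */
    $client = $httpFactory->getClient($credentials);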

In our case, apart from fetching data, we also need to cache it and shape it so it can easily be mapped to Mautic's Lead entity.

So we will create a new class called “Client” under the namespace MauticPlugin\HelloWorldBundle\Connection.

The job of the "Client" class is to build and hand back a ClientInterface (\GuzzleHttp\ClientInterface) instance.

If you need the full class details, you can follow this link. Because we are kind and want to share more, we will quickly review the few methods that interest us and that work with the Credentials class we wrote previously.

In the getClient() method, we call getCredentials(), which creates a "Credentials" object from the stored API keys.

Using that credentials object, we then get the client via the HttpFactory service call, as sketched below:
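A sketch of those two methods (property names and the way the API keys are read from the integration configuration are assumptions; see the full class for the real wiring):

    // Inside MauticPlugin\HelloWorldBundle\Connection\Client.

    private function getCredentials(): Credentials
    {
        // API keys as saved by the integration's configuration form.
        $apiKeys = $this->config->getApiKeys();

        return new Credentials($apiKeys['username'], $apiKeys['password']);
    }

    public function getClient(): ClientInterface
    {
        // Hand the credentials to the BasicAuth HttpFactory, which returns a
        // ready-to-use \GuzzleHttp\ClientInterface.
        return $this->httpFactory->getClient($this->getCredentials());
    }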

So at the end of this phase, we have the following things:

  • A Credentials object to pass into the getClient() method.
  • A new Client class that wraps the get() method and pulls in the other configuration it needs.
  • A new Config.php file inside the "HelloWorldBundle/Integrations" folder to hold configuration and other integration settings.
  • The commit for this step.

Step 3

We are now ready with the entire setup to store credentials and send requests. It's time to use it in whichever class needs it.

In our current plugin, we have created a separate class called "ApiConsumer." The reason: we have several other get methods and API calls, and consolidating all the API methods into a single class makes them easier to manage.

To use the Client class created in Client.php, we register it as a service. That way, we can reuse it anywhere without worrying about how it is wired together.

Create a new service called "helloworld.connection.client" and add it to Config.php in the "other" services section.

Similarly, we add a service definition for the ApiConsumer class so it can be called from other services, as shown below:
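The "other" services section then looks roughly like this (the constructor arguments and the ApiConsumer service name are assumptions; wire in whatever your constructors actually need):

    // Config/config.php (excerpt)
    'other' => [
        'helloworld.connection.client' => [
            'class'     => \MauticPlugin\HelloWorldBundle\Connection\Client::class,
            // Service ids for the arguments are assumptions.
            'arguments' => [
                'mautic.integrations.auth_provider.basic_auth',
                'helloworld.integration.configuration',
            ],
        ],
        'helloworld.consumer.api' => [
            'class'     => \MauticPlugin\HelloWorldBundle\Connection\ApiConsumer::class,
            'arguments' => ['helloworld.connection.client'],
        ],
    ],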

You can refer to the source code to view the entire ApiConsumer class. Here is a snippet of the get() method:
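A sketch of that method (the endpoint path and response handling are assumptions; it presumes Client::get() returns a PSR-7 response):

    // Inside MauticPlugin\HelloWorldBundle\Connection\ApiConsumer.

    public function get(): array
    {
        // Delegate the HTTP call to our Client service (Client.php).
        $response = $this->client->get('/contacts');

        // Decode the JSON payload into an array the sync code can work with.
        return json_decode((string) $response->getBody(), true) ?? [];
    }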

As you can see, we use the injected Client service directly and call the get() method defined in Client.php.

So at this point, we are done with the third step, where we used our authentication mechanism to fetch the data from the API.

You can refer to the commit to see the code for this step.

Conclusion

Now that the plugin is ready to communicate with a third-party API and churn out more leads, let us thank the IntegrationsBundle's authentication support.

You can read about the different supported authentication methods here.

We also have a third blog post coming up about how to manage and sync the data coming from the API. So stay tuned!
