Apr 29 2021

Hello there! In this new post I want to focus on a very interesting Drupal topic: its extension capabilities through the Plugins system. It is not a very extensive topic, but it is also true that there is not much documentation about it. It is a very common case when you are building certain kinds of Drupal Blocks in order to render specific data and you need their visualization and behaviour to respond to special conditions.

Picture from Unsplash, user Benjamin Child, @bchild311

Table of Contents

1- Introduction
2- What are Condition Plugins
3- Existing Condition Plugins in your Drupal installation
4- Available Condition Plugins in Drupal Contrib Modules
5- Building your own Condition Plugin
6- :wq!

1- Introduction

Every Drupal Site Builder works with blocks: blocks are a basic and essential piece of functionality in Drupal and a very important building element in Drupal-based projects. One of the most important phases of working with blocks in Drupal is managing the rules for the visibility of this important resource: we need to determine when to show it and when to hide it.

We have all had, at one time or another, to work with visibility conditions for a Block. It happens when we're using the configuration page of a Block in Drupal, seeing something like this:

Basic visibility conditions for Blocks in Drupal

Well, for this post I was thinking of writing about how to expand these visibility conditions for our own custom needs in a Drupal project. We're going to talk about the Condition Plugins in Drupal.

2- What are Condition Plugins?

Although there is not a lot of information on this subject, nor does it form part of the documented APIs of Drupal, we can assemble interpretative pieces about how this small extensible sub-system based on Drupal Plugins works.
Basically, we can say that Condition Plugins are an extensible way to generate new visibility conditions for Blocks in Drupal, implemented as Plugins that you build following the Drupal rules for Plugins.

Condition Plugins are context-aware (many of them require explicit context in their annotation blocks), but let's see some information located within the Drupal documentation:

From the Condition Plugin System, as was initially described in 2013:

“To implement a condition in a module create a class in {module}/src/Plugin/Condition/{ConditionName}.php and extend ConditionPluginBase (which implements ConditionInterface). The class must also declare a plugin annotation in its docblock comment."

And about the Contexts, from the Plugin Contexts Definition:

“Sometimes plugins require another object in order to perform their primary operation. This is known as plugin context. Using a practical example, almost all condition plugins require a context. Let’s look at the NodeType condition’s plugin definition […] The context_definitions key stores an array of named context definitions the condition requires in order to perform its “evaluate()” method."

Actually, the Condition Plugin is not a complicated concept, but it is true that you need to know some basic previous steps quite well before reaching satori here. These are key concepts for Drupal development, and you may need a deep understanding of topics such as Plugins, Annotations, JavaScript libraries and Drupal Behaviors.

3- Existing Condition Plugins in your Drupal Installation

You can discover some existing Condition Plugins available in your Drupal installation from different core modules:

Current Condition Plugins in Core: NodeType (node module), RequestPath (system module) and UserRole (user module).

These Plugins are the three basic items available by default in a Block visibility configuration, I mean:

Condition Classes related to basic visibility options

But there are a few more in other locations (in addition to some used in test classes), such as CurrentThemeCondition in the system module or the Language condition in the language module.

And the central Interface for Conditions is ConditionInterface, in the Drupal\Core\Condition namespace.

4- Available Condition Plugins in Drupal Contrib Modules

As in Drupal Core you can find Condition Plugins in Contrib modules too. For instance, there are some modules developed by Cambrico with many resources within:

And one of the biggest contrib modules of Drupal -Webform- also provides its own Plugin for Conditions:

5- Building your own Condition Plugin

Ok, now let’s go to build something useful. We already know conceptually what they are, what they do and where cand find them. So is the time to implement Condition Plugins!.

Our Goals

For this case, I have come up with a very simple and straightforward idea:
We want to show a block only for nodes of type "Article" that have a specific field checked.

In order to prepare this, I took the quickest route:

  • First I added a new field to the "Article" content type, the new "field_selected_article_check".
  • Then I created a new custom block "My Block" in the easiest way, just by clicking /block/add.
  • Finally I implemented the scaffolding for a new custom module, called "visibility_conditions" (see the minimal .info.yml sketch below).

Placing Block and new field for the example
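By the way, the scaffolding for the custom module can be as small as a single info file. Here is a minimal sketch of what visibility_conditions.info.yml could contain; the name, description and version constraint are placeholders I chose for the example, not copied from the final module:

name: 'Visibility Conditions'
type: module
description: 'Provides a custom Condition Plugin for block visibility.'
package: Custom
core_version_requirement: ^8.8 || ^9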

We’re going to make a new folder in path:
/modules/custom/visibility_conditions/src/Plugin/Condition/

and creating the new file SelectedArticle.php This will be the new Plugin class and will have all the basic resources. Let’s see.

Annotations

In order to create a new vertical tab in the block configuration page, we have to register our new Plugin, and in Drupal, Plugins use annotations to register themselves and provide all the relevant information for indexing. For instance:

/**
 * Provides a condition for articles marked as selected.
 *
 * This condition evaluates to TRUE when it is in a node context, the node is
 * of the Article content type, and the article was marked as selected in a field.
 *
 * @Condition(
 *   id = "selected_article",
 *   label = @Translation("Selected Article"),
 *   context_definitions = {
 *     "node" = @ContextDefinition("entity:node", label = @Translation("node"))
 *   }
 * )
 */

What about context? Well, it's a way to know where our resource is running. Some Drupal Plugins require context information to operate. In this case, we need to know whether our Block is inside a node corpus or not.

For the next lines, we have to extend the ConditionPluginBase class and implement the ContainerFactoryPluginInterface interface:

class SelectedArticle extends ConditionPluginBase implements ContainerFactoryPluginInterface {

And this is the most relevant information from the top of our new class (as well as the 'use' statements it needs).
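Because the class implements ContainerFactoryPluginInterface, it also needs a create() factory method (and a constructor) so Drupal can instantiate the plugin through the service container. Here is a minimal sketch of how the top of the class could look; the use statements and the empty constructor are my assumption of a bare-bones setup, not code copied from the final module:

namespace Drupal\visibility_conditions\Plugin\Condition;

use Drupal\Core\Condition\ConditionPluginBase;
use Drupal\Core\Form\FormStateInterface;
use Drupal\Core\Plugin\ContainerFactoryPluginInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;

class SelectedArticle extends ConditionPluginBase implements ContainerFactoryPluginInterface {

  /**
   * Creates a new SelectedArticle instance.
   */
  public function __construct(array $configuration, $plugin_id, $plugin_definition) {
    parent::__construct($configuration, $plugin_id, $plugin_definition);
  }

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition) {
    // Nothing is injected in this sketch; add services here if the condition needs them.
    return new static($configuration, $plugin_id, $plugin_definition);
  }

}

If the condition never needs injected services, the constructor could be dropped, but this shape leaves room to add dependencies later.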

Configuration

Ok, our next step is to define the configuration used by our new Plugin: which values are important and how we are going to get them.
For the first need, we override the basic defaultConfiguration() method from the parent class. This will set the initial value of the new field in the tab:

  /**
   * {@inheritdoc}
   */
  public function defaultConfiguration() {
    // This default value will mark the block as hidden.
    return ['show' => 0] + parent::defaultConfiguration();
  }

  /**
   * {@inheritdoc}
   */
  public function buildConfigurationForm(array $form, FormStateInterface $form_state) {
    // Build a checkbox to expose the new condition.
    $form['show'] = [
      '#title' => $this->t('Display only in Selected Articles'),
      '#type' => 'checkbox',
      // Use the previously saved configuration value as the default.
      '#default_value' => $this->configuration['show'],
      '#description' => $this->t('If this box is checked, this block will only be shown for Selected Articles.'),
    ];

    return parent::buildConfigurationForm($form, $form_state);
  }

And then we implement the submit handler:

  /**
   * {@inheritdoc}
   */
  public function submitConfigurationForm(array &$form, FormStateInterface $form_state) {
    // Save the selected value to configuration.
    $this->configuration['show'] = $form_state->getValue('show');
    parent::submitConfigurationForm($form, $form_state);
  }

After that, we've built an extension of the block's general visibility configuration form, adding a new item called show which contains the new element: a checkbox to select our new condition (or not).

Evaluate

Now we need a way to measure whether our new condition applies, so we use a method inherited from the parent class to evaluate the condition, returning TRUE or FALSE. It comes from the ConditionPluginBase class, which implements ConditionInterface:

  /**
   * {@inheritdoc}
   */
  public function evaluate() {
    // If the condition is not enabled (and not negated), always return TRUE
    // so this condition does not disable blocks that are not using it.
    if (empty($this->configuration['show']) && !$this->isNegated()) {
      return TRUE;
    }

    // Get the node from the context value.
    $node = $this->getContextValue('node');

    // Then check whether the Article node has the Selected Article field checked.
    if (($node->getType() == "article") && ($node->hasField('field_selected_article_check')) && ($node->field_selected_article_check->value)) {
      return TRUE;
    }

    // Finally, if the Selected Article field is not marked, hide the block.
    return FALSE;
  }

As you can see in the block above, the first part ensures that if the condition is not checked (and not negated), TRUE is returned and the block is displayed. This guarantees that, by default, the block remains visible in general terms until we explicitly limit its visibility with this condition.

Summary

One more time, we have a method available from the parent class ConditionPluginBase and required by ConditionInterface: a method that exposes a description of what the condition is doing. These descriptions will not be shown in the vertical tabs of the block configuration page; they are only available from code.

  /**
   * {@inheritdoc}
   */
  public function summary() {
    // We have to check three options:
    // 1- Condition enabled.
    // 2- Condition enabled and negated.
    // 3- Condition not enabled.
    if ($this->configuration['show']) {
      // Check if the 'negate condition' checkbox was enabled.
      if ($this->isNegated()) {
        // The condition is enabled and negated.
        return $this->t('The block will be shown in all Articles except the Selected.');
      }
      else {
        // The condition is only enabled.
        return $this->t('The block will be shown only in Selected Articles.');
      }
    }
    // The condition is not enabled.
    return $this->t('The block will be shown in all Articles.');
  }

Some Further Details

In the previous section we took care of preparing a summary, but this is only available at code level; it's not visible in the GUI. Could we improve the usability of our new element by adding this summary? Yes, it's possible, and for this we can rely on a jQuery function in Drupal (sorry).

Let me introduce the function "drupalSetSummary". I discovered it while looking up problems on StackExchange and then identified it in real cases, like the JavaScript added for nodes in Drupal core. I haven't found much information about it, so I can't explain much more than this: "it serves to place summaries" ¯\_(ツ)_/¯. Well, ok.

The important thing is that I was playing with this function and it seems to work well for inserting elements. The pre-conditions are that the JavaScript must be added to our custom module and that we must use the Drupal Behaviors format. Please review this Drupal - JavaScript integration guide: Drupal Behaviors in order to get more info about the Drupal Behaviors format and how to implement it, and from this GitHub gist you can play with some examples of Drupal Behaviors.

File: visibility_conditions.libraries.yml:

block_special_articles: 
  js: 
    js/selected_articles_conditions.js: {}
  dependencies: 
    - block/drupal.block
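Note that defining the library is not enough on its own: it still has to be attached somewhere so it loads on the block configuration page. One simple option (this is my assumption, not necessarily how the original module wires it up) is to attach the library from the plugin's buildConfigurationForm() method:

  /**
   * {@inheritdoc}
   */
  public function buildConfigurationForm(array $form, FormStateInterface $form_state) {
    // ... the 'show' checkbox is built here, as shown earlier ...
    // Attach the JavaScript that fills in the vertical tab summary.
    $form['#attached']['library'][] = 'visibility_conditions/block_special_articles';

    return parent::buildConfigurationForm($form, $form_state);
  }

An alternative would be a hook_form_alter() on the block form in the .module file; either way, the library name must match the key defined in visibility_conditions.libraries.yml.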

File: selected_articles_conditions.js
The Drupal Behavior:

  /**
   * Provide the summary information for the block settings vertical tabs.
   *
   */
  Drupal.behaviors.blockSettingsSummarySelectedArticles = {
    attach: function (context) {
      // Check if the function drupalSetSummary is available.
      if (jQuery.fn.drupalSetSummary !== undefined) {
        // Add the summary to the vertical tab.
        jQuery('[data-drupal-selector="edit-visibility-selected-article"]', context).drupalSetSummary(checkSummary);
      }
    }
  };

And the function that builds the summary strings:

function checkSummary(context) {
  // Check if the condition has been selected.
  var selectedCondition = jQuery(context).find('[data-drupal-selector="edit-visibility-selected-article-show"]:checked').length;
  // Check if the negate condition has been selected.
  var selectedNegate = jQuery(context).find('[data-drupal-selector="edit-visibility-selected-article-negate"]:checked').length;

  // Review the scenarios.
  if (selectedCondition) {
    if (selectedNegate) {
      // Condition and negation were selected.
      return Drupal.t("The block will be shown in all Articles except the Selected.");
    }

    // Only the condition was selected.
    return Drupal.t("The block will be shown only in Selected Articles.");
  }

  // The condition has not been enabled and is not negated.
  return Drupal.t("The block will be shown in all Articles.");
}

Now you need to enable the new custom module with drush en -y visibility_conditions from your command line. Clear caches with drush cr and… that's all! Interesting? See the new Condition Plugin in action:

Rendering new Visibility Condition Plugin

And the JavaScript rendering the summary info:

It's true that this requires basic knowledge of other Drupal elements (Annotations, Plugins, JavaScript libraries, Drupal Behaviors…), but in itself the result is quite interesting. Now we can build our own visibility condition Plugins!

Rendering new Visibility Condition Plugin

I have uploaded this test module to my custom modules repository in GitLab for testing, so you can download it from there. And remember, don't use these modules in production environments!

6- :wq!


Apr 28 2021

Welcome on board! We are happy and thrilled (!) to have you as a Liiper! Our onboarding program starts the day you sign your work contract. This is what you can expect.

Starting at a new company is a huge challenge. We feel you – been there, done that! ;-) That is why we have an onboarding program in place, which makes you feel part of the Liip family from day 1, hopefully. Your recruitment process was a success, let's make your onboarding great too. So let’s jump right in!


Meet your onboarding buddy

Céline, Hendrik, Janina, Stefan... these are four of the almost twenty onboarding buddies at Liip spread over all the six locations. And you will get a buddy too! Your onboarding buddy is your go-to person, your sparring partner for social and cultural aspects at Liip.

They are usually based in the office you are joining, while not being part of your work team. Additionally, you get a work buddy for work-related topics, who tells you all about the projects you will work on. Responsibility and autonomy are key at Liip. We will support you to find your way and embrace our values.

Your buddies will guide you through the onboarding program, providing you with an overview of the company, our processes and tools. Together, you will define your trial period goals and regularly meet to reflect and share impressions and feedback.

Last but not least, social gatherings! They are part of the #liipway, and your onboarding buddy aims to make you feel welcome at the (online) apéro, game session or yoga class – if you wish to take part. We are striving for a successful and pleasant integration into Liip for you.

At the end of your three-month trial period, you will be invited to the so-called “trial period meeting”. This is a dedicated time to identify what has been moving you forward, what has been holding you back and what opportunities and challenges might arise. You will self-reflect and get feedback from peers as well as your onboarding buddy. In other words, this is our ritual to finalize the onboarding process and confirm your hiring.

Go with the flow

Everything is organized so that you can “just” surf the wave. Technically speaking, you will get what you need to work, such as a brand-new laptop, an ergonomic desk and chair, a monitor, a keyboard, a mouse, etc. All of it is organized on-site or sent to your place (maybe not the desk nor the chair ;-) ) in case you start your onboarding program remotely.

Thanks to a clear and easy to follow checklist (aka your onboarding kanban board), you have an overview of your progress within the onboarding program at any time. Where to get support, how to contribute to the external communication of Liip, how our salary system works, and so much more will be taught to you.

Trainings are part of the program too. Holacracy, Agility and Feedback trainings are on the menu. And you will learn more about your personal education budget.

Take part in the #liipway

All you need is curiosity and enthusiasm. Let yourself be guided, and enjoy the ride! We can only advise you to participate in social and health activities to get a taste of the #liipway, either on-site (as soon as this will be possible again) or online like coffee breaks, game sessions, yoga or boot camp courses, ...and apéros! Our tool meet-a-liiper-with-donut makes meeting new work colleagues from all over Liip, even during remote times, a walk in the park.

Don’t be shy and dare to ask questions (all of them!). How you experience the onboarding program has the utmost importance to us. Please give us your honest feedback, so that we keep on improving it.

Apr 28 2021

Ok, maybe the title is a bit ambitious, but here's a little proof-of-concept video of an idea for sub-theming I have been working on.

tl;dr

  1. Create a theme.
  2. Use CSS variables for as many CSS properties as you can - spacing, line-height, font, etc.
  3. Create a sub-theme.
  4. Create a library in that sub-theme which just has a CSS file.
  5. Use that CSS file to set values for the variables in the sub-theme.

E.g. in the base theme, you could have padding on the top and bottom of the header. This could be a variable called --section-spacing-v (as in, section spacing vertical) set to 2rem. Then in the sub-theme you could set that variable to 4rem. Continue for any other variables you want to override: fonts, line-height, etc.
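As a rough sketch of that idea (the selector and variable names here are just illustrative, not taken from the video), the base theme defines and uses the custom properties, and the sub-theme's CSS file only overrides their values:

/* Base theme: define the variables and use them in the components. */
:root {
  --section-spacing-v: 2rem;
  --base-line-height: 1.5;
}

.header {
  padding-top: var(--section-spacing-v);
  padding-bottom: var(--section-spacing-v);
  line-height: var(--base-line-height);
}

/* Sub-theme: override only the variable values. */
:root {
  --section-spacing-v: 4rem;
  --base-line-height: 1.7;
}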

For added points, you could take these variables and save them as theme settings, so site builders can set them.

I've started streaming some of my work, especially if it's contributions to open source, on twitch if you'd like to follow along/subscribe. As it turns out, Twitch deletes your videos after 14 days, so I have also been uploading them to YouTube. Feel free to subscribe.

Apr 28 2021

For Part Two, click here

30 Years Of Linux

Thirty years ago, Linus Torvalds was a 21-year-old student at the University of Helsinki when he first released the Linux Kernel. His announcement started, “I’m doing a (free) operating system (just a hobby, won't be big and professional…)”. Three decades later, the top 500 supercomputers are all running Linux, as are over 70% of all smartphones. Linux is clearly both big and professional.

For three decades, Linus Torvalds has led Linux Kernel development, inspiring countless other developers and open source projects. In 2005, Linus also created Git to help manage the kernel development process, and it has since become the most popular version control system, trusted by countless open source and proprietary projects.

The following interview continues our series with Open Source Leaders. Linus Torvalds replied to our questions via email, reflecting on what he's learned over the years from leading a large open source project. In this first part, we focus on Linux kernel development and Git. "[Linux] was a personal project that grew not out of some big dream to create a new operating system," Linus explains, "but literally grew kind of haphazardly from me initially just trying to learn the in-and-outs of my new PC hardware."

Regarding creating Git and then handing it off to Junio Hamano to improve and maintain, Linus noted, "I don't want to claim that programming is an art, because it really is mostly just about 'good engineering'. I'm a big believer in Thomas Edison's 'one percent inspiration and ninety-nine percent perspiration' mantra: it's almost all about the little details and the everyday grunt-work. But there is that occasional 'inspiration' part, that 'good taste' thing that is about more than just solving some problem - solving it cleanly and nicely and yes, even beautifully. And Junio had that 'good taste'."

Read on for the first in a two-part interview. Check back next week for the second part, where Linus explores the lessons and insights gained from three decades at the helm of the Linux kernel.

Translations: [Chinese], [Korean], [Vietnamese]: (Contact us if you'd like to translate this interview into another language.)

Linux Kernel Development

Jeremy Andrews: Linux is everywhere, and has been an inspiration to the entire open source world. Of course, it wasn't always that way. You famously released the Linux kernel back in 1991 with a modest Usenet posting on comp.os.minix. A decade later you wrote an engaging and personal book titled, "Just for Fun: The Story of an Accidental Revolutionary" exploring much of that history. This year, in August, Linux will celebrate its 30th anniversary! That's amazing, congratulations! At what point during this journey did you realize what you'd done, that Linux was so much more than "just a hobby"?

Linus Torvalds: This may sound a bit ridiculous, but that actually happened very early. Already by late '91 (and certainly by early '92) Linux had already become much bigger than I had expected.

30 Years Of Linux

And yeah, considering that by that point, there were probably just a few hundred users (and even "users" may be too strong - people were tinkering with it), it probably sounds odd considering how Linux then later ended up growing much bigger. But in many ways for me personally, the big inflection point was when I realized that other people are actually using it, and interested in it, and it started to have a life of its own. People started sending patches, and the system was actually starting to do much more than I had initially really envisioned.

I think that X11 was ported to Linux some time in April '92 (don't take my word for the dates - it's a loong time ago), and that was another big step where suddenly there was a GUI and a whole new set of capabilities.

To put this all in perspective - I really didn't start out with any big plans of high expectations. It was a personal project that grew not out of some big dream to create a new operating system, but literally grew kind of haphazardly from me initially just trying to learn the in-and-outs of my new PC hardware.

So when I released the very first version, it was really more of a "look at what I did", and sure, I was hoping that others would find it interesting, but it wasn't a real serious and usable OS. It was more of a proof of concept, and just a personal project I had worked on for several months at that time.

And going from that "personal project" to being something where others used it, sent feedback (and bug reports), and occasional patches, that was the big change for me.

Just to give an example of something really fundamental: the original copyright license was something like "you can distribute this in source form, but not for money".

That was because for me one of the issues had literally been that commercial unix was expensive (well, for a poor student who spent all his money on the new PC it was), and so to me a big important thing was that the source code be available (so that people could tinker with it), and I wanted it to be open to people like me who just couldn't afford the alternatives.

And I changed the license in late '91 (or maybe very early '92) to the GPLv2 because there were people who wanted to distribute it on floppies to local Unix Users Groups, but wanted to at least recoup the costs of the floppies and their copying time. And I realized that was obviously entirely reasonable, and that the important thing wasn't the "no money", but the "source needs to be openly available" part.

End result: not only did people distribute it at Unix user group meetings, but early floppy distributions like SLS and Slackware happened within months.

Compared to those initial really fundamental changes, everything else was "incremental". Sure, some of the incrementals were pretty big (IBM coming aboard, Oracle DB being ported, Red Hat IPOs, Android becoming big on phones etc), but they were still less personally revolutionary than that early initial "people I don't even know are using Linux".

JA: Do you ever regret your choice of license, or how much money other people and companies have made off something you created?

Linus Torvalds talking

LT: Absolutely not.

First off, I'm doing quite well. I'm not insanely rich, but I'm a well-paid software engineer, doing what I like to do, on my own schedule. I'm not hurting.

But equally importantly, I'm 100% convinced that the license has been a big part of the success of Linux (and Git, for that matter). I think everybody involved ends up being much happier when they know that everybody has equal rights, and nobody is special with regards to licensing.

There's a fair number of these "dual license" projects where the original owner retains a commercial license ("you can use this in your proprietary product if you pay us license fees") and then on the other hand the project is also available under something like the GPL for open source cases.

And I think it's really hard to build a community around that kind of situation, because the open source side always knows it's "second class". Plus it leads to a lot of just licensing paperwork in order for the special party to always retain their special rights. So it adds a lot of friction to the project.

And on the other hand, I've seen a lot of BSD (or MIT or similar) licensed open source projects that just fragment when they become big enough to be commercially important, and the involved companies inevitably decide to turn their own parts proprietary.

So I think the GPLv2 is pretty much the perfect balance of "everybody works under the same rules", and still requires that people give back to the community ("tit-for-tat"). And everybody knows that all the other people involved are bound by the same rules, so it's all very equitable and fair.

Of course, another part of that is that you also get out what you put in. Sure, you can try to "coast" on the project and be just a user, and that's ok. But if you do that, you also have no control over the project. That can be perfectly fine too, if you really just need a basic operating system, and Linux already does everything you want. But if you have special requirements, the only way to really affect the project is to participate.

This keeps everybody honest. Including me. Anybody can fork the project and go their own way, and say "bye bye Linus, I'm taking over maintenance of my version of Linux". I'm "special" only because - and as long as - people trust me to do a good job. And that's exactly how it should be.

That "anybody can maintain their own version" worried some people about the GPLv2, but I really think it's a strength, not a weakness. Somewhat unintuitively, I think it's actually what has caused Linux to avoid fragmenting: everybody can make their own fork of the project, and that's OK. In fact, that was one of the core design principles of "Git" - every clone of the repository is its own little fork, and people (and companies) forking off their own version is how all development really gets done.

So forking isn't a problem, as long as you can then merge back the good parts. And that's where the GPLv2 comes in. The right to fork and do your own thing is important, but the other side of the coin is equally important - the right to then always join back together when a fork was shown to be successful.

Another issue is that you really want to have the tools to support that workflow, but you also have to have the mindset to support it. A big impediment to joining forks back is not just licensing, but also "bad blood". If the fork starts from very antagonistic reasons, it can be very hard to merge the two forks - not for licensing or technical reasons, but because the fork was so acrimonious. Again, I think Linux has avoided that mainly because we've always seen forking things as natural, and then it's also very natural to try to merge back when some work has shown itself to be successful.

So this answer kind of went off at a tangent, but I think it's an important one - I very much don't regret the choice of license, because I really do think the GPLv2 is a huge part of why Linux has been successful.

Money really isn't that great of a motivator. It doesn't pull people together. Having a common project, and really feeling that you really can be a full partner in that project, that motivates people, I think.

JA: These days when people release source code under the GPLv2, they generally do it because of Linux. How did you find the license, and how much time and effort did you put into reviewing other existing licenses?

Open Source

LT: Back then, people still had fairly big flame wars about BSD vs GPL (I think partly fueled by how rms really has a knack for pissing people off), so I'd seen some of the license discussions just through various usenet newsgroups I was reading (things like comp.arch, comp.os.minix etc).

But the two main reasons were probably simply gcc - which was very much instrumental in getting Linux going, since I absolutely required a C compiler - and Lars Wirzenius ("Lasu"), who was the other Swedish-speaking CS student at University in my year (Swedish being a fairly small minority in Finland).

Lasu was much more into license discussions etc than I was.

To me, the choice of GPLv2 wasn't some huge political issue, it was mainly about the fact that my original license had been so ad-hoc and needed updating, and I felt indebted to gcc, and the GPLv2 matched my "you have to give source back" expectations.

So rather than make up another license (or just edit the original one - just removing the "no money can change hands" clause could have been an option), I wanted to pick one that people already knew about, and had had some lawyers involved.

JA: What is your typical day like? How much of it is spent writing code, versus reviewing code, versus reading and writing emails? And how do you balance personal life and working on the Linux Kernel?

LT: I write very little code these days, and haven't for a long time. And when I do write code, the most common situation is that there's some discussion about some particular problem, and I make changes and send them out as a patch mainly as an explanation of a suggested solution.

In other words, most of the code I write is more of a "look, how about we do it this way" where the patch is a very concrete example. It's easy to get bogged down into some theoretical high-level discussion about how to solve something, and I find that the best way to describe a solution is often to just write the snippet of code - maybe not the whole thing - and just make it very concrete that way.

Because all my real work is spent on reading and writing emails. It's mostly about communication, not coding. In fact, I consider this kind of communication with journalists and tech bloggers etc to literally be part of my workday - it may get lower priority than actual technical discussions, but I do spend a fair amount of time on things like this too.

And yes, I spend time on code reviews too, but honestly, by the time I get a pull request, generally the code in question should already have been reviewed by multiple people already. So while I still look at patches, I actually tend to look more at the explanations, and the history of how the patch came to me. And with the people I've worked the longest with, I don't do even that: they are the maintainers of their subsystem, I'm not there to micro-manage their work.

So quite often, my main job is to "be there", and be the collection point, and be the person who manages and enforces the releases. In other words, my job is generally more about the maintenance process than the low-level code.

JA: What is your work environment like? For example, do you prefer a dark room with no distractions, or a room with a view? Do you tend to work in silence, or while listening to music? What kind of hardware do you typically use? Do you review code with vi in a terminal, or with a fancy IDE? And, do you have a preferred Linux distribution for this work?

Threadripper

LT: My room isn't exactly "dark", but I do have the blinds on the window next to my desk closed, because I don't want bright sunlight (not that it's necessarily very common this time of year in Oregon ;). So no views, just a (messy) desk, with dual 4k monitors and a powerful desktop computer under the desk. And a couple of laptops sitting around for testing and for when I'm on the road.

And I want to work in silence. I used to hate the ticking of mechanical disk drives - happily long relegated to the garbage bin as I've used exclusively SSD's for over a decade by now - and noisy CPU fans are unacceptable too.

And it's all done in a traditional terminal, although I don't use 'vi'. I use this abomination called "micro-emacs", which has absolutely nothing to do with GNU emacs except that some of the key bindings are similar. I got used to it at the University of Helsinki when I was a wee lad, and I've not been able to wean myself from it, although I suspect I will have to soon enough. I hacked up (a very limited) utf-8 support for it a few years ago, but it's really showing its age, and showing all the signs of having been written in the 80's and the version I use was a fork that hasn't been maintained since the mid 90's.

University of Helsinki used it because it worked on DOS, VAX/VMS and Unix, which is why I got introduced to it. And now my fingers are hardcoded for it. I really need to switch over to something that is actually maintained and does utf-8 properly. Probably 'nano'. But my hacked-up piece of historical garbage works just barely well enough that I've never been really forced to teach my old fingers new tricks.

So my desktop environment is fairly simple: several text terminals open, and a web browser with email (and several other tabs, mostly news and tech sites). I want to have a fair amount of desktop space, because I'm used to having fairly big terminal windows (100x40 is kind of my default starting size), and I have multiple terminals open side-by side. Thus the dual 4k monitors.

I use Fedora on all my machines, not because it's necessarily "preferred", but because it's what I'm used to. I don't care deeply about the distribution - to me it's mainly a way to get Linux installed on a machine and get all my tools set up, so that I can then replace the kernel and work on just that.

JA: The Linux Kernel Mailing List is where kernel development happens publicly, and is extremely high traffic. How do you keep up with so much email? Have you explored other solutions for collaboration and communication outside of a mailing list, or is there something about a simple mailing list that is perfect for what you do?

LT: Oh, I don't read the kernel mailing list directly, and haven't in years. It's much too much.

No, the point of the kernel mailing list is that it basically gets cc'd on all the discussions (well - one of the kernel mailing lists do, there are many - and then the traditional lkml list is the fallback for when there isn't some more targeted list). And that way, when a new person is brought into the discussion, they can see the history and the background by looking at the kernel mailing list.

So what I used to do was to be subscribed to the list, but have all the lkml email that I wasn't cc'd on personally be auto-archived, so I'd not see it by default. But then when some issue escalated to me, all that discussion would show up, because it was there in my email, just not in my inbox until it was needed.

These days, I actually use the lore.kernel.org functionality instead, because it works so well and we have some other tools built around it. So rather than having it auto-archived in my own mail archives, the discussions end up being visible that way instead. But the basic workflow is conceptually the same.

I do get a fair amount of email still, obviously - but in many ways it has been getting better over the years, rather than worse. A big part of that is Git and how well the kernel release flow works: we used to have many more problems with code flow, and tooling. My email situation was actually much much worse back around the turn of the century, when we still dealt in huge patch-bombs and we had serious scalability problems in the development flow.

And the mailing list (with tooling around it) model really does work very well. That's not to say that people don't use other means of communication in addition to email (both person-to-person, and the mailing lists): there are people who enjoy various realtime chat setups (IRC being the traditional one). And while that has never been my thing, it is clearly what some people like to use for brainstorming. But that "mailing list as an archive" model works very well, and works seamlessly together with the whole "send patches between developers as emails" and "send problem reports as emails".

So email remains the main communication channel, and makes it easy to discuss technical issues, with patches embedded in the same medium. And it works across time zones, which is very important when everybody is so spread out geographically.

KernelTrap logo

JA: I followed kernel development closely for about a decade, blogging about it on KernelTrap and writing about new features as they evolved. I stopped around the time the 3.0 kernel was released, which had followed 8 years of 2.6.x versions. Is it possible to summarize some of the more interesting things that have happened in the kernel since the 3.0 release?

LT: Heh. That's so long ago that I couldn't even begin to summarize things. It's been a decade since 3.0, and we've had a lot of technical changes in that decade. ARM has grown up and ARM64 has become one of our primary architectures. Lots and lots of new drivers, and new core functionality.

If anything, what is interesting about the last decade is how we've actually kept the actual development model really smooth, and what hasn't changed.

We've gone through many different version number schemes over the decades, we've had different development models, but the 3.0 release was in fact the one that finalized the model we've used ever since. It kind of made official the whole "releases are time-based, version numbers are just numbers, and don't have any feature dependencies".

We'd started the whole time-based releases with a merge window in the 2.6.x days, so that part wasn't new. But 3.0 was when the last vestiges of "the number has meaning" were thrown overboard.

We'd had the random numbering scheme (mainly before 1.0), we'd had the whole "odd minor numbers means development kernel, even means stable production kernel" model, and then in 2.6.x we started doing the time-based release model. But people still had that "what will it take to increase the major number" question. And 3.0 made it official that even the major version number has no meaning, and that we'll just try to keep the numbers easy to deal with and not let them grow too big.

So for the last decade, we've made absolutely huge changes (Git makes it easy to show some statistics in numbers: about three quarters of a million commits by over 17 thousand people). But the development model itself has actually been quite smooth and stable.

And that really hasn't always been the case. The first two decades of kernel development were full of fairly painful development model changes. This last decade has been much more predictable release-wise.

JA: As of now, the latest release is 5.12-rc5. How standardized is the release process? For example, what sorts of changes go into an -rc1, versus an -rc2 and so on? And at what point do you decide a given release is ready to be officially released? What happens if you're wrong and a large regression is found after the final release, and how often does this happen? How has this process evolved over the years?

LT: So I alluded to this earlier: the process itself really is pretty standard, and has stayed so for the last decade. It went through several upheavals before that, but it's actually been almost like clock-work since 3.0 (and in fact a few years before that - the switch to Git in many ways was the beginning of the modern process, and it took a while before everybody got used to it).

So we've had this cadence of "two weeks of merge window" followed by roughly 6-8 weekly release candidates before final release for almost 15 years by now, I think.

And the rules have always been the same, although they haven't always been entirely strictly enforced: the merge window is for new code that is supposedly "tested and ready", and then the subsequent roughly two months are for fixes and to actually make sure all the problems are shaken out. Which doesn't always happen, and sometimes that supposedly "ready" code gets disabled or outright reverted before the release.

And then it repeats - so we have a release roughly every 10 weeks or so.

And the release criteria is me feeling confident enough, which obviously in turn is based on what kinds of problem reports are still coming in. If some area still shows issues late in the rc game, I'm fairly aggressive about just reverting things, and saying "we'll deal with this in a later release once we've figured the thing out fully", but on the whole it's fairly rare that that is needed.

Does it always work out? No. Once the kernel is released - and particularly once a distro picks it up - you get new users, you get people who didn't test it during development that find things that didn't work and we didn't catch during the rc series. That's pretty much inevitable. It's part of why we have the whole "stable kernel" trees, which continue to add fixes after the release. And some stable kernels are maintained for longer than others, and get called LTS ("Long Term Support") kernels.

All of this has remained fairly unchanged in the last ten years, although we do end up having a lot more automation in place. Kernel testing automation is hard in general - partly because so much of the kernel is drivers which then obviously depends on hardware availability - but there are several farms doing both boot and performance testing, and do various randomized load testing. And that has improved a lot over the years.

JA: Last November you were quoted as being impressed by Apple's new ARM64 chips found in some of their new computers. Are you following the development effort to support them with Linux? I see work was merged into for-next. Is it likely Linux will boot on Apple's MacBook hardware as early as the upcoming 5.13 kernel? Are you likely to be an early adopter? What is the significance of ARM64?

Apple M1 ARM64 Chip

LT: I'm checking in on it very occasionally, but it's early days yet. As you note, the very early support will likely be merged into 5.13, but you need to realize that that is really only the beginning, and doesn't make Apple hardware useful with Linux yet.

It's not the arm64 part that ends up being the problem, but all the drivers for the hardware around it (the SSD and GPU in particular). The early work so far gets some of the really low-level stuff working, but doesn't result in anything useful outside of early hardware enablement. It will take some time for it to be a real option for people to try out.

But it's not just the Apple hardware that has improved - the infrastructure for arm64 in general has grown up a lot, and the cores have gone from "Meh" to being much more competitive in the server space. The arm64 server space was pretty sad not that long ago, but Amazon's Graviton2 and Ampere's Altra processors - both based on the much improved ARM Neoverse IP - are much better than what the offerings were a few years ago.

I've been waiting to have a usable ARM machine for over a decade by now, and it's not there yet, but it's clearly much closer than it used to be.

In fact, I guess I could say that I've been wanting an ARM machine for much longer than that - back when I was a teenager, the machine I really wanted was an Acorn Archimedes, but availability and price made me go with a Sinclair QL (M68008 processor) and then obviously a few years later a i386 PC instead.

So it's been kind of brewing for decades, but they still haven't been widely available and price/performance competitive as computers for me. One day. Hopefully in the not too distant future.

JA: Is there anything in the kernel which is not optimal, but would require a complete rewrite to address properly? In other words, the kernel is 30 years old and knowledge, languages and hardware have changed a lot in these 30 years: if you rewrote it from scratch now, what would you change?

LT: We've actually been really good about even completely rewriting things if necessary, so anything that would have been an unmitigated disaster has long since been rewritten.

Sure, we have a fair amount of "compatibility" layers, but they are usually not horrendous. And it's unclear if even those compatibility layers would really go away if rewriting from scratch - they are about backwards compatibility with older binaries (and often backwards compatibility with older architectures, e.g. running 32-bit x86 apps on x86-64). Since I consider backwards compatibility to be very important, I'd want to keep those even in a rewrite.

So there are obviously lots of things that are "not optimal" in the sense that anything can be improved, but the way you phrase the question, I'd have to say that no, there's nothing there that I despise. There's legacy drivers that nobody is ever going to care about enough to clean up, and so they may do ugly things, but a key part of that is "nobody cares enough". It hasn't been a problem, and when it does become a problem we tend to fairly actively remove true legacy support that we can't find anybody that cares about. So we've gotten rid of lots of drivers over the years, and we've gotten rid of whole architecture support when it no longer makes any sense at all to maintain.

No, the only major reason for a "rewrite" would be if you end up having some use-case where the whole structure no longer makes sense. The most likely scenario would be some small embedded system that just doesn't want everything that Linux offers, and has a hardware footprint so small that it simply wants something smaller and simpler than what Linux has become over the years.

Because Linux has grown a lot. Even small hardware (think cell phones etc) today is much more capable than the original machine Linux was developed on was.

JA: What about rewriting at least parts with Rust, a language that was specifically designed for performance and safety? Is there room for improvement in this way? Do you feel it’s ever possible for another language like Rust to replace C in the kernel?

Ferris the crab, unofficial mascot for Rust

LT: We'll see. I don't think Rust will take over the core kernel, but doing individual drivers (and maybe whole driver subsystems) in it doesn't sound entirely unlikely. Maybe filesystems too. So it's not "replace C", but more of "augment our C code where it makes sense".

Of course, drivers in particular is about half of the actual kernel code, so there's a lot of room for that, but I don't think anybody is really expecting to rewrite existing drivers in Rust wholesale, more of a "some people will do new drivers in Rust, and a couple of drivers might be rewritten where it makes sense".

But right now that's more of a "people are trying it out and playing with it" rather than anything more than that. It's easy to point to advantages, but there are certainly complexities too, so I'm very much taking a wait-and-see approach to see if the promised advantages really do pan out.

JA: Are there any specific parts of the kernel that you are personally most proud of?

LT: The stand-out parts I tend to point to are the VFS ("virtual filesystem") layer (and the pathname lookup in particular) and our VM code. The former because Linux just does some of those fundamental things (looking up a filename really is such a core operation in an operating system) so much better than anything else out there. And the latter mainly because we support 20+ architectures, and we still do it with a largely unified VM layer, which I think is pretty impressive.

But at the same time, this is very much a function of "what part of the kernel do you care about". The kernel is big enough that different developers (and different users) will simply have different opinions of what matters most. Some people think scheduling is the most exciting part of the kernel. Others like the nitty-gritty of device drivers (and we have a lot of those). I personally tend to be more involved in the VM and VFS areas, so I then naturally point to those.

JA: I found this description of pathname lookup, and it's more complex than I expected. What makes the Linux implementation so much better than what is done in other operating systems? And what do you mean by "better"?

LT: Pathname lookup is really such a common and fundamental thing that most people outside of kernel developers don't even think about it as a problem: they just open files, and take it all for granted.

But it's actually fairly complicated to do really well. Exactly because absolutely everything does pathname lookups all the time, it's hugely performance-critical, and it's very much one of those areas where you also want to scale well in SMP environments, and it has lots of complexity in locking. And you very much do not want to do any IO, so caching is very important. In fact, pathname lookup is so important that you can't leave it to the low-level filesystem, because we have 20+ different filesystems, and having each of them do their own caching and their own locking would be a complete disaster.

So one of the main things the VFS layer does is really handle all the locking and caching of pathname components, and handle all the serialization and the mount point traversal, and do it all with mostly lock-free algorithms (RCU), but also with some really clever lock-like things (the Linux kernel "lockref" lock is a very special "spinlock with reference count" which was literally designed for the dcache caching, and it's basically a specialized lock-aware reference count that can do lock elision for certain common situations).

End result: the low-level file systems still need to do the actual lookup for things that aren't cached, but they don't need to worry about caching and all the coherency rules and the atomicity rules that go along with pathname lookups. The VFS handles all that for them.

And it all outperforms anything any other operating system has done, while basically scaling perfectly to even machines with thousands of CPU's. And it does that even when those machines all end up touching the same directories (because things like the root directory, or a project home directory, are things that even heavily threaded applications all touch at the same time, and that don't get distributed to any kind of per-thread behavior).

So it's not just "better", it's "Better" with a capital 'B'. Nothing else out there comes even close. The Linux dcache is simply in a class all its own.

JA: The past year has been difficult all around the world. How has the COVID-19 pandemic affected the kernel development process?

LT: It actually has had very minimal effect, because of how we always worked. Email really ends up being a wonderful tool, and we didn't rely on face-to-face meetings.

Yes, it did affect the yearly kernel summit last year (and this year is still up in the air), and most conferences got cancelled or turned virtual. And people who worked in offices before mostly started working from home (but a lot of the core kernel maintainers already did so). So a lot of things around it changed, but the core development itself worked exactly as before.

And it obviously affected all our lives in other ways - just the social ramifications in general. But on the whole, being a kernel developer who already interacted with people almost entirely over email, we were probably some of the least affected.

Git Distributed Version Control System

JA: Linux is only one of your ubiquitous contributions to open source. In 2005 you also created Git, the extremely popular distributed source control system. You quickly migrated the Linux kernel source tree out of the proprietary Bitkeeper and into the newly created and open sourced Git, and in the same year handed off maintainership to Junio Hamano. There's a lot of fascinating history there, what led you to handing off leadership on this project so quickly, and how did you find and select Junio?

LT: So there's two parts to this answer.

Git Logo

The first part is that I very much did not want to create a new source control system. Linux was created because I find the low-level interface between hardware and software fascinating - it's basically a labor of love and personal interest. In contrast, Git was created out of necessity: not because I found source control interesting, but because I absolutely despised most source control systems out there, and the one that I found most palatable and had really worked fairly well in the Linux development model (BitKeeper) had become untenable.

End result: I've been doing Linux over 30 years (the anniversary of the first release is still a few months away, but I had started on what would become Linux already 30 years ago), and I've been maintaining it the whole time. But Git? I did not ever think I'd really want to maintain it long-term. I love using it, and I obviously think it's the best SCM out there by a huge amount, but it's not my fundamental passion and interest, if you see what I'm trying to say.

So I always wanted somebody else to maintain the SCM for me - in fact I would have been happiest had I not had to write one in the first place.

That's kind of the background.

As to Junio - he was actually one of the very first people who came in as Git developers. His first change came in within days after I had made the very first (and very rough) version of Git public. So Junio has actually been around some since pretty much the beginning of Git.

But it's not like I handed the project off to the first random person to show up. I did maintain Git for a few months, and the thing that made me ask Junio if he wanted to be the maintainer is that very-hard-to-describe notion of "good taste". I don't really have a better description for it: programming is about solving technical problems, but how you solve them, and how you think about them is important too, and it's one of those things you start to recognize over time: certain people have that "good taste" thing and pick the "right" solution.

I don't want to claim that programming is an art, because it really is mostly just about "good engineering". I'm a big believer in Thomas Edison's "one percent inspiration and ninety-nine percent perspiration" mantra: it's almost all about the little details and the everyday grunt-work. But there is that occasional "inspiration" part, that "good taste" thing that is about more than just solving some problem - solving it cleanly and nicely and yes, even beautifully.

And Junio had that "good taste".

And every time Git comes up, I try to remember to really make it very very clear: I may have started and designed the core ideas in Git, but I often get too much credit for that part. It's been 15+ years, and I was really only involved with Git in that first year. Junio has been an exemplary maintainer, and he's the one who has made Git what it is today.

Btw, this whole "good taste" thing and finding people who have it, and trusting them - that's very much not just about Git. It's very much the history of Linux too. Unlike Git, Linux is obviously a project that I still do actively maintain, but very much like Git, it's also a project with lots of other people involved, and I think one of the big successes of Linux is having literally hundreds of maintainers around, all with that hard-to-define "good taste", and all people who maintain parts of the kernel.

JA: Have you ever given control to a maintainer only to later determine it was the wrong decision?

LT: Our maintainership structure has never been so black-and-white and inflexible that that would ever have been an issue. In fact, it's not like we even make maintainership control be something very documented: we do have a MAINTAINERS file, but that's so that you can find the right people, it's not really a sign of some exclusive ownership.

So the whole "who owns what" is more of a fluid guideline, and a "this person is active and is doing a good job" than some "oops, now we gave that person ownership and then he screwed up".

And it's fluid also in the sense that maybe you are the maintainer of one subsystem, but if there's something you then need from another subsystem, you can often cross borders. Usually it's something that people talk about extensively before doing, of course, but the point is that it does happen and it's not some hard "you're only supposed to touch that file" kind of rule.

In fact, this is actually somewhat related to the earlier discussion about the licensing, and another example of how one of the design principles of "Git" was that whole "everybody has their own tree, and no tree is technically special".

Because a lot of other projects have used tooling - like CVS or SVN - that fundamentally does make some people special, and that fundamentally does have a "ownership" that goes along with it. In the BSD world, they call it the "commit bit": giving a maintainer the "commit bit" means that he's now allowed to commit to the central repository (or at least parts of it).

I always detested that model, because it inevitably results in politics and the "clique" model of development, where some people are special and implicitly trusted. And the problem isn't even the "implicitly trusted" part - it's really that the other side of the coin is that other people are not trusted, and are by definition outsiders, and have to go through one of the guardians.

Again, in Git that kind of situation doesn't exist. Everybody is equal. Anybody can do a clone, do their own development, and if they do a good job they can get merged back (and if they do an outstanding job, they become maintainers, and they end up being the ones doing the merging into their trees ;).

So there's no need to give people special privileges - no need for that "commit bit". And that also means that you avoid the politics around it, and you don't need to trust people implicitly. If they end up doing a bad job - or more commonly, just end up fading away and finding another interest - they don't get merged back, and they also don't stand in the way of other people who have fresh new ideas.

JA: Do new features of Git ever impress you, and become part of your workflow? Are there features you'd still like to see added?

LT: My use cases were obviously the first ones to be fulfilled, so for me it has seldom been about new features.

Over the years, Git has certainly improved, and some of it has been noticeable in my workflow too. For example, Git has always been fairly fast - it was one of my design goals, after all - but a lot of it was originally done as shell-script around some core helper programs. Over the years, most of that shell scripting has gone away, and it means that I can apply patch-bombs from Andrew Morton even faster than I could originally. Which is very gratifying, as that was actually one of the early benchmarks I used for performance testing.

So Git has always been good for me, but it's gotten better.

The big improvements have been about how much better it has become to use as a "regular user". A lot of that has been people learning the Git workflow and just getting used to it (it is very different from CVS and other things that people used to be used to), but a lot of it is Git itself having become a lot more pleasant to use.

Conclusion, Part One

In the second part of this interview, Linus talks about what he's learned from managing a large open source project. He offers much insight and advice to maintainers about what he's found works best for him, and how he avoids burn out. He also talks about the Linux Foundation, and what he does when he's not focused on developing the Linux kernel.

For Part Two, click here
Apr 28 2021
Apr 28

Bluehost is among the 20 largest hosting companies in the world, handling over 2 million domains. The company's offer includes a wide range of services. We will show you how to install Droopler, the Drupal distribution, using the basic shared hosting offer (Basic Web Hosting).

 

Step 1. Server preparation

First, you need to configure the database and PHP. This stage is necessary because, without it, it won't be possible to install Droopler. But before you start, you need to get familiar with the requirements for the system on which you want to run Droopler.

To configure your hosting, after logging in to the account using the login form, go to the "Advanced" tab, where you'll find all the necessary functions:

The Advanced tab in Bluehost, where you'll find all the necessary functions for the hosting configuration

 

 

1. Database configuration

First, you need to create a new database. To do this, click on the "MySQL Database Wizard" in the "Databases" section.

The Databases section in Bluehost

 

You'll be redirected to the database wizard page. In the first step, you need to define the database name. It can be any name, but in our example we’ll use the most intuitive one: droopler. The full database name will consist of a prefix (usually associated with the account ID) and the second part/name defined by you.

Defining the database name in MySQL Database Wizard

 

In the next step, you need to define the database users (at least one) and their passwords. You can generate the password using the Password Generator located in the form.

Defining the database users in MySQL Database Wizard

 

In the third step, you need to assign the user permissions to your database. For the Droopler installation to work correctly, it's safest to assign all permissions by selecting the "ALL PRIVILEGES" option.

Assigning all database privileges to a user to make the Droopler installation work properly
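If you want to confirm that the database, user, and privileges work before running the installer, one option is to upload a small throwaway PHP script to your public_html directory (for example via the File Manager described in Step 2) and open it in your browser. The file name, prefix, user, and password below are placeholders you must replace with the values you just created; the host is assumed to be localhost, which is typical for shared hosting. Delete the file afterwards.

<?php
// db-test.php - hypothetical one-off helper, remove it after use.
$dsn = 'mysql:host=localhost;dbname=prefix_droopler';
try {
  new PDO($dsn, 'prefix_user', 'your-password');
  echo 'Database connection OK';
}
catch (PDOException $e) {
  echo 'Connection failed: ' . $e->getMessage();
}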

 

2. PHP configuration

Setting the appropriate version of the PHP language and adjusting its configuration is crucial and can significantly affect the installation process. To make the necessary changes, go to the "Software" section in the "Advanced" tab.

The Software section in the Advanced tab in Bluehost

 

To set a specific PHP version, click on "MultiPHP Manager". In our installation, we'll use the proven PHP 7.3 version.

Selecting a specific PHP version in MultiPHP Manager

 

To change the environmental settings, click on "MultiPHP INI Editor".

With MultiPHP INI Editor, we can change the environmental settings

 

In the first step, you need to select the domain for which you want to make the necessary changes.

Selecting the domain for which you want to make the changes in MultiPHP INI Editor

 

Then you need to adjust the configuration to the needs of the Droopler installer and the smooth operation of the website. We increase the maximum script execution time (max_execution_time) to 180 seconds and the memory limit (memory_limit) to 512 MB.

Adjusting PHP configuration for the needs of the Droopler installer and the smooth operation of the website

 

We leave the other values at their default settings.
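If you want to verify that the new limits are actually active for your domain, one option is to upload a tiny throwaway PHP script (the file name here is just an example) to public_html and open it in your browser; remove it once you're done.

<?php
// check-settings.php - hypothetical helper to confirm the PHP settings.
echo 'PHP version: ' . PHP_VERSION . '<br>';
echo 'max_execution_time: ' . ini_get('max_execution_time') . '<br>';
echo 'memory_limit: ' . ini_get('memory_limit');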

 

Step 2. Downloading Droopler

The Droopler distribution can be downloaded from the official website of the project.

1. Download the latest version of Droopler as a zip archive.

A place on the Drupal distribution's page from where you can download the Droopler

 

2. Go to the File Manager located in the "Files" section in the "Advanced" tab.

The Files section in the Advanced tab in Bluehost

 

3. After opening the file manager, click on the "Settings" button in the upper right corner to unlock the option to display hidden files.

After coming to the Settings section, we can unlock the option to display hidden files

 

4. In the file explorer located in the left column, click on the public_html directory.

The public_html directory in File Manager in Bluehost

 

5. Click on the "Upload" button in the top menu in order to upload the file from the archive downloaded in the first step.

The Upload button in File Manager

 

6. Drag and drop or select the file using the system file explorer.

Adding the Droopler file to File Manager in Bluehost

 

7. After loading the file, right-click on it and select the "Extract" option.

Selecting the Extract option for the Droopler file

 

8. Set the target directory for the extraction to /public_html.

Setting the target directory for the extraction of the Droopler file

 

9. After the extraction is complete, go to the newly created directory with Droopler (in our example, it'll be the directory named droopler-8.x-2.1) and select all files.

The newly created directory with Droopler in File Manager in Bluehost

 

10. Move the selected files and subdirectories directly to the /public_html directory using the move button available in the top menu or after right-clicking.

Moving the selected files to the public_html directory

 

11. In the destination path, leave only /public_html so that all the contents of the directory are moved directly there.

Defining the path to which you'll move all the contents from the directory with Droopler

 

12. After moving the files, go back to the /public_html directory and remove the uploaded .zip archive (droopler-8.x-2.1-core.zip) and the directory created during the unpacking (droopler-8.x-2.1).

Removing the .zip archive and the Droopler directory from the public_html directory

 

Step 3. Droopler installation

Everything is ready now. To start the installation process, all you need to do is to go to the website by entering your domain address in the browser bar.

The Droopler installer

 

Bluehost's shared web hosting doesn't offer PHP OPcache, so you'll see a warning when verifying the requirements. You can ignore it and proceed with the installation by clicking on the "continue anyway" link at the bottom of the page.

The warnings during the requirements verification in the Droopler installer

 

The last important step is database configuration. Here we enter the data that we previously set in the Bluehost panel, i.e., the database name, the database user's name and their password. The host and port number remain as in the attached screenshot.

Database configuration in the Droopler installer

 

After clicking on "Save and continue", you have nothing else to do other than to take a short break for coffee or tea, during which the rest of the Droopler installation will be performed.

Apr 28 2021
Apr 28

Compatibility

Before you download a module for Drupal 8, make sure it is compatible with your Drupal 8 version. You cannot install a Drupal 7 module into a Drupal 8 installation if it is not supported. On Drupal.org, to see which versions a module has releases for, navigate to the module's project page and scroll right to the end.

Actively Maintained Modules

You need to check and verify that the modules you have chosen are actively maintained, updated, and published by their developers. If you encounter any security vulnerabilities or other issues while developing, an actively maintained module gives you a much better chance of a prompt response from the developer or contributors. On Drupal.org, you can check a project's maintenance and development status on its project page.

Popularity

Make sure to choose popular modules to do the job, as you are likely to run into fewer issues while developing. To check this, go to the module's project page on drupal.org. There, you will see the number of downloads and the number of websites currently using that particular module.

Now that you have chosen the modules for your project, let's get started. In Drupal 8, contributed and custom modules live in the modules folder in the root directory of your Drupal 8 installation (commonly split into modules/contrib and modules/custom).

Things You Should Know Before Beginning Drupal 8 Module Development

Apr 28 2021
Apr 28

Introduction

The concept of a Headless CMS has been all the rage for quite some time. At Axelerant, we have been using Drupal as a headless CMS in many projects. Headless Drupal provides a JSON API for accessing the published content of Drupal, including menus via the Drupal Decoupled Menus module.

Since we will be building a cross-platform menu, it is worth talking about the mobile application ecosystem, which has changed considerably since the introduction of cross-platform technologies like React Native and Flutter. These technologies have made mobile application development a lot more accessible to web developers, and both have generated strong momentum in recent years. React Native has been easier for web developers to get started with due to its React roots, while Flutter uses Dart, whose syntax also draws heavily from JavaScript but still has some differences.

In this tutorial, we will use the Flutter framework to render a material design-styled menu across Android, iOS, Web, Windows, and macOS. 

You might be inclined to ask why we chose Flutter instead of React Native. The simple answer is that we feel Flutter is more polished as a framework. For more in-depth comparisons between the two frameworks, you can check this.
 

Getting Started

Head over to flutter.dev and follow the instructions to install Flutter on your machine, and also install VS Code if you haven’t got that already. Let us create a Flutter project by running:
flutter create drupal_flutter_menu

Open the drupal_flutter_menu folder in VS Code. The moment you open it, you will be prompted to install the Flutter and Dart plugins; go ahead and install them.

On the Drupal side, we need a Drupal instance running with the Decoupled Menus module installed and enabled. Before we move further, let us first look at the JSON returned by the Drupal menu API. If you navigate to the Drupal menu endpoint (https:///system/menu/main/linkset) and look at any menu, in this case the “main” menu, the response JSON will look something like the following:

The output will vary depending on what links are present in your specific Drupal menu.

If you look closely at this peculiar-looking JSON representation, it uses a special media type called application/linkset+json, which has recently been introduced by the IETF. This media type is a special way to represent a set of links and their relations in JSON format. In order to know more about this representation, head here. Our next step would be to parse this JSON in our Flutter code and then create a simple abstract data type that represents the parsed Drupal menu. But wait: wouldn't it be better to have something prebuilt that makes our lives easy? Well, we have already gone ahead and created a simple Flutter package, drupal_linkset_menu, which takes a Drupal menu API URL or a JSON string, returns a Menu object, and lets you render it in Flutter.

Let’s add this package by running the following from the command line:
flutter pub add drupal_linkset_menu

This command will add the package to our pubspec.yaml file. The pubspec.yaml file is just like your composer.json file, which is used to manage dependencies. Your updated pubspec.yaml should look like this:
The source code for a Flutter-based app resides inside the lib folder. We will only be working with the specially named file main.dart inside this folder. Let us delete all the code in the main.dart file and replace it with the following code, which will display “Hello World” in the center of the screen.
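The original listing is not reproduced here, but a minimal sketch that matches the description below (a Material app whose Scaffold centers a single Text widget) could look like this:

import 'package:flutter/material.dart';

void main() => runApp(const MyApp());

class MyApp extends StatelessWidget {
  const MyApp({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    // A Material app with a Scaffold that centers a Text widget.
    return MaterialApp(
      title: 'Drupal Flutter Menu',
      home: Scaffold(
        appBar: AppBar(title: const Text('Drupal Flutter Menu')),
        body: const Center(child: Text('Hello World')),
      ),
    );
  }
}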

In order to run the app, click on the Run and Debug button in the Run and Debug section of the VS Code side menu, choose Dart & Flutter in the next step, and then choose Chrome.

Another way is to just type the following in the terminal/cmd:
flutter run -d chrome

Using Drupal Menu API To Create A Cross Platform Menu

This will run the app inside the Chrome browser. If you want to run it on Android, you need to have the Android SDK, and for iOS you need to have Xcode installed. If both of these are installed, then you can use:

flutter run android && flutter run ios

to run on the corresponding platforms. For more information on this, head over to flutter.dev.

Using Drupal Menu API To Create A Cross Platform Menu

Everything in Flutter is a widget! There are stateless widgets and stateful widgets; we will be working with stateless widgets today.

The code that we have put in the main.dart files does the following:

  1. It creates a Material app. Material is a visual design language that is standard on mobile and the web. Flutter offers a rich set of Material widgets.
  2. The main method uses arrow (=>) notation. Use arrow notation for one-line functions or methods.
  3. The app extends StatelessWidget, which makes the app itself a widget.
  4. The Scaffold widget, from the Material library, provides a default app bar, and a body property that holds the widget tree for the home screen. The widget subtree can be quite complex.
  5. A widget’s main job is to provide a build() method that describes how to display the widget in terms of other, lower-level widgets.
  6. The body for this example consists of a Center widget containing a Text child widget. The Center widget aligns its widget subtree to the center of the screen


Now update the code in main.dart  with the following code: 

 Also, add a package called url_launcher by typing:
flutter pub add url_launcher

This package will allow us to open a URL, when any link in the menu is clicked.

Let us break down step by step what the code adds:

  1. In the MyApp widget’s build method, instead of showing a “Hello world” text at the center, we have introduced a new widget called HomePage that will show two menus, the “main” and “footer” menus of our Drupal site.
  2. The HomePage widget is another widget that houses the necessary build method describing how to show the two menus, along with a couple of helper functions.
  3. The getMenu function is responsible for interacting with the drupal_linkset_menu package’s helper method called getDrupalMenuFromURL, which takes the API URL and the menu name/id and returns a Menu object that is used to construct the UI.
  4. The two functions buildMenu and buildMenuItem are used to recursively build the UI for the menu. A special built-in Flutter Material widget called ExpansionTile is used to create the menu items.
  5. The build method of HomePage contains a Column widget that lays out its children vertically; it is analogous to how flexbox works on the web. The column has two FutureBuilder widgets that call the getMenu function. Until getMenu returns a Menu object, a CircularProgressIndicator widget is shown, and when the Menu object becomes available the menu is created.
  6. In buildMenuItem we are using a GestureDetector to listen to taps, and when a mouse click or tap is performed on a menu item, the URL is launched.

Run again or hot reload again by pressing “r” on your command line to see the changes.

Conclusion

The aim of this tutorial was to give a sense of how quickly we can build a native cross-platform app with Flutter and the new Decoupled Menu API. Now you might be wondering why we didn’t talk about running the project on Windows and macOS: the support for both of these platforms is still in beta, but as an exercise you can still run the project on Windows and macOS by switching away from the Flutter stable channel, for which more information can be found here.

All the code for this project can be found on GitHub.

Apr 28 2021
hw
Apr 28

Just as I published yesterday’s article on the tech stack, I realized that I missed a few important things. I had said that the list was only a start, so I think it is fitting to continue it today. As such, today’s post won’t be as long as yesterday’s or even as long as my usual posts. I should also add a caveat that today’s post won’t make the list complete. With the industry how it is and the requirement of constantly learning, I don’t think such posts stand the test of time; not from a completeness perspective in any case.

Our target persona still remains the same as the last post. We are talking about a Drupal developer building sites and functionality for a complex Drupal website with rich content, workflows, editorial customizations, varying layouts, and more features typically found in a modern content-rich website. Since today’s post focuses on the fundamentals of working on the web, the skills may apply to more than a Drupal developer but we’ll focus on this persona for now.

These are also skills that are often hidden, aka non-functional requirements. It is very easy to forget about them (as was evidenced by yesterday’s post) but they are probably the most important of all the skills. Not understanding them sufficiently leads to significant problems. Fortunately, Drupal is a robust framework and system and is able to handle all such issues by default. It is still important to understand how to work along with Drupal.

Performance

Performance is a very interesting challenge for a large dynamic website. There are several layers to unravel here and the answer for poor performance could be at any of those layers. This means a developer needs to understand how the web works, how Drupal works, and how Drupal attempts to optimize for performance.

It is shocking how many people are unable to clearly articulate how the Internet works. To me, the lack of clear articulation is a sign of gaps in understanding in the chain of all systems that make the Internet work. To begin with, understand the sequence of events that takes place when someone visits a link. Understanding DNS is important too. I know no one really understands DNS fully, but you don’t need that. Get familiar with the interactions that take place between the client and the various systems such as DNS. Similarly, get familiar with CDNs and other caching techniques that you can use.

Scalability is a different problem from performance but still very important to understand. Understand what load balancers are and how they decide to route traffic. Also, understand how you can configure an application to work in such an environment. For this, you also have to understand the concept of state. Fortunately, PHP is already designed to be stateless and has a natural aptitude for such scalability (and the serverless paradigm).

On the application front, understand how JavaScript and other assets can impact the rendering of your page. How would you find out if the front-end code is a bottleneck, and if it is, what would you do about it? The same goes for back-end code. Understand how you would utilize caching at multiple layers to deliver performance to the user. Figure out which scenarios are better served with a reverse proxy cache like Varnish or an application cache such as Redis or memcached.

Accessibility

Accessibility is not an afterthought. Unless you are willing to review each line of code later, start with accessibility in mind from the first day. Again, Drupal meets many of the accessibility standards when you use the themes that ship with Drupal. The framework also supports many of the constructs that make writing accessible interfaces easier. Drupal 7 onwards has committed to meeting WCAG 2.0 and ATAG 2.0. Read the Drupal documentation to know more about how you, as a module developer, can write accessible code.

Security

Again, security is not an afterthought and many people who treat it as such have a fair share of horror stories. And just like before, Drupal provides constructs that make it very easy to write secure code. With Drupal 8 and the introduction of Twig, it became significantly harder for developers to write insecure code. However, it is important to understand what Drupal does for security, otherwise, you will be fighting against the core to make your code work.

Understand Drupal’s philosophy of content filtering: filter on output, not input. Understand how to use PDO and the DBAL to work with databases safely. Learn how to use the Xss helpers to write HTML output safely. Understand how you would use Drupal’s permissions and roles system to provide access control for your code.
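As a small illustration (the query and the variables here are made up for the example), this is roughly what safe database access and output handling look like with Drupal’s APIs:

<?php

use Drupal\Component\Utility\Html;
use Drupal\Component\Utility\Xss;

// Placeholders keep user input out of the SQL string itself.
$title = \Drupal::database()
  ->query('SELECT title FROM {node_field_data} WHERE nid = :nid', [':nid' => $nid])
  ->fetchField();

// Escape plain text, or strip disallowed tags from markup you have to render.
$safe_title = Html::escape($title);
$safe_markup = Xss::filter($user_supplied_markup);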

Again, these are vast topics but I am only beginning to introduce them here. Do follow the links above to specific Drupal documentation pages to understand the topic better.

Apr 27 2021
Apr 27

easy enable clean URLs for seo

A query string is text in a URL preceded with a “?”. Drupal’s “clean URLs” rewrite query strings into human-readable text. Query strings get in the way of search engines. Google’s not bad at understanding URLs with query strings, but it doesn’t always get it right. The best practice is to make sure your URLs don’t contain query strings.

Clean URLs are installed on your Drupal 8 site by default and cannot be turned off. But, it is possible that the server your site is on hasn't been properly configured, so it’s worth checking to be sure.

Complete Drupal.org documentation on Clean URLs can be found here.
 

How to tell if clean URLs are enabled

  1. Open an Incognito window and go to the homepage of your website.
     
  2. Click on a piece of content on your site. You need to navigate to an actual blog post or node, not the home page.
     
  3. Look for "?q=" in the URL.
    1. If the URL looks like this: https://drupal8.dev/my-blog-post-title then clean URLs are enabled and you can skip to the next section of this guide.
       
    2. If the URL looks something like this: https://drupal8.dev/?q=node/4 then clean URLs are not enabled. Continue in this section.
       

How to fix your URLs if they are dirty

Here’s the good news: there’s nothing you can do yourself to fix your dirty URLs. You need to get in touch with your developer or hosting company and say this magic sentence:

“It looks like my URLs are dirty because I’m seeing “?q=” in the paths. Would you please enable mod_rewrite for Apache on my server?”

You can point them to this URL: https://www.drupal.org/getting-started/clean-urls#dedicated which explains things in more detail, but the magic sentence above will normally get the job done.

Once mod_rewrite is turned on, you should use an incognito window to test the URLs again.

Did you like this walkthrough? Please tell your friends about it!


Apr 27 2021
Apr 27

Now that you have enough reasons to migrate from Drupal 7 to Drupal 8 or 9, we have listed ten essential things that you should remember before you begin the Drupal 7 to 8 migration:

Migrate from drupal 7 to drupal 9

1. Observe and Plan

For a smooth Drupal content migration, first identify the content types and structure of the existing site and note down your observations. Note down the field types, blocks, content types, taxonomies, etc.

Note down what you need to migrate and what you need to merge based on these observations. Check the Views and other site configurations and note them to replicate them in Drupal 8.

2. Create a checklist of Drupal 7 website modules

First, identify the modules that you still need, and check whether a Drupal 7 module's functionality has moved into Drupal 8 core. Keep in mind that not every Drupal 7 module is automatically migrated to Drupal 8. Several Drupal 7 modules may have had their functionality folded into a single Drupal 8 module, and some may have separated their features into two or more Drupal 8 modules.

3. Update to the latest available version

Update to the latest available version of Drupal 8 or 9 when you migrate. It will ensure cleaner automatic upgrades of Drupal 7 modules with direct Drupal 8 or 9 upgrade paths.

4. Access

Before migration, make sure you can access the Drupal 7 website's database and files - both public and private.

5. Backup your website

Before you start the Drupal 7 migration process, make sure to create a backup of the Drupal 7 website and use that for the Drupal 8 migration. Although the Drupal migration does not modify the source, it is still not a good idea to migrate a live, functional website directly.

6. Download a fresh installation

Download a fresh installation of Drupal 8, and again, remember, it MUST be FRESH! If you have made configuration changes or created content, they will be overwritten automatically when the Drupal 8 upgrade is performed.

7. Familiarize yourself

Unlike previous version upgrades, you cannot perform a direct in-place upgrade from Drupal 7 to Drupal 8. The migration relies on three modules in Drupal core: Migrate, Migrate Drupal, and Migrate Drupal UI. You need to familiarize yourself with Drupal 8's migration system and configuration.

8. Decide the choice of migration

Drush (which gives you granular control) and the browser user interface (more accessible but less control) are the two choices you have. You can opt for the method that suits your familiarity level and experience.

9. Know your source

The flexibility of the Drupal content migration system allows you to extract content and load from older versions of Drupal and other sources like CSV, XML, JSON, MYSQL, etc.

10. Perform a content audit

Perform a thorough content audit on the Drupal 7 version to identify content you need to migrate to Drupal 8 or 9. For a smooth Drupal content migration, remove the unused and irrelevant content to avoid spending time and effort migrating them.

Also Read: Custom Drupal Development: 10 Things You 'Must' Know For Best Output

Apr 27 2021
Apr 27

Component-driven theming is becoming widely popular today because of various reasons. The most significant being reusability and portability of components. Pattern lab is a front-end framework that uses an atomic design architecture to implement a component-based design. In this article, we will discuss more about Pattern Lab and how you can enhance your Drupal frontend performance with it.

Pattern Lab

What is Pattern Lab?

Think of Pattern Lab as an application that can help you organize your UI components in a pattern-driven approach. It is basically a static site generator that binds together all the UI components together. Pattern lab uses atomic design to accelerate the process of creating modular designs. It starts out really basic and gets more complex as you move up the ladder.

Pattern Driven Approach

Credit: Pattern Lab

Let’s break down these visual building blocks:

Atoms: They are the most basic building blocks of your design. For example, UI elements like buttons, search fields, form elements, labels.

Molecules: They are groups of atoms that work together to perform a particular action. For example, a search navigation that involves a combination of atoms like a search button, a search field and a label.

Organisms: Organisms are molecules and atoms grouped together to define sections of the application. They’re more complex and have more interactions. For example, a header, a footer or a related blog post section.

Templates: Groups of Organisms form Templates. They are placeholders for organisms. For example, a blog page template, login page template or a shopping cart page template.

Pages: When you combine a template with real content, you get a page.

Page Template

Credit: Pattern Lab

Why should you use Pattern Lab?

Pattern Lab makes frontend developers’ work an absolute pleasure. This is because it allows rapid design prototyping with a demonstrable interface and interactivity.

It allows developers to maintain code consistency, implement and leverage reusable components and allows multiple developers to work at the same time. All of these benefits help in easy maintenance of the code.

In terms of Drupal, frontend developers can start their work independently without the dependency on Drupal development. We can work faster and with more consistency than ever before.

Pattern Lab with Drupal

Integrating Pattern Lab with Drupal has many benefits and can improve and speed up frontend development. Thanks to the Drupal community, we now have a contributed theme, Emulsify Drupal, that comes with the Pattern Lab architecture built in. This reduces the extra development effort of achieving a Pattern Lab integration.

Creating a Heading atom in Pattern Lab

For a better understanding of implementing Pattern Lab in Drupal, let’s take an example of creating a Heading atom and integrating it in Drupal Twig template.

Heading atom directory

Heading atom directory

In the atom directory we will have 3 required files.

_heading.twig : The heading markup will go in this file
_heading.scss : This is for the heading styles
heading.yml : This is for static data to be passed into the Twig file

Heading atom Twig

Heading Twig

_heading.twig

In the heading Twig file, we can add the element markup with all the required attributes as variables. We can pass static data from the YML file or Drupal field data through these variables.

Heading.yml / Heading.json

Heading json

heading.json

This is an example of a JSON file containing static data for the variables available in the _heading.twig file.

Integrating Heading atom in Drupal template with field values

Here, we will be overriding the page template and integrating it with the heading atom.

Bartik Page

Default bartik page-title.html.twig

Heading atom integrated with Page Title

Pattern Lab Heading atom integrated in page-title.html.twig

Here, in page-title.html.twig, instead of the default H1 element, we are importing the heading atom as a reusable element and passing Drupal field values to the variables defined in the heading atom - heading_level and heading.
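The code from the screenshots is not reproduced here, but a rough sketch of the idea (the include path and variable names are illustrative, not the exact ones from Emulsify) might look like this:

{# _heading.twig: the atom renders its tag and text from variables. #}
<h{{ heading_level|default(2) }}>{{ heading }}</h{{ heading_level|default(2) }}>

{# page-title.html.twig: include the atom and map Drupal values to it. #}
{% include "@atoms/_heading.twig" with {
  heading_level: 1,
  heading: title,
} only %}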

Using the same pattern, we can create engaging Drupal websites using Pattern Lab.

Apr 27 2021
Apr 27
[embedded content]

Don’t forget to subscribe to our YouTube channel to stay up-to-date.

File Management Series

Removing files from Drupal is trickier than you might think. There has been another tutorial about this using the File Delete module. It requires going through a cycle of at least 6 hours. By default, Drupal protects files from removal while they are still being used and referenced in other content. Instead of directly deleting a file, the linked usages need to be cleared first, and the files marked as temporary; only then will the system remove them during cleanup.

This is actually a good policy. However, sometimes it’s annoying to wait for a few hours before these files are cleaned up. In particular, if we are sure these files are unused or orphaned, there is another module called Fancy File Delete, which solves this problem.

Fancy File Delete allows you to delete files straight away without having to wait.

However, great care must be taken when using this module, because it has a ‘force delete’ option. Use of this module should be restricted to experienced administrators.


Getting Started

Before we begin, go download and install Fancy File Delete.

Download the module using the following Composer command:

composer require drupal/fancy_file_delete

NOTE: The module requires the Views Bulk Operations (VBO) module which will be downloaded when you run the above command.
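Once Composer has downloaded the module, it still needs to be enabled. If you use Drush, one way to do that is:

drush en fancy_file_delete -y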

Deleting Files

To follow proper procedure, administrators should still go through the normal file handling process by removing files from the nodes and the media library first, even though these steps can be bypassed using this module.

After installing and enabling the modules, go to Admin >> Configuration >> Content authoring >> Fancy File Delete. The first tab, ‘Info’, shows a brief description of how to use the module.

Open the ‘List’ tab; a list of all the files in the system can be found here. Select the file(s) to be deleted using the checkbox in front of each file, then select ‘Delete Files’ in the ‘Action’ option at the top of the page, and click the ‘Apply to selected items’ button.

Note that there is another option for Action, which is ‘FORCE Delete Files (No Turning Back!)’.  When a file cannot be deleted, try this option but handle with care.

Can’t See Action Drop-Down

If you can’t see the Action drop-down on the list page that means you’ll need to reconfigure the view.

1. Edit the List view by clicking on the pencil icon.

2. Once you’re editing the view, click on the “Views bulk operations” field.

3. Enable the actions by clicking “Select all”.

Then save the view and you should see the Action drop-down.

Delete Files manually by FID

As can be seen above, there is a File ID for each file.  Files can be deleted using this FID by entering them under the ‘Manual’ tab. Just enter the FIDs of the files to be deleted, one per line, and press the ‘Engage’ button.

Deleting Orphaned and Unmanaged Files

Additional options are available to delete orphaned and unmanaged files if the system detects them.

Deleting Files from the ‘Files’ or ‘Image’ Fields

Note that in the above procedure using the ‘Fancy File Delete’ module, it is not a requirement to remove the files from the nodes first:

  • It bypasses the default file delete procedure in Drupal, and therefore it should be handled with care.
  • Files deleted will also be removed from previous node revisions too.
  • Files deleted will also be removed from the list of files under Admin >> Content >> Files
  • Files deleted are permanently removed totally from the system, including the physical file in the server.
  • It does not require waiting for the minimum 6 hours requirement.

When files are uploaded through the Media module and deleted without first removing them from the nodes and the media library, the ‘Fancy File Delete’ module still removes the files as above, except for the following:

  • Files deleted will still be removed both physically from the server and from the list of files under Admin >> Content >> Files, like above.
  • The file will no longer be shown in the node because the reference is now empty, but an empty entity will remain both in the node and in the media library.

Using Drush

If you use Drush, you can delete files straight away using the “drush entity:delete” command.

For example:

drush entity:delete file 1

This command will remove the file from the Files page and delete it from the file system and you won’t have to wait.

Summary

This module offers a more straightforward and efficient way to remove files from a Drupal site permanently. It is powerful and capable of bypassing the default file handling procedures in Drupal. But that also means bypassing the file protection mechanism of Drupal. It handles file deletion in an efficient manner which is an advantage, but it is also a disadvantage at the same time.

On a large site, there are more users and content authors. When the ability to delete files is granted to more people, it is difficult to guarantee that everybody is careful enough to understand the underlying structure and relationships between files and content. There is a higher risk of accidental deletion of files. In such situations, a better protection mechanism might be necessary.

On a small site or in a small team where content is easier to manage and control, efficiency might be preferred.


Apr 27 2021
hw
Apr 27

The title of this post is not really accurate, but I can’t think of another way to say it. This post is related to my earlier one on what a Drupal Developer does day-to-day. Here, I will talk about some of the skills required of a Drupal developer. I am not aiming for completeness in this post (that’s a goal for another time) but I will try to list all skills required to build a regular Drupal site, deploy it, and keep it running. Some of these skills are foundational whereas others may be only needed for specific requirements. Moreover, not all skills are required at a high expertise level. For some roles, even awareness is enough.

This post would be useless without a target persona. Different organizations have different needs and what works at one organization may not work at another at all. Similarly, different roles within an organization have different needs. For example, at Axelerant, we do not have a dedicated site-builder role but this is something that is expected of a Drupal Developer. Such an arrangement may not work at all for your organization nor can I say this arrangement will work forever for Axelerant. In fact, it is even weird to use the word “forever” here.

The target persona for this post is a Drupal Developer building sites and functionality for a complex Drupal website with rich content, workflows, editorial customizations, varying layouts, and more features typically found in a modern content-rich website. Since this is Drupal we are talking about, the above definition is actually quite broad. It includes a web presence for organizations from publishing, healthcare, higher-education, finance, government, and more. With that settled, let’s look at the skills.

Site-building

A Drupal developer would need to have the following skills related to site-building. Even if a developer does not build sites, there is a good chance they would be interfacing with these aspects from code. Hence, most of these are foundational.

  • Content modelling: Understand what makes a good content structure for a site. You should be able to determine if the content structure you are building is the relevant one for the requirements.
  • Content moderation: Most sites need this. I have seen this being used even on simple sites when there is more than one type of user. Understanding how the moderation workflow works and how it ties with the entity-field structure along with its revisions is important.
  • Multiple languages: It is important to understand how Drupal implements multilingual functionality at both interface and content (translation) level. Depending on the requirements, certain aspects of how translated content is stored in fields could be very relevant. At a basic level, you should also be aware of how Drupal determines the language when it renders a page or response.
  • Configuration Management: While this may seem more to do with developers, it is important to understand this from a site building perspective. You need to understand how the content types you create are stored in configuration and how that is different from how, for example, site information is stored. This means you need to understand configuration entities and simple configuration. You also need to understand how configuration can be changed depending on the environment and how it could be overridden (see the settings.php sketch after this list).
  • Common solutions like views and webforms: Before you set about developing for some functionality, you have to make sure if there is a solution out there already. You also need to know how to pick modules for your problems.
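To make the point about configuration overrides concrete, here is a small sketch (the values are just an illustration) of how configuration can be overridden per environment from settings.php or settings.local.php without touching the exported configuration:

<?php

// settings.local.php on a staging environment.
$config['system.site']['name'] = 'My Site (Staging)';
$config['system.performance']['css']['preprocess'] = FALSE;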

Development

I don’t think this section needs any explanation as the entire post is about this. So let’s jump in.

  • PHP: This might seem like a no-brainer. Drupal is written in PHP so it is a good idea to be very comfortable with PHP. Understand object-oriented programming and how Drupal uses different constructs such as traits. Also, understand patterns such as dependency injection and the observer pattern (hooks and event subscribers are based on this; see the event subscriber sketch after this list). Then, keep up with what’s new in PHP, if possible, and use it.
  • Composer: Arguably, this is a part of site-building as well but there is a good reason to keep it here. Apart from using composer to download modules, understand how it helps in writing the autoloader. Also, understand how composer works to select a module or a package and the reasons it might fail. Most of this comes with experience.
  • Caching: This is probably the most important part to understand about Drupal. Caching is deeply ingrained at multiple levels in Drupal and it is important to understand how it works. Almost all your code will need to interact with it even if you aren’t planning to cache your responses.
  • Other Drupal APIs: There are some common APIs you should be familiar with; for example, Form API and Entity API come to mind. But also be aware of other APIs such as Batch, Queue, Plugin, Field, TypedData, etc. You should know where to look when you need them for a problem.
  • Automated testing: The Drupal core has a very vast collection of test cases and that means you may only have to write functional test cases for your site. You should understand the different tools that you can use to write them.
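As a small illustration of the observer pattern mentioned above (the module and class names are hypothetical), an event subscriber is just a class registered as a tagged service:

<?php

namespace Drupal\mymodule\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\RequestEvent;
use Symfony\Component\HttpKernel\KernelEvents;

/**
 * Reacts to the kernel request event; registered in mymodule.services.yml
 * with the 'event_subscriber' tag.
 */
class RequestLogger implements EventSubscriberInterface {

  public static function getSubscribedEvents() {
    // Map event names to the methods that observe them.
    return [KernelEvents::REQUEST => 'onRequest'];
  }

  public function onRequest(RequestEvent $event) {
    // React to the event; dependencies would normally be injected.
  }

}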

Deployment

You have to understand where your site is going to live to build it well. We don’t live in that age where you would write code and throw it over a wall to the ops team for them to deploy it. You may not be deploying it but you have to understand the concepts.

  • Servers: Understand how servers work. You don’t need to know enough to configure one securely (unless that’s your job) but you should know enough to understand the terms. For example, you should know what the common web servers are and how they may be configured to run PHP. Also, understand what kind of configuration Drupal needs to run.
  • High Availability: Given Drupal’s market, there is a good chance that the site you are building will need to run in a highly available environment. Understand what it means and what that implies for your Drupal site. Most of these issues may be resolved for you if you are using one of the PaaS providers but if you are building your own server(s), you should understand how to configure Drupal to run on multiple servers. Even if you are using PaaS, you should still understand how running on multiple servers could affect how you write code, e.g., how you work with temporary files.
  • CI/CD: Continuous Integration is very common today and it is important you understand it to be a productive developer. There are dozens of solutions to choose from and most have some kind of a free offering. Pick one and try it out.

And more…

Like I said before, I am not aiming for completeness here but only in making a start. Please let me know if I have missed something obvious. Given the nature of these posts, I can only spend 1-2 hours planning and writing these, so there is a very good chance I have missed something. Let me know.

Apr 26 2021
Apr 26

I am a born and raised New Yorker. In the 80's and 90's many people, including my teenage self, would hop the subway turnstile and become an illegal free-rider on NYC's mass transit system. At some point, the city said enough is enough and started fining and even arresting turnstile hoppers. This action scared me straight, and I started paying my fare share (pun intended). It discouraged most of the subway's free-riders, increased the MTA's revenue, and changed the NYC mass transit system, making it safer and better. In addition to getting people to pay their fare, this policy also managed to catch criminals before they entered the subway system, thus reducing crime throughout the city.

Apr 26 2021
Apr 26

Now on Drupal 9, the community isn’t slowing down. This month, we continue our interview with Angie Byron, a.k.a. Webchick, a Drupal Core committer and product manager, Drupal Association Board Member, author, speaker, mentor, and Mom, and so much more. Currently, she works at Acquia for the Drupal acceleration team, where her primary role is […]

The post Community Corner: Interview with Angie Byron, Part Two appeared first on php[architect].


Apr 26 2021
Apr 26

Automated deployment of software - whether it’s new packages, patches, or configuration changes - is a fact of life in modern software development and management. Automated infrastructure, however, is a newer set of tools and processes. With Amazon EKS and Pulumi, Tag1 is tackling these challenges to meet the needs of Fortune 500 customers.

In this second part of our series on automating infrastructure, Managing Director Michael Meyers is joined by CIO Jeff Sheltren and Senior Infrastructure Engineer Travis Whitehead. They discuss how Tag1 is using these tools to create and deploy entire websites, ready for use, in just a few minutes.

From a standardized distribution, to Docker, to Pulumi, and finally to deployment, you’ll hear about the ins and outs of the workflows Tag1 is creating to be faster and more successful.

[embedded content]


Additional resources

For a transcript of this video, see Transcript: Deploying New Enterprise Web Applications in Minutes- Part 2.

Photo by Amir Hanna on Unsplash

Apr 26 2021
Apr 26

Drupal 7 is quite different from Drupal 8 and Drupal 9. The structural changes are so significant that moving a Drupal 7 website to Drupal 8 is more like a migration project from a different CMS than a product upgrade. For instance, the brand new theme and PHP library in Drupal 9 are features that attract most Drupal CMS owners.

Drupal Website Development

Fortunately, there is no large gap between Drupal 8 and Drupal 9.

Drupal 9 is, in reality, more like the next minor core update, like going from 8.8 to 8.9, with some newly added features. Remember when upgrading from Drupal 7 to Drupal 8 led you to rebuild the site from scratch? Don't be scared; Drupal 9 offers the easiest upgrade from Drupal 8. Since Drupal 9 is backward compatible with Drupal 8, you will not have to rewrite your custom code after upgrading. If you make a practice of removing old and deprecated code, upgrading from Drupal 8 to Drupal 9 will be as smooth as butter.

If you are still using Drupal 7, it will reach end of life by November 2021. To avoid problems, you can upgrade your Drupal website to Drupal 8 or Drupal 9. To help you do a smooth upgrade, here is a comparison guide for Drupal 7 vs. Drupal 8 vs. Drupal 9.

Drupal Features

Apr 26 2021
Apr 26

In Annertech's Technical SEO Service for Drupal Websites, we talked about what we do during a technical SEO implementation. But let us now look at the longer-term benefits of implementing one for your organisation.

1. Increase Reach Across all Channels

When your content is created in such a way as to allow Google, Bing and others to easily discover it, index it, and display it, it stands to reason that more people will also find it. With this increase in reach across different channels, you have the inevitable increase in potential sales and engagement.

2. Increase Search Engine Ranking

Following on from the last point, if Google can easily understand your content, it stands a much better chance of appearing higher in search results, up to and including “Position Zero”. This means not just being at the top of general search results, but also featuring within the "People also ask" and "Featured snippets" sections. Being placed here is Google gold dust.

3. Rich Snippets in Search Results

When your content appears in search results, you want more than a title with a short description; you want images, video, ratings, calls to action, etc. Anything that makes the search result showing your website more enticing than your competitors' is a real bonus. Remember, a more enticing or prominent result will mean more engagement.

Apr 26 2021
hw
Apr 26

Over the years, I have had significantly less time for development and an increasing need to move around. After dealing with back pain due to the weight of my Dell laptop while travelling to conferences, I bought a 15″ MacBook Pro. More recently, with the issues with Docker performance on Mac, I have been thinking of getting a Linux box. I was further motivated when I bought an iPad last year and wanted to use it for development. Now, with my old MacBook Pro failing because of its keyboard and hard disk, I have a new MBP with the M1 chip and just 8 GB RAM. I am more and more interested in making remote development work efficiently for me.

It is not hard to set up a remote machine for development in itself. The problem I want to solve is to make it easy to spin up machines, maintain them, and tear them down. I also want to make it easy to set up a project along with the necessary tooling to get it running as quickly as possible. For now, the only problem I am solving is to set up the tooling quickly. I am calling this project Yakht (after yacht, as I wanted a sea-related metaphor).

Current workflow

While it’s far from the level of automation I am thinking of, it’s not too bad. I was able to set up a machine for use within a few minutes. This is what the process looked like:

  1. Create a 1 GB instance on DigitalOcean in a nearby region (for minimum latency).
  2. Add a wildcard DNS record for one of my domain names so that I can access my projects on ..
  3. Set the domain name and IP address in my Ansible playbook’s variable files and inventories.
  4. Run the Ansible playbook.

These steps had a machine ready for me with all tools required for Drupal development using Docker (Lando in my case). The Ansible playbook installs a bunch of utilities and customizations along with the required software like PHP, Docker, composer, Lando, etc. Only PHP CLI is installed because all development happens within Docker anyway. It also configures Lando for remote serving and sets the base domain to the domain I have configured, which means I can access the URLs generated by Lando (thanks to the wildcard DNS). With the Drupal-specific tooling we have written (some of which I have written before), setting up a new project is fairly quick.

A lot of these tools are somewhat specific to me (such as fish-shell and starship). I need to make it customizable so that someone else using it can pick a different profile. That’s a problem for another day.

Current trials

I have been using this machine for a long time now, which is not how I intend for this to be. I am trying out a few tools and customizations before putting them in the Ansible playbook. Most notably, I am trying out cdr and using it as an online IDE, which is very similar to VS Code. It took a little effort to serve it via Caddy, but it works well most of the time. It times out frequently, though I think this is because this instance only has 1 GB of RAM. When it times out, the connection breaks and you need to reload, which can get frustrating. Fortunately, this happens frequently for a while and then it works fine for long periods of time. In any case, I doubt this will happen on an instance with a reasonable amount of RAM.

Screenshot of cdr running in Chrome


I know that VS Code has good support for remote development over SSH but I also want to be able to use the IDE over iPad and a browser-based IDE solves that. I am also considering trying out Theia and Projector but that’s for another day.

It’s also missing a few things I want in my development machines such as my dotfiles and configuration for some of the tools (such as custom fish commands). For now, all of this is set up manually. Of course, my intention is to automate all of these steps (even DNS).

Current problems

The general problem with these kinds of tools is to maintain a balance between flexibility and ease of use. By ease, I mean not having to configure the tool endlessly and frequently to make it do what you want. But that’s exactly what flexibility needs. For now, I am not trying hard to maintain flexibility. Once I have something that works with reasonable customization, I will figure out how to make it much more customizable.

Another problem is accessing remote servers from this machine. Right now, I am using SSH agent forwarding to be able to access remote servers from my development instance without having the SSH key there (and it works). But this doesn’t work if I am using the terminal in cdr. I am still looking for a solution to this problem that doesn’t involve me copying my keys over to the development instance. One idea is, if forwarding is not possible at all, to generate new keys for every instance and give you public keys that you can paste in services you want. This is secure compared to copying the private key but definitely a hurdle to get over.

I am excited by this project and hope to have more updates over time. For now, I hope you find the Ansible playbook useful. Also, I am looking for ideas and suggestions in solving this general problem of remote development. Please let me know via comments or social media what you think.

Apr 25 2021
hw
Apr 25

I thought I was done with the series of posts on object-oriented programming after my last one. Of course, there is a lot we can write about object-oriented programming and Drupal, but that post covers everything noteworthy from the example. There is a way which is old-school but works as it should and another which looks modern but comes with problems. Is there a middle-ground? Tim Plunkett responded on Twitter saying there is.

There's a third way! See https://t.co/cFizL60Loy and https://t.co/so2syXyXZS

— timplunkett (@timplunkett) April 24, 2021

At the end of the last post, I mentioned that the problem is not with object-oriented programming. It is with using something for the sake of using it. If you understand something and use it judiciously, that is more likely to result in a robust solution (also maintainable). Let’s look at the approach mentioned in the tweet in detail.

The fundamental difference

The problem with the version with objects in the last post was not that it used objects. It was that it overrode the entry point of the form callback to change it. Using classes has its advantages, most notably that we can use dependency injection to get our dependencies. For more complex alterations, it is also useful to encapsulate the code in a single class. But the approach of overriding the entry point made the solution unworkable.

On the other hand, the form_alter hook is actually designed for this purpose. Yes, it cannot be used within classes and you have to dump it in a module file along with all the other functions. But it works, and that’s more important. There is no alternative designed for this purpose. So, in a way, the fundamental difference is that this method works whereas the other doesn’t. It doesn’t matter that we can’t use nice things like dependency injection here if it doesn’t even work.

Bringing them together

The two worlds are not so disjoint; PHP handles both brilliantly, after all. If you want to encapsulate your code in objects, the straightforward solution is to write your code in a class and instantiate it from your form_alter hook. Yes, you still have the hook but it is only a couple of lines long at most and all your logic is neatly handled in a class where it is easy to read and test. The class might look something like this.

<?php
 
namespace Drupal\mymodule;
 
use Drupal\Core\Form\FormStateInterface;
 
class MySiteInfoFormAlter {

  public function alterForm(array &$form, FormStateInterface $form_state, $form_id) {
    // Add siteapikey text box to site information group.
    $form['new_element'] = [
      '#type' => 'textfield',
      // ... more attributes
    ];
  }

}

And you can simply call it from your hook like so:

function mymodule_form_system_site_information_settings_alter(&$form, \Drupal\Core\Form\FormStateInterface $form_state, $form_id) {
  $alter = new \Drupal\mymodule\MySiteInfoFormAlter();
  $alter->alterForm($form, $form_state, $form_id);
}

You could skip the object instantiation by making the method static, but let’s not go there (you lose all the advantages of using objects if you do that).

Dependency Injection by Drupal

This is already looking better (and you don’t even need route subscribers). But let’s take it a step further to bring in dependency injection.

We can certainly pass in the dependencies we want from our hook when we create the object, but why not let Drupal do all that work? We have the class resolver service in Drupal that helps us create objects with dependencies. The class needs to implement ContainerInjectionInterface but that is a very common pattern in Drupal code. With such a class, you only need to create the object instance using the class resolver service to build it with dependencies.

function mymodule_form_system_site_information_settings_alter(&$form, \Drupal\Core\Form\FormStateInterface $form_state, $form_id) {
  \Drupal::service('class_resolver')
    ->getInstanceFromDefinition(MySiteInfoFormAlter::class)
    ->alterForm($form, $form_state, $form_id);
}
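
To complete the picture, here is a minimal sketch of what the class might look like once it implements ContainerInjectionInterface; the config.factory dependency is only an assumption for illustration, so swap in whatever services your alteration actually needs.

<?php
 
namespace Drupal\mymodule;
 
use Drupal\Core\Config\ConfigFactoryInterface;
use Drupal\Core\DependencyInjection\ContainerInjectionInterface;
use Drupal\Core\Form\FormStateInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;
 
class MySiteInfoFormAlter implements ContainerInjectionInterface {
 
  protected $configFactory;
 
  public function __construct(ConfigFactoryInterface $config_factory) {
    $this->configFactory = $config_factory;
  }
 
  public static function create(ContainerInterface $container) {
    // The class resolver calls this factory method and hands us the container.
    return new static($container->get('config.factory'));
  }
 
  public function alterForm(array &$form, FormStateInterface $form_state, $form_id) {
    // Use the injected service instead of reaching for \Drupal::config().
    $site_name = $this->configFactory->get('system.site')->get('name');
    $form['new_element'] = [
      '#type' => 'textfield',
      '#default_value' => $site_name,
      // ... more attributes
    ];
  }
 
}

Nothing about the hook changes; the class resolver takes care of calling create() and wiring in the dependencies.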

For better examples, look at the links Tim Plunkett mentioned in the tweet: the hook for form_alter and the method.

I hope you found the example useful and a workable middle-ground. Do let me know what you think.

Apr 24 2021
hw
Apr 24

Update: Read the follow-up to this post where I discuss a mixed approach combining both of the approaches here.

I previously wrote about how the object-oriented style of programming can be seen as a solution to all programming problems. There is a saying: if all you have is a hammer, everything looks like a nail. It is not a stretch to say that object-oriented programming is the hammer in this adage. That post was quite abstract and today I want to share a more specific example of what I mean. More specifically, I’ll talk about how using “objects” to alter forms without thinking it through can cause harm.

But first some context: this example comes from my weekly reviews of code submitted as part of our interview test. This has happened frequently enough that I suspect it is actually a recommendation somewhere. Even if it is just a case of people copying each other’s work, it certainly is evidence that this has not been thought out. In fact, this makes for a good interview question: where would this method fail? I am going to answer that in this post.

The traditional method

Let’s look at the traditional method first. Drupal provides hooks to intercept certain actions and events. For example, Drupal might fire hooks in two situations: in response to events like saving a node, or to collect information about something (e.g. hook_help). You will find a lot more examples about the latter and that is what we are going to talk about today.

Drupal fires a few different hooks when a form is built. Specifically, it gives the opportunity to all the enabled modules to alter the form in any way. It does this via a hook_form_alter hook and a specifically named hook_form_FORM_ID_alter. So, for example, to alter a system site information form, either of the functions below would work:

function mymodule_form_alter(&$form, \Drupal\Core\Form\FormStateInterface $form_state, $form_id) {
  if ($form_id == "system_site_information_settings") {
    $form['new_element'] = [ /* attributes */ ];
  }
}
 
// ... OR ...
 
function mymodule_form_system_site_information_settings_alter(&$form, \Drupal\Core\Form\FormStateInterface $form_state, $form_id) {
  $form['new_element'] = [ /* attributes */ ];
}

Adding elements or altering the form elements in any way is a simple affair. Just edit the $form array as you want and you can see the changes (with cache clear, of course). This is the old-school method and it still works as of Drupal 9.

The OOPS approach

More often than not, I see the form being altered in a much more involved way. Broadly, this is how it looks:

  1. Create a form using the new object-oriented way but extending from Drupal\system\Form\SiteInformationForm instead of the regular FormBase.
  2. Define an event subscriber that will alter the route using the alterRoutes method.
  3. In the event subscriber, override the form callback to your new form.

This gist contains the entire relevant portion of code.
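
Since the gist is not reproduced here, a rough sketch of steps 2 and 3 could look like the following; the class names and the form class it points to are assumptions, and only the route name comes from core.

<?php
 
namespace Drupal\mymodule\Routing;
 
use Drupal\Core\Routing\RouteSubscriberBase;
use Symfony\Component\Routing\RouteCollection;
 
/**
 * Points the site information route at our own form class.
 */
class MyRouteSubscriber extends RouteSubscriberBase {
 
  /**
   * {@inheritdoc}
   */
  protected function alterRoutes(RouteCollection $collection) {
    if ($route = $collection->get('system.site_information_settings')) {
      // Swap the core form controller for our extended form class.
      $route->setDefault('_form', '\Drupal\mymodule\Form\MySiteInformationForm');
    }
  }
 
}

The subscriber also has to be registered in the module's services file with the event_subscriber tag. Notice that whichever subscriber runs last simply overwrites the '_form' default, which is exactly the conflict described next.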

After doing all this, you might expect that the code should at least work as expected. Many people do. But if you have been paying close attention, you might see the problem. If not, think about what would happen if two modules attempt to alter the same form this way. Only one of them would win out.

If there are two modules altering the same route, the last one to run will win and that module’s form changes will be used. The form controllers from the previous modules will never be executed. You could extend the first module’s form controller in the second module (so that changes from both modules take effect), but that is not reasonable to expect in the real world with varied combinations of modules.

So, we shouldn’t use objects?

I am not saying that. I am saying that we should think about how we are applying any programming paradigm to build a solution and where it might fail. In our example, if Drupal supported an object-oriented version of form alters, that would have been safe to use (there is an open issue about this). In fact, there is discussion about using Symfony forms and also some attempts in contrib space. Until one of those solutions gets implemented, the form_alter hook is the best way to alter forms. And there is a good chance that such hooks get replaced in time. After all, the event-based hooks did get replaced by events in most cases.

For now, and always, use the solution that fits your needs. Using objects or using functional programming doesn’t necessarily make a program better. It is using our skills and our judgement that makes a program better.

Update: Read the follow-up to this post where I discuss a mixed approach combining both of the approaches here.

Apr 23 2021
Apr 23

Aten loves libraries. We’ve built a range of software solutions for libraries all over the country, including the John D. Rockefeller Jr. Library in Williamsburg, VA, the Denver Public Library in our own Denver county, the Richland Library in South Carolina, Nashville Public Library in Tennessee and the Reaching Across Illinois Library System (“RAILS”). It’s remarkable just how many commonalities libraries share when it comes to the features, tools, and considerations their websites need to better serve their users.

Some of those similarities are a no-contest justification for building common, configurable library solutions — solutions like Intercept: a reimagined, generalized approach to library event management, room and equipment reservation, and customer tracking. We co-developed Intercept for Drupal with Richland Library, reused it with L2 / RAILS and actively maintain it in the hopes that it goes on serving libraries for years to come. But for all of the similarities we see between libraries and their digital needs, there are some key differences, too.

Complex permissions needs

Library Directory and Learning Calendar (“L2”) — a project of RAILS — had unique permissions needs. Their staff needed custom view, edit, and delete permissions to website content managed at different organizational levels of the library system. Those organizational levels, structured hierarchically from Illinois State Library, to Regional Library System, through Catalog Consortium, Agency and finally Location, needed associated cascading permissions — i.e., permissions granted at a higher organizational level should cascade to associated content in lower organizational levels. An administrator for an Illinois State Library piece of content, for example, should be able to exercise their administrative permissions on Regional, Consortium, Agency, and Location content associated with that State Library content. An administrator for an Agency would have administrative permissions for that Agency’s Locations.

Granular role-based Drupal permissions wouldn’t cut it. That’s because with standard Drupal permissions, each user role can be assigned view, edit, and delete permissions to each content type (say any Regional Library System Page), but we needed to assign those permissions to the appropriate instances of a content type — like the specific Regional Library System Page that belongs to the west-central region, for example.

There are plenty of contributed modules that start down the right path towards (for lack of a better term) cascading permissions by content affiliation, but they wouldn’t have gotten us all the way there. Both the Group and Permissions by term modules, for example, can be incredibly useful in similar situations. In this case, given the features and functionality contributed modules would introduce that we don’t need, plus the level of modification necessary to achieve our goals, we decided on a lightweight, custom solution.

Role and affiliation based custom permissions for L2

Permissions for L2 staff are established using a custom affiliation entity, which stores data about the role a particular user has in relation to a specific piece of content. The custom affiliation entity references a user, a role (a taxonomy term like Admin, Manager, or Staff, for example), and a specific piece of content (a node). A variety of other fields are established in the same affiliation entity in order to store additional metadata about the relationship like contact information, job title, job description, or other details.

Custom affiliation entity field configuration screen (Drupal 8). Affiliation, Role / Access Group, and User fields establish a permission relationship. Metadata fields (blurred here) can provide some arbitrary data about the relationship.

The custom affiliation entities are organized into their own bundles, one for each of the hierarchically structured organizational levels previously described: Illinois State Library, Regional Library System, Catalog Consortium, Agency, and finally Location. This way each individual type of affiliation can contain the metadata fields appropriate for its specific organizational level. Finally, there is an arbitrary Group affiliation which affiliates a user with a piece of content without granting the cascading permissions that accompany standard affiliations.

Custom affiliation entity types (Drupal 8). Each organizational level is represented by its own custom affiliation entity type.

The content items (read nodes) whose permissions we’re controlling are organized along the same organizational levels: Illinois State Library, Regional Library System, etc. Each of these content types uses a unique entity reference field to establish a parent / child relationship along the organizational levels. Locations are associated with Agencies, Catalog Consortia, Regional Library Systems or directly with the Illinois State Library; Agencies are associated with other Agencies and / or Regional Library Systems; Catalog Consortia are associated with Regional Library Systems; and Regional Library Systems with the single parent Illinois State Library. It’s a complicated web!

Cascading permissions diagram. Permissions granted on content at any one organizational level cascade to associated content in the lower levels.

Once the content for L2 was developed and properly structured with the appropriate parent / child relationships, granting a user specific view, edit, and delete permissions for a particular region of the structured content tree was straightforward. Simply create an affiliation that assigns the user a role in relation to content at a specific level of the organization, and voila: the user gains access to that content and all of its children at each of the lower levels.
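
To illustrate how that cascade could be resolved in code, here is a hedged sketch that walks up a node’s parent references to collect its ancestors; the field name field_parent is an assumption standing in for the per-bundle parent reference fields described above, and an affiliation granted on any of the returned ancestors would also cover the node passed in.

<?php
 
use Drupal\node\NodeInterface;
 
/**
 * Collects every ancestor of a node by following its parent reference field.
 */
function mymodule_collect_ancestors(NodeInterface $node) {
  $ancestors = [];
  $seen = [];
  $current = $node;
  while ($current->hasField('field_parent') && !$current->get('field_parent')->isEmpty()) {
    $parent = $current->get('field_parent')->entity;
    // Stop on a missing parent or a circular reference.
    if (!$parent instanceof NodeInterface || in_array($parent->id(), $seen)) {
      break;
    }
    $ancestors[] = $parent;
    $seen[] = $parent->id();
    $current = $parent;
  }
  return $ancestors;
}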

The permission grid: Tying it all together

One last element ties our whole permissions system together: a robust permissions map that associates view, edit, and delete permissions per custom defined Role / Access Group in relation to various entities. Unlike the unwieldy Drupal permissions grid that assigns roles to broadly defined permissions with the help of about a million radio buttons, our definitions can be static (think code, not GUI) and only have to deal with view, edit, and delete permissions for entities or entity fields. Each Role / Access Group has its view, edit and delete permissions defined per bundle or per specific bundle / field combination, resulting in very clean-cut — and extremely granular — permission control.

[
  {
    "entity": "node",                         // Entity type we're granting access to
    "field": "",                              // Field for this entity, left blank we're defining with entity level access
    "affiliation bundle": "location",         // Affiliation entity that controls access, in this case the location affiliation type
    "target bundle": "location",              // Entity bundle we're granting access to, in this case a location node
    "role_access_group": "location_manager",  // Custom defined Role / Access Group that grants this permission
    "edit": 1,                                // Edit permission
    "view": "",                               // View permission
    "delete": ""                              // Delete permission
  }
]

Our “permissions grid” is made up of about 500 similar declarations in a single JSON file, which range through a variety of Roles / Access Groups, a couple of entity types, and tons of bundle / field combinations for some of the more complex field level permissions.

Individual permissions grants are then handled through hook_ENTITY_TYPE_access() and hook_entity_field_access(), which use a custom service to load all of the requesting user’s affiliation entities, determine their role in relation to the content (node) in question, then find that role’s particular permissions using our custom JSON permissions map. Here’s an example for node access.

/**
 * Determines if the operation is allowed for a node.
 *
 * @param object $relationship
 *   The relationship object.
 * @param string $operation
 *   The operation being attempted.
 * @param Drupal\node\Entity\Node $node
 *   The node on which the operation is being attempted.
 *
 * @return bool
 *   True if the operation is allowed.
 */
public function nodePermission(object $relationship, $operation, Node $node) {
  // Create an array of data from l2_access_permissions.json, each element
  // of which is an array with elements matching $relationship object
  // properties.
  $module_path = drupal_get_path('module', 'l2_access');
  $matrix = json_decode(file_get_contents($module_path . '/l2_access_permissions.json'),
    TRUE);
 
  // Create an array matching the structure of $matrix elements to see if
  // it matches any $matrix elements (which would mean that there might be
  // a permission that allows this $operation.)
  $relationship_array = [
    'entity' => 'node',
    'field' => '',
    'affiliation bundle' => $relationship->affiliation_bundle,
    'target bundle' => $relationship->target_bundle,
    'role_access_group' => $relationship->role_access_group,
  ];
 
  // Set the $relationship_array's 'edit' and 'view' elements based on the
  // $operation's value.
  $operation = ($operation == 'update') ? 'edit' : $operation;
  $operations = ['view', 'edit', 'delete'];
  foreach ($operations as $op) {
    $relationship_array[$op] = ($op == $operation) ? 1 : "";
  }
 
  // Handy here that array_search() can test whether an array is an element
  // in another array. Here: is $relationship_array an element of $matrix?
  // Use a strict check: array_search() returns the matching index, and a
  // legitimate match at index 0 would otherwise be treated as "not found".
  $match = array_search($relationship_array, $matrix);
  if ($match === FALSE) {
    return FALSE;
  }
 
  // Found a match. Does it allow access?
  switch ($operation) {
    case 'edit':
      if ($matrix[$match]['edit'] == 1) {
        return TRUE;
      }
      break;

    case 'view':
      if ($matrix[$match]['view'] == 1) {
        return TRUE;
      }
      break;

    case 'delete':
      if ($matrix[$match]['delete'] == 1) {
        return TRUE;
      }
      break;
  }
 
  // If we're here, this $relationship doesn't provide $operation access.
  return FALSE;
}
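
To show where a method like this plugs in, here is a hedged sketch of the corresponding hook_ENTITY_TYPE_access() implementation for nodes; the service name l2_access.permission_checker and its getRelationships() helper are assumptions for illustration and stand in for the custom service mentioned above.

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Session\AccountInterface;
use Drupal\node\NodeInterface;
 
/**
 * Implements hook_ENTITY_TYPE_access() for node entities.
 */
function l2_access_node_access(NodeInterface $node, $operation, AccountInterface $account) {
  // Hypothetical service that loads the user's affiliation entities.
  $checker = \Drupal::service('l2_access.permission_checker');
 
  foreach ($checker->getRelationships($account) as $relationship) {
    if ($checker->nodePermission($relationship, $operation, $node)) {
      // Grant access; the result varies per user because it depends on affiliations.
      return AccessResult::allowed()->cachePerUser();
    }
  }
 
  // No affiliation grants this operation; let other modules and core decide.
  return AccessResult::neutral()->cachePerUser();
}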

The end result is powerful and flexible. Our JSON permissions map tells us which Roles / Access Groups have which permissions per entity or entity / field combination. The custom affiliation entities grant users a specific Role / Access Group in relation to a specific piece of content, and that content’s entity references to other entities allow the permissions to cascade to entities in lower organizational levels.

That may sound like a lot, but it’s surprisingly simple. The solution boils down to a handful of entity access hook implementations that use a custom service to look up the current user’s permissions by affiliation via a JSON permissions map. The total footprint sits at around 1,000 lines of code — not counting the permissions map itself — and flexibly manages thousands of users’ permissions across thousands of complex, hierarchical content relationships down to the field level. Pretty neat.

Apr 23 2021
Apr 23

In preparation for the minor release, Drupal 9.2.x will enter the alpha phase the week of May 3rd 2021. Core developers should plan to complete changes that are only allowed in minor releases prior to the alpha release. The 9.2.0-alpha1 deadline for most core patches is April 30. (More information on alpha and beta releases.)

  • Developers and site owners can begin testing the alpha after its release.

  • The 9.3.x branch of core will be created, and future feature and API additions will be targeted against that branch instead of 9.2.x. All outstanding issues filed against 9.2.x will be automatically migrated to 9.3.x.

  • Once 9.3.x is branched, alpha experimental modules will be removed from the 9.2.x codebase (so their development will continue in 9.3.x only).

  • All issues filed against 9.1.x will then be migrated to 9.2.x, and subsequent bug reports should be targeted against the 9.2.x branch.

  • During the alpha phase, core issues will be committed according to the following policy:

    1. Most issues that are allowed for patch releases will be committed to 9.2.x and 9.3.x.
    2. Most issues that are only allowed in minor releases will be committed to 9.3.x only. A few strategic issues may be backported to 9.2.x, but only at committer discretion after the issue is fixed in 9.3.x (so leave them set to 9.3.x unless you are a committer), and only up until the beta deadline.

Roughly two weeks after the alpha release, the first beta release will be created. All the restrictions of the alpha release apply to beta releases as well. The release of the first beta is a firm deadline for all feature and API additions. Even if an issue is pending in the Reviewed & Tested by the Community (RTBC) queue when the commit freeze for the beta begins, it will be committed to the next minor release only.

The release candidate phase will begin the week of May 31. See the summarized key dates in the release cycle, allowed changes during the Drupal 8 and Drupal 9 release cycles, and Drupal 8 and 9 backwards compatibility and internal API policy for more information.

The scheduled release date of Drupal 9.2.0 is June 16 2021.

Bugfix and security support of Drupal 9.1.x, 9.0.x, and 8.9.x

Security coverage for Drupal 8 and 9 is generally provided for the previous minor release as well as the newest minor release. However, Drupal 8.9.x is a Long-Term Support release where support is provided until November 2021. The following changes are upcoming:

  • Drupal 8.9.x: Security releases will be provided until November 2021. Bugfix support is restricted to selected low-disruption major and critical bug-fixes.

  • Drupal 9.1.x: Normal bugfix support ends on June 16, 2021. However, security releases are provided until the release of Drupal 9.3.0 on December 8, 2021.

  • Drupal 9.0.x: Security releases are provided until the release of Drupal 9.2.0 on June 16, 2021.

Apr 23 2021
Apr 23

Last week, Drupalists around the world gathered virtually for DrupalCon North America 2021.

In good tradition, I delivered my State of Drupal keynote. You can watch the video of my keynote, download my slides (244 MB), or read the brief summary below.

I gave a Drupal 9 and Drupal 10 update, talked about going back to our site builder roots, and discussed the need to improve Drupal's contributor experience.

Drupal 9 update

People are adopting Drupal 9 at a record pace. We've gone from 0 to 60,000 websites in only one month. In contrast, it took us seven months to reach the same milestone with Drupal 7, and three months for Drupal 8.

A chart that shows that Drupal 9 adoption is much faster than Drupal 7's and Drupal 8's

With Drupal 8, after about 1.5 years, only a third of the top 50 Drupal modules were ready for Drupal 8. Now, only 10 months after the release of Drupal 9, a whopping 90% of the top 50 modules are Drupal 9 ready.

A chart that shows the Drupal 9 module ecosystem is pretty much ready

Drupal 10 update

Next, I spoke about the five big initiatives for Drupal 10, which are making progress:

  1. Decoupled menus
  2. Easy out of the box
  3. Automated updates
  4. Drupal 10 readiness
  5. New front-end theme initiative

I then covered some key dates for Drupal 9 and 10:

A timeline that shows Drupal 9.3 will be released in December 2021 and Drupal 10.0.0 in June 2022

Improving the site builder experience with a project browser

A Drupal robot staring in the distance along with a call to action to focus on the site builder experience

When I ask people why they fell in love with Drupal, most often they talk about feeling empowered to build ambitious websites with little or no code. In fact, the journey of many Drupalists started with Drupal's low-code approach to site building. It's how they got involved with Drupal.

This leads me to believe that we need to focus more on the site builder persona. With that in mind, I proposed a new Project Browser initiative. One of the first things site builders do when they start with Drupal is install a module. A Project Browser makes it easier to find and install modules.

If you're interested in helping, check out the Project Browser initiative and join the Project Browser Slack channel.

Modernizing Drupal.org's collaboration tools with GitLab

A small vessel sailing towards a large GitLab boat

Drupal has one of the largest and most robust development communities. And Drupal.org's collaboration tools have been key to that success.

What you might not know is that we've built these tools ourselves over the past 15+ years. While that made sense 10 years ago, it no longer does today.

Today, most Open Source communities have standardized on tools like GitHub and GitLab. In fact, contributors expect to use GitHub or GitLab when contributing to Open Source. Everything else requires too much learning.

For example, here is a quick video that shows how easy it is to contribute to Symfony using GitHub:

Next, I showed how people contribute to Drupal. As you can see in the video below, the process takes much longer and the steps are not as clear cut.

(This is an abridged version of the full experience; you can also watch the full video.)

To improve Drupal's contributor experience, the Drupal Association is modernizing our collaboration tools with GitLab. So far, this has resulted in some great new features. However, more work is required to give new Drupalists an easier path to start contributing.

Please reach out to Heather Rocker, the Executive Director at Drupal Association, if you want to help support our GitLab work. We are looking for ways to expand the Drupal Association's engineering team so we can accelerate this work.

Drupal.org's goals for Gitlab along with positive attendee feedback in chat

Thank you

I'd like to wrap up with a thank you to the people and organizations who have contributed since we released Drupal 9 last June. It's been pretty amazing to see the momentum!

The names of the 1,152 individuals that contributed to Drupal 9 so far
The logos of the 365 organizations that contributed to Drupal 9 so far
Apr 23 2021
Apr 23

Drupal 8 comes loaded with support for multiple languages out of the box, saving users time and effort. It is built on Symfony, which has an underlying component called Translation that helps you create multilingual sites and show the site content in multiple languages. You can translate everything from content to blocks to menus to taxonomy. You can even translate user profiles, image styles, views, text formats, comments, and feeds on your website. In short, Drupal 8 provides translation for the entire site.

Apr 23 2021
Apr 23

Have you ever wanted something, merely for the reason that it was in trend? And because it was in trend and everybody seemed to have it, you had to follow the trend? If you ask me, I would have to answer yes to this question. I have done things and bought things, just because everybody else was doing and buying them and not because I actually had a need for them. 

Now, let’s take this situation from our everyday life to the world of web development. Do you think it’ll be applicable there? It would be, and I’ll tell you why. 

When a web project is underway, there are tens, if not hundreds of scenarios, that can turn out to be the outcome. It is up to the project managers to steer the project into the direction that is the most suitable for the goal that was decided in the planning stage. However, there are times when the project goes adrift.

How can that happen?

People, when developing a project, often aim to make it the best. Nobody aims for a substandard result. So, in the chase to become the best, they try to pack the project with as many features as possible and lose sight of the initial target. 

The client has asked for it, add that feature; 
A stakeholder vehemently disagrees with a valuable feature, leave it; 
The competition had added a specific feature, we have to add that too; 

Decisions like these are a major reason as to why projects do not achieve what they intended to. It is also why project managers have to take on the burden of choosing what to incorporate in a project and what to leave behind for now or for good. 

This practice of choosing the appropriate roadmap for a product and acting upon it throughout the development phase with the help of a well-defined strategy is often referred to as feature prioritisation. It essentially draws out the order in which features would find their way into the project and when they would. There is a lot that goes on behind feature prioritisation, and market research is the beginning of roadmap development. 

Upon asking a group of project managers about the biggest challenges pertinent to their workload, here is what they had to say. 

A survey displaying the results of the challenges faced by project managers. Source: Mind The Product 

Feature prioritisation is by far one of the most challenging aspects of a project manager’s job profile. Today, we will find out why that is the case by understanding how this concept usually works, what are the flaws that accompany it and also the suitable strategies that work in the favour of feature prioritisation in project management. So, let’s begin.

The Everyday Feature Prioritising Process 

Feature prioritisation is a process that can be a tad strenuous to achieve. A major factor behind that adjective is that it is a process involving people, their ideas and their feelings attached to those ideas. Building a web project requires work, but involving people and opinions and trying to reach a consensus for each development in the project would require work and give you a headache. Ask any project manager; an aspirin would be a common dietary supplement that comes with the job. It is, after all, a job where emotions perpetually run high. 

Let’s have a look at how feature prioritisation is done on an everyday basis. 

Focusing on the bigger goal 

Before the PM gathers evidence to back a feature and has all of the lengthy discussions based on the evidence, trying to convince every person on the team that it would work and is a must-have, the PM has to make everybody see the bigger picture and focus on just that. 

The development team is going to be diverse; there are going to be people who are pros at what they do, and these people are going to want to have their way because they think they know best. And maybe they do, but being an expert in one area does not give them the final say in deciding what is right for the project, where several other aspects also come into play. 

Therefore, the understanding of the end goal by every person on the development team is crucial, if a sense of consensus has to be achieved. Of course, there aren’t always going to be unanimous decisions, and when that happens, team members shouldn’t resent the decision, they should be able to comprehend the reasoning behind it by focusing on the bigger goal. 

Prototyping for evidence 

Now, comes the part of accumulating evidence in support of a feature that should be implemented. And prototyping is the means to go here. 

For instance, you have a theory that you know is going to work in favour of the end goal. However, there is some apprehension about it. What do you do then? You prototype. You will try to develop a testable hypothesis, run it, get the results through the proper execution of the testing cycles and get your proof. 

This proof would help the team alleviate their doubts about an approach, even if that includes you. The same can have negative results, meaning the test outcome could be unfavourable. In that case too, you would have the evidence not to pursue a particular line of action.

Valuing the hierarchy or not?

When prioritising features, you will have a clear roadmap; you will most likely have a semblance of understanding as to what you want and what you don’t. However, all of that can go in vain when a high-level idea pops into your planned course of action. 

Denying an executive his request can be a problem for many. The perfectly curated project plan can go from a high chance of success to a high chance of failure because a stakeholder decided to unbalance the project's features with his request. 

Remember the prototyping we discussed in the previous point? Put that into practice and save your project from being jeopardised. And that is how you should value hierarchy.

Seeing the future through the present

At the end of the day, feature prioritisation and even the entire development process is pursued for a goal that should be fulfilled in the future. And it would only be accomplished, if you put in the efforts today. 

Seeing the future means that you know how the dots will connect to make a perfectly straight line. This is done by thinking practically about the path to take and filtering out the meaningful from the meaningless.

You, as a Project Manager, have to make your team understand the present-day reasoning behind the build that will get implemented in the future. If people know that what they are doing today is going to serve a great or even a small purpose, like providing online education to lower-income households, they will most definitely put their best foot forward, making feature prioritisation less of a headache for the PM.

So, how would you define feature prioritisation? 

According to one of our Project Managers, Abhijeet Sinha, feature prioritisation cannot be put into a mould to have a rigid definition that would stick to every scenario. He considers the process to be purely contextual; thus, its meaning and implementation become quite dynamic. The only thing that persists in feature prioritisation is balance, a balance between the needs of the stakeholders and the feasibility of those needs. You can’t deliver the moon and stars on every occasion; the sky would lose its brilliance then. And I am 100% in accordance with him.

The only thing that persists in feature prioritisation is balance, a balance between the needs of the stakeholders and the feasibility of those needs.

During my discussion with Abhijeet, we talked about one particular project, wherein prioritisation was more difficult than others because priorities and feasibilities were clashing at massive proportions. 

This happened in the Thinkin Blue revamp project.

Thinkin Blue is one of OSL’s most prominent projects; it involved the progressive revamp of its site. The client wanted the existing theme of the site to remain the same, and the revamp would involve a change in the templates; all of this seemed feasible. The problem came in the homepage, wherein two themes had to be involved: the existing one along with the new one, which was essential for the revamp to look like a revamp. 

However, the developers were apprehensive about it, because building a homepage on two themes was going to be a massive challenge, and complicated would not even begin to describe it. 

The client wanted one thing and the developers thought it was too complicated, the project was in a deadlock. Abhijeet, being the PM, tried to reason with both the parties and in the end, the developers compromised and the home page was built on two themes. 

The same happened for the headers, the client wanted global headers, while the developers didn’t think that was the correct way to go for the home page because of the new theme. One of them had to give some leeway to Abhijeet to make the project roll forward and this time it was the client. 

Do you see what Abhijeet did? He didn’t let the stakeholders reign everywhere and neither did he allow the developers to issue all the commandments. He always listened to both sides, he thought rationally about the practicalities and wherever he thought he could push, he did. When he had to accept the client’s needs over the developers, he did and vice-versa. There was always a balance. And that is how projects succeed, Thinking Blue is a testament to that. Being stuck in a deadlock would only cost you money, time and efforts, and if compromising can avoid that, then you, as project managers, should start working on it.

So, What Should Be the Hard Hitting Prioritisation Questions To Ask?

In the previous section, we talked about feature prioritisation and how it usually goes. The involvement of ideas, emotions and hierarchy is inevitable. Regardless, you have to persist to get to the suitable features for your project without going on a crazy rampage of unwanted attributes that you will most definitely regret later. 

Here are some questions that will help you to avoid going astray.

Think about the users

Feature prioritisation starts with the users; it is for them that all of the efforts are being made. Therefore, you must ask yourself: 

How many users would the proposed feature impact?
How many users would be able to use the feature without finding it complex or confusing?
How many times will the user be using that particular feature in a day?
How many users would feel as if they are empowered by the feature, would its value resonate with the users?

The higher the answers to these questions, the better the feature would turn out to be. For instance, adding various accessibility features that aid the use of a screen reader on your web project would benefit the visually impaired a great deal. With as many as 285 million users being visually impaired, I’d say the odds of such a feature passing prioritisation are quite high.

Think about growth 

After the users comes the growth potential. Growth is a pretty broad term. It could mean bringing in new customers through an invite feature and it could also mean eliminating a feature that acted as a deterrent in luring the competitor’s market. You have to be familiar with all of your feature’s growth aspects and then ask yourself; 

Would the feature aid in your growth?

Think about efforts 

Building something that solves a problem is indeed going to require some effort from your end. So, to ensure that your efforts get their rewards, ask yourself these three questions.

Does the feature need to be developed from scratch or does it just need some fine-tuning to perform better?
Does the feature require a lot of resources for its implementation, and if so, do you have a plan for delivering those resources?
Does the feature raise the complexity of building and using it for the developers and the end users, and will it be worth it?

Then, think about yourself 

After all of that, you think about yourself, your goodwill and your market position. The kind of features you provide in a product speak to the kind of values you have as a brand. A brand prioritising accessibility would resonate as a brand aiming for social inclusion, and that would be wordlessly spoken through the features it pumps into its products. And I do not have to tell you what the value of a positive brand image is. 

So ask yourself, how well is the feature suited to your brand’s vision and its market position?

The Flawed Kind of Feature Prioritisation 

Now that we have a fair understanding of the do’s of feature prioritisation, it’s only fair to peruse the don’ts as well. These are things and aspects of selection that may seem totally fair to you but in reality can hamper the entire process, so you really have to be mindful of them.

Prioritising one opinion over getting diversified notions 

One consumer feedback, one analyst opinion, one ROI report on that one feature; what do all of these have in common? The word one. They are proposed by an isolated person or report, and because of that, you cannot pay too much heed to them. It’s like not voicing your opinion because one person told you to shut up. 

You have to look at diversified notions. 

  • If a number of consumers are dissatisfied with a particular feature, then consider fixing it. 
  • If an analyst is able to back his claim with up-to-date data and not antiquated reports, then consider implementing his opinions. 
  • If the company ROI and consumer value is on the table along with the ROI of the feature in focus, only then consider any course of action.

Otherwise, you’ll end up wasting valuable time, efforts and resources and I’m pretty sure doing that has been in your don'ts list since such a list was conceptualised.

Prioritising gut over rationale 

There are features that we love and there are features that are necessary. Both of them could be the same and benefit the brand and the consumers alike; however, there is also a chance that they might not. 

If you or let’s say your boss thinks that a particular feature would add immense value to the project because she loves it and her gut tells her that this is what the project has been lacking, you can’t listen to that. She could be right, but following the gut is a big no-no in feature prioritisation. 

There has to be a proper rationale behind every selection, every addition and every decision made. 

You also need to know that you cannot always ensure that everybody on the team is making a rational decision; cognitive biases are a real thing, and you can try to negate them, but being 100% free of them in the selection process is not a guarantee.

Prioritising an interpretable measurement system over a solid one 

In a group with a divided opinion, how do you come to consensus? The answer is through voting. And that is an important part of the selection process in feature prioritisation. These votes become the unit of measurement, based upon which a feature is selected or discarded. So, it has to be a solid system, right?

Yes, it should be foolproof without a shadow of a doubt; however, it may not be, and that happens when the units of measurement are open to interpretation. 

When the value of business is given a two star rating, can you be absolutely sure of what those two stars signify? 

Does that rating denote a sub-standard value? 
How does it translate to business profit? 
How do you reach five stars? 

For one person, the value may be clear, but for another it may be muddy; the reason being differences in people’s perceptions, and there are also cultural differences that come into play while interpreting. 

Prioritising every vote at the same level 

Continuing on the voting discussion, it is often said that every vote is valued the same. However, if the person voting is clueless about the feature he is voting on, should his vote carry the same weight as that of a specialist in that area? Think about that for a minute.

Every team has a diversified skill set based on its members. There would be people with technical backgrounds like the developers and there would be people with a not-so-technical background like the marketers. Now you tell me, should a marketer be given the same voting rights as the developer, when the vote is about a technical feature? 

What About the Time and People Constraints?

Like it or not, many a time a feature is shelved not because it was ill-suited, but because the team did not have the time to build it or it did not have the right people to do it. Something like this has happened in every organisation at one point or another. 

Such an incident, when you do not have the right people to execute a feature that you know would do wonders for your project, is going to be frustrating, which is understandable. You can’t do much about it; you can hire a new person, but that could be a whole other task in itself. 

You can think of these constraints as negative or you can look at the silver lining and think of them as another filtering agent helping you prioritise further. Since I am the glass half-full kind, I’d say it is a good thing. 

If you plan out everything in an organised way and your process is foolproof, you have won half the battle. Carrying out its proper implementation would help you over the time and people constraints. If your team agrees to the plan and knows it, your chances of success increase immensely. This also means that the team is able to identify the priority tasks over the non-priority ones. 

The end goal of every project is improvement and it is research and prototyping that make it possible. So, if you have a process to implement those two, no amount of constraints will be able to hold you back. And just maybe, these two constraints help you in avoiding over-reaching?

What are the Ideal Strategies for Feature Prioritisation?

We have learnt everything we can about this concept; now it is time to learn about the strategies that help project managers implement it. There are a few feature prioritisation frameworks that deserve mention. 

The KANO Model 

The KANO model helps you in understanding the consumer’s needs and wants and base your features on them. This is done through questionnaires and consumer feedback. Although it is a time-consuming process, it does give you a clear picture of where the consumer stands in terms of product features by classifying them. 

A graphs illustrates the four parameters of the KANO model.


According to the KANO model, the consumer thinks in four ways. 

  • For one, he wants excitement. These features add no value to the product or service, but their presence is enough to excite the consumer and lure him in.
  • Second, he wants an elevated performance. The better the performance of the feature, the better the consumer satisfaction and vice-versa. 
  • At number three, he wants the basics. There are certain features that have to be there despite not providing any excitement to the user; you could refer to them as the threshold.
  • Finally, he becomes indifferent. This indifference is towards features, the presence or absence of which does not affect the consumer.

All four of them, with their different satisfaction and functionality levels, help the project managers know the strengths and weaknesses of the project.

The MoSCoW Model 

Must-haves; 
Should-haves; 
Could-haves; 
Won’t-haves;  

These four sum up the meaning and the reasoning behind the MoSCoW model. All of them categorise features based upon their importance to the project. Going from the top: 

  • Features that have to be in a project to make it complete are the must-haves and should be prioritised over anything else. 
  • Features that should be included in the project, but can be delayed for the time being are should-haves, much like green vegetables; you should eat them, but you can survive without them for some time. 
  • Features that could be included in the project or left out without any impact on the overall functionality are the could-haves. It’s good to have them for higher consumer satisfaction, but their absence won’t be obvious to the consumer.
  • Features that are won’t-haves are the ones that are not at all crucial for the project at the moment and would only cause additional stress on the resources at hand. 

The thing about the MoSCoW model is that it lets you know what kind of features you can bring to the table in the future. This is because priorities never remain the same; a feature that was shelved for requiring too much work and having too little impact could become a must-have in the future.

According to OSL's project managers, the success of any given project is primarily determined by the intelligent prioritisation of various tasks. Choosing the right high-priority feature may seem daunting, but for successful and timely delivery of the project, this is a must. Neha Grover, one of our Project Managers, feels that during instances in projects when you have the list of work packages, which need to be prioritised and moulded into a work breakdown structure (WBS), the PM has to play a key role in getting the stakeholders' and dev team's expectations and priorities to be on the same page.

"I follow the MoSCoW prioritisation technique in my projects, as this is quite simple and less time-consuming and it focuses on both customers and stakeholders."

The Cost of Delay Model 

It is often said that you can't put a price on time; it is indeed priceless, never to come back again once it is gone. That said, if you happen to have the right metrics to work with, you can actually value the cost of time, or rather the cost of delaying. And that is what this feature prioritisation framework is all about. 

The COD model calculates your losses for delaying the development and implementation of a particular feature. Based upon that cost, you will get an idea as to the importance of that feature and prioritise its build accordingly. 

For instance, 

Say there is a feature that would take 30 days to build, and every day it’ll cost the organisation $1000. Then there is a feature that would take the same amount of time to develop; however, it is costing the organisation more than double that amount every day.

In such a scenario, which feature would you prioritise? The answer is simple: the one that is making you lose more money. Building that one first would limit your losses more than the other, as its cost of delay is higher. 

Prioritise what saves you more money by reducing the cost of delay.

The Value Model 

Businesses develop features because they think the said features would prove to be valuable for them. To get that value, they endeavour to build something good. But what if that value isn’t as impressive as the business, the project managers and the developers had thought it to be? This is why the value model becomes an important strategy to implement during feature prioritisation. 

The purpose of the value model is to reap the highest value of a feature for the business through its two facets.

Value based on cost 

Cost doesn’t necessarily mean money; it could be interpreted as effort as well, or even complexity. The model says to simply prioritise the tasks with high value and low cost first, then move on to the features with high business value and high cost. If you wish, you can take on the low value-low cost features, but most definitely avoid low value and high cost features. 

A graph shows the functionality of value vs cost model of feature prioritisation.


Value based on risk

From cost, we come to risk, which is another metric to be mindful of while prioritising features. It more or less works in a similar fashion to the value and cost model; it’s just that instead of the cost, you’d be focusing on the risk. 

A graph is explaining the way value and risk model works.


The high value and low risk features would be prioritised over everything else, while the low value and high risk features would be avoided altogether. This helps in ensuring that you are not going to end up building something that is unnecessary, or even something that is too simple by playing it safe.

The Financial Model 

An income is what everyone is after, and feature prioritisation operates on the same notion as well. The purpose of increasing revenue and reducing costs is omnipresent in all business decisions, and choosing features for a particular project falls under that umbrella. 

So, thinking about the financial side of the features is what the essence of this model is. 

Whether you will be able to generate new income; 
Whether you will be able to enhance your operational efficiency and reduce costs; 
Whether you will be able to lessen the amount of consumer turnover;
Whether you will be able to gain an additional income from the consumers you already have; 

All these are important scenarios to consider in the financial model for feature prioritisation. 

Then there are the actual money metrics to pay heed to in the selection process, which include three important dimensions. 

  • One is the focus on the Present Value of money. What you have invested today and the return you will get from it five years down the line would not be in the same value of currency as it keeps changing. So, making a projection based on the Net Present Value formula is a wise choice. 
  • Second is to calculate the Internal Rate of Return, which is essentially a percentage value of returns for a project and how quickly they might increase.
  • Third is to focus on the running total of the discounted cash flows to get an overview of the time it would take to get the investment back. This is also referred to as ‘the Discounted Payback Period.’

The Opportunity Scoring Model

For every feature in every product, there are two attributes that usually stand out apart from the financial aspects of that feature. And these are: 

How important the feature or its outcome is to the consumer?
And how satisfied is the consumer with the provided feature?

Take the answers to these questions and start pointing them out in a graph and you will end up with something looking like this. 

A graph is illustrating the way opportunity scoring is done.


So, the scoring makes it easy to prioritise by using visuals and categories simultaneously. The features that are important to the consumer but aren’t very satisfactory would be prioritised more than the features that are satisfactory for the consumer but not very important. 

The RICE Model  

The RICE model gives you an in-depth understanding of each feature you wish to implement based upon four parameters, which some of the other strategies are unable to provide. 

Reach 

Reach refers to the number of people the feature would reach and affect. These numbers are calculated on real metrics like ‘customers per quarter’ and ‘transactions per month,’ thereby removing all forms of personal bias from the equation. The higher the resultant number, the further the feature’s reach.

Impact 

The I is for impact, meaning the kind of impact the feature would have on individual users as well as the goals and objectives of the business as a whole. This is ranked from minimal to medium to high to massive impact based on points from 0.25 to 3.

Confidence 

Now that you have the numbers for the reach and impact of the features, comes the moment to test your confidence in them, that is what the C denotes. Using a scale with 100, 80 and 50 points referring to a high, medium and low confidence level, you will start scoring. Remember to always provide evidence in the form of data for every score you give.

Effort

In the end, the E is for Effort in terms of time and people. Questions like how much time it would take to build the feature, how many people would be required to make it, and how much time one person would have to shell out in a day for the build are to be asked and answered in this parameter. 

The RICE formula, a framework for feature prioritisation, is depicted.


Once you have the results for all the parameters, you will use the formula and be left with a number that denotes the total impact per time worked. This number is what would help you prioritise. 
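
For reference, the standard RICE formula combines the four parameters as follows:

RICE score = (Reach × Impact × Confidence) / Effort

Dividing by effort is what pushes an expensive feature below a cheaper one with a similar reach, impact and confidence.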

The Voting Model 

This is not a well-established model, but it has a lot of merit in its implementation. Based on two different aspects of feature selection, here is how it is used. 

Annotation 

When you are voting for a feature’s implementation, you would be stating whether that feature should be high on priority or low, but you can’t just say those two simple words; there won’t be any clarity in that. If you are saying that a feature is high impact, then you must state why. It could be any reason: an admin panel with a particular feature could have a massive impact because it would help the stakeholders in completing their primary tasks with ease. A comment can truly make a difference in the perception of a feature and thus aid the selection process.

A table showing how a feature is prioritised based on its impact on the stakeholders.


Diversification

The second part of the model is to diversify the voting team as much as possible. Include experts in the domain as well as non-experts. This gives you a clearer picture of the popularity of the feature amongst a wider range of people. However, remember to keep the experts' votes clearly separate from the non-experts', because even though diversified voting helps give a better perception of the proposed features, the expert opinions will weigh more. You can even segregate the votes by department; for example, the finance department’s votes could be one category and marketing’s another.

Now that you have explored various approaches to prioritising features in a project, read about the right way to start a Drupal project, standard development workflow for a Drupal project, best project management techniques for complex Drupal projects, difference between product mindset vs project mindset and human factor in project management for effective project management.

The Bottom Line 

There are numerous other techniques and strategies that can be implemented for feature prioritisation. You can use all of them or just a couple; that is totally up to you. However, you have to remember that a feature is neither just for the consumer nor solely for the business. 

Both the consumer and the business have different reasons for using and building a particular feature. The consumer wants the feature to fulfil a need, while the business wants the feature to bring increased revenue. So, you have to strive to strike a balance between the two. This can be done by choosing strategies that are both business-centric and consumer-centric, like the Cost of Delay model or RICE. 

In the end, I just want to say that feature prioritisation is a never-ending process, much like development. As long as you keep developing, you will have to prioritise certain features over others. So, mastering the prioritisation technique will serve your interests well. 

Apr 23 2021
hw
Apr 23

I have been contributing to Drupal in a few different ways for a few years now. I started off by participating in meetups, and then contributing to Drupal core whenever I found time. Eventually, I was even contributing full-time courtesy of Axelerant, my employer. At the same time, I started participating in events outside my city and eventually in other countries as well. I was speaking at most of the events I attended and mentored at sprints in many of these events. I have written about this in detail before in different posts about my Drupal story and a recent update to that.

It was only with the support of my wonderful family, and also from Axelerant in the early years, that I was able to contribute in this way. As my responsibilities grew, I had to focus where I contributed. My kids were growing up and I wanted to spend a lot more time with them. At the same time, I started picking up managerial responsibilities at Axelerant and was responsible not just for my work, but for a team. I was approaching burnout quickly and something had to go. It was at this time that I rethought how to contribute to open-source sustainably.

Long story, short…

The story is not interesting. Honestly, I barely remember those years myself. I know they were essential for my growth and they came at a significant price. But we know that nothing worth doing is easy. As a mentor to a team and even bordering on a reporting manager, I had the privilege to multiply my efforts through them. I am proud to see how many of them have built their own profiles in the community and continue to do so.

My recommendation to my team and myself is now to stop thinking of contributing as “contribution” but as a part of our work. The word “contribution” implies giving something externally. People are hesitant to take this external action when they are already very busy with bugs, deliveries, and meetings. We all have a limited working area in our minds to think about the code and all its complexity. Thinking about something external is very difficult in these circumstances.

Don’t hack core

One reason this feels so external to us is how we treat Drupal core and contrib. We drill into newcomers the notion that we should never hack core. While there is a good reason for this, it results in the perception that core (and contrib) cannot be touched. It is seen as something external, and woe befall anyone who dareth touch that code. It is no surprise that many people are intimidated by the thought of contributing to Drupal core.

My workflow

The trick is to not think of it as external. I use the word “upstream” instead of contrib projects when talking about Drupal core or modules. I find that some people think of “upstream” as a little closer to themselves than “community contribution”. Thinking about it this way makes the code more real, not something which is a black box that can’t be touched. We realize that this code was written by another team consisting of people who are humans just like us. We realize that they could have made mistakes just the way we do. And we realize that we can talk to them and fix those mistakes. It is no different than working in a large team.

Yes, this means that we don’t have people who are dedicating time to contribute. That is a worthy goal for another day. I am happy with this small victory of getting people familiar with the issue queues and the contribution process on drupal.org. I have seen these small acts bubble up to create contrib modules for use in client work (where relevant). Eventually, I see them having no resistance with the idea of contribution sprints because the most difficult part of the sprint is now easy: the process. They are familiar with the issue queue and how to work with patches (now merge requests); if it is a sprint, the only new thing they are doing is coding, which is not new to them at all.

I realize that this is not a replacement for the idea of a full-time contributor. We need someone who is dedicated to an initiative or a system. These contributors are needed to support the people who will occasionally come in to fix a bug. But we need to enable everybody to treat the core as just another piece of code across the fence and teach them how to open the gate when necessary.

Apr 22 2021
Apr 22

We used PowerPoint slides as a prototype (#nokidding) and tested an early draft of our content strategy with stakeholders. Their input and feedback improved the quality of our final deliverable and reduced risk and uncertainty early and cheaply.

Discover more about the UX Design and Content services our digital agency has to offer.

Content Strategies are not typically tested with users

And we didn’t plan to do so in the beginning. The project started relatively conventionally. My colleague Caroline Pieracci was asked to do the Content Strategy for a Swiss retailer and invited me to join the team. As a UX Designer, I was asked to lead the stakeholder interviews for the client.
The first workshop was dedicated to getting insights into the strategy of the client, their business goals and customer needs. To understand how we could help reach these goals with content, we proposed a core message. Of course, the main topics the client wants to talk about and what users are interested in were part of that too.

But why stakeholder interviews?

I mostly do stakeholder interviews at the very beginning of a project, to understand their goals, needs and pain points. It seemed a bit late in the process to me. When asked this question, the client explained that they wanted to inform the stakeholders, get their feedback, understand if the content strategy was valuable for them and get their buy-in. During the reflection with Caroline about everything we heard from the client, I felt that what they actually wanted was to communicate and to test their Content Strategy.

A prototype to communicate important aspects of your project

Prototypes serve various purposes: to explore different solutions and opportunities; to evaluate a solution, reduce the number of options and decide what to focus on; or to communicate important aspects of your project. A communicative prototype can ignite meaningful discussions with your stakeholders and reduce friction and misunderstandings right from the start. It can be a valuable strategic tool to present to, convince and inspire your management or stakeholders. That is why our client was all in and the prototyping began.

We used a PowerPoint presentation as a prototype #nokidding

Caroline had a poster in mind as the final deliverable: a nicely designed visualisation of the core messages for the team to hang up in their office. In the end, our prototype was a PowerPoint presentation, as this is the main medium of communication used by our stakeholders. It explained each part of the strategy and how it was created. The presentation was not polished at all, nor designed, and it had some blanks and question marks in it. We even put a “draft” sticker on each slide to make sure that the presentation was not judged by its bad looks.

Five interviews in one day

The testing was done remotely, all in one day and with the whole project team present. From the stakeholder map that we prepared in the kick-off meeting, our client picked the five most important people they wanted to talk to. We had 10 min to present the prototype and 20 min to ask our questions.

  • After seeing our content strategy prototype, what questions do you have? What is unclear for you?
  • What is your first impression of the content strategy, what goes through your mind?
  • What is still missing in our content strategy?
  • What is superfluous?
  • What opportunities do you see when we implement this content strategy?
  • What risks do you see?
  • What does it take to make this content strategy a success?
  • Is there anything else you would like to say on the subject?
The whole project team took notes. After each interview, all the details were collected on our Miro board. This took about 20-25 min, and after a short break the next interview took place. The technique and the questions come from Jake Knapp’s book Sprint and felt great to use in this context.

Confidence to release the Content Strategy

The feedback was very positive and gave the client a great boost and a lot of confidence that they were on track. The stakeholders were pleased to see the Content Strategy at an early stage. Their feedback also pointed out a couple of things that were missing or not clear enough.
The next day we met again, put all the feedback from the interviews in a huge “pile” on the Miro board and prioritized it to decide what we would implement for the first release of the Content Strategy. We used the MoSCoW method and organised the items into “must haves”, “should haves” and “won't haves”.

Continuous adaptation

The team that would mainly work with the Content Strategy on a daily basis was invited to give their feedback on the prototype too. We are planning to invite them again after they have worked with the Content Strategy for a while, to understand what works well for them and what doesn’t. It is a living document that needs to be adjusted and improved regularly.

Reduce risk and uncertainty

Experimentation, prototyping and testing are not always easy, but this project went down really smoothly, and the collaboration with the client was great. The only challenge was to keep the prototype basic enough and to avoid putting too much effort into it or trying to make it perfect. That is something we’re not used to in our industry, where every typo is a sign of incompetence and a bad image can damage our customers' trust in us. The whole process was done in a few weeks and without a huge budget.
But what was the value for the client in the end? I find these words from Service Design Doing sum it up perfectly: “Prototyping is an essential activity to reduce risk and uncertainty as early and as cheaply as possible, to improve the quality of your final deliverable and eventually implement your project successfully.”

Apr 22 2021
Apr 22

As DrupalCon comes to a close for the crew at Mediacurrent, we’ve all had a chance to reflect on the experience. Here are the top 10 things we loved and learned at this year’s event.

1. Opening New Doors to ‘Discover Drupal’

Drupal talent is in high demand and The Drupal Association is focused on cultivating that talent with an emphasis on diversity, equity, and inclusion. That’s important to our team at Mediacurrent, too. We love helping young professionals get started in a Drupal career (like our two student interns who experienced their first-ever DrupalCon last week!) and we jumped at the chance to become a training partner for the just-launched Discover Drupal program. We will be mentoring students and providing an opportunity to intern with us after they have finished their scholarship.

Discover Drupal offers a 12-month scholarship and training program for underrepresented individuals in the open source community. Learn more and support the program

Speaking of training, our booth offer this year was a drawing for a free 4-hour training workshop in one of our most popular topics: Front-End Development, Decoupled Drupal with Gatsby, or Drupal Component-Based Theming. We are very excited to be drawing the names of 3 winners this week, who will learn about current technology demands and best practices using active discussion and a hands-on workshop. Watch our Twitter channel to see who wins!

2. Bright Horizons Ahead for Drupal 10 

Dries reinforced that the sun is quickly setting on Drupal 8, with community support ending this fall. Drupal 7’s days are numbered as well. If you haven’t already, it’s time to think about your Drupal 9 action plan.

Key dates for Drupal 7 and 8

The community’s innovation efforts will focus on Drupal 9 while also looking ahead to June 2022 — the target release date for Drupal 10.

Drupal 9 and 10 timeline

3. Going Back to Our Site Builder Roots

Drupal’s roots are about empowering site builders to build ambitious websites with low code. 

-Dries Buytaert, State of Drupal Keynote - DrupalCon North America 2021

What made YOU fall in love with Drupal? 

In his State of Drupal keynote, Dries reflected on Drupal’s core strength to find focus for the year ahead. He reasoned that to help our community grow and become even more successful, we need to give every user a clear reason to adopt Drupal. 

Many Drupal love stories share a common spark: the feeling of being quickly empowered by Drupal’s low-code approach. To give site builders that “love at first site” feeling, Dries announced the Project Browser initiative. The goal is to make site builder basics, like installing a module, as easy as installing an iPhone app, and to rise to the competition of Wix, Squarespace, and WordPress. 

Drupal Project Browser initiative

4. Building a Better Foundation for Future Features

Everyone wants to know what comes next for our favorite digital experience platform. As always, DrupalCon sessions and the Driesnote shed some light on the innovation that lies ahead, highlighting both core and contrib initiatives that the community is working to advance.

Visitors to the Mediacurrent booth saw how Rain CMS speeds up development and gives content creators the authoring experience they crave. (If you missed it, Rain CMS now ships with Layout Builder to make page building a breeze) 

Dries shared a progress update on the core strategic initiatives that are blazing trails for future functionality and improvements in Drupal core. These initiatives shaped the program content, with a different one assigned to each day of the conference. 

  • Easy Out of the Box - Improving Drupal's ease of use remains a top priority, and this initiative addresses it.
  • Decoupled Menus - This initiative is positioning Drupal as the go-to for decoupled. Now, non-module Javascript projects have a home on Drupal.org.
  • Automatic Updates - By getting automated security updates into Drupal 9 core, we can help site owners sleep soundly.
  • Drupal 10 Readiness - Drupal 9 is just under a year old but the community is already looking ahead. Dries called for community support to hit the target release date for Drupal 10.

5. Celebrating and Encouraging Community Contributions 

Drupal continues to shine as one of the most scalable, robust, and mature development communities in open source. We heard from Heather Rocker, Global Executive Director of the Drupal Association, about some of the initiatives that are making it easier for first-time and non-coding contributors to get involved.

Both individual and company-level contributors were celebrated on the DrupalCon stage. Congratulations are in order for AmyJune Hineline, the recipient of this year’s Aaron Winborn Award. The award honors individuals for their outstanding commitment to the Drupal project and community. (Check out our interview with AmyJune from season one of the Open Waters podcast.) 

Giving back to Drupal remains a core priority for the Mediacurrent team. This year, we’re proud to show our support for the Drupal Association as a Diamond Drupal Certified Partner and excited to maintain our rank as one of the top five organizational contributors. 

top companies sponsoring Drupal

6. The More Sites, The Merrier

How do you manage and maintain dozens or even hundreds of sites effectively?

That’s the question Jay Callicott, VP of Operations at Mediacurrent, set out to answer in his DevOps track session on scaling Drupal with the power of multisite. Drupal’s multisite capabilities are a standout feature, setting it apart from other CMS platforms. Yet there’s a lot to consider - configuration, deployments, site provisioning, and more. 

This session recording is now available to registered attendees with public access coming in a few weeks. Stay tuned!

Drupal multisite presentation slide shows a decision tree

7. Making Sense of Open Source Security 

Mediacurrent’s Drupal security pros took the stage to tackle a timely topic: open source security for marketing and business leaders.

As open source software like Drupal continues to become widely adopted, sticking to security standards is a challenge. The global losses from cybercrime totaled nearly $1 trillion last year (csis.org), raising the stakes on security even higher. 

Be on the lookout for the session recording for a playbook on how to optimize your Drupal security. They covered how to become a security-first organization, embrace process automation, harden Drupal security, and create clear security policies.

these Drupal modules protect from OWASP vulnerabilities

8. Higher Education: The Stage for Ambitious Digital Experiences

DrupalCon’s industry summits are always a great accompaniment to the regular program, and this year was no exception. At the Higher Education Summit, Director of Development Dan Polant was joined by one of Mediacurrent’s Ivy League partners to co-present a case study session. We saw how the university relies on Drupal to model complex data and got a behind-the-scenes look at the decoupled architecture with Gatsby. 

A decoupled approach lets us choose a dedicated solution for a given job

9. Drupal is Powering Hope 

At this year’s DCon, we saw how Drupal is powering some of the most impactful organizations in the world. All but one of the major COVID-19 vaccine-producing companies use Drupal. 

Major nonprofits like Habitat for Humanity also rely on Drupal. Through its website, the organization has helped more than 5.9 million people build or improve the place they call home. Mediacurrent has been honored to support Habitat’s mission and partner with them to build a maintainable platform that thrives on support from the Drupal community. The Drupal Showcase session recording for Habitat for Humanity: Building a foundation for digital success will be publicly available soon. We’re grateful for the opportunity to reflect on the success we achieved through our partnership, and we hope others can learn from it. 

Covid vaccine sites Pfizer, Moderna, J&J run on Drupal

10. The Momentum Continues With Drupalfest 

DrupalCon has ended but the celebration continues with Drupalfest. 

Interested in learning more about contributing to Drupal? Let Mediacurrent’s Community Lead Damien McKenna be your guide. Join Damien for Contrib Open Hours through the end of April. 

Watch the State of Drupal Keynote 

Check out the recording of the State of Drupal keynote below.

[embedded content]

Cheers to 20 years, Drupal! We look forward to gathering again next year.

Apr 22 2021
Apr 22

Last week, Drupalists around the world gathered virtually for DrupalCon North America 2021.

In good tradition, I delivered my State of Drupal keynote. You can watch the video of my keynote, download my slides (244 MB), or read the brief summary below.

I gave a Drupal 9 and Drupal 10 update, talked about going back to our site builder roots, and discussed the need to improve Drupal's contributor experience.

Drupal 9 update

People are adopting Drupal 9 at a record pace. We've gone from 0 to 60,000 websites in only one month. In contrast, it took us seven months to reach the same milestone with Drupal 7, and three months for Drupal 8.

A chart that shows that Drupal 9 adoption is much faster than Drupal 7's and Drupal 8's

With Drupal 8, after about 1.5 years, only a third of the top 50 Drupal modules were ready for Drupal 8. Now, only 10 months after the release of Drupal 9, a whopping 90% of the top 50 modules are Drupal 9 ready.

A chart that shows the Drupal 9 module ecosystem is pretty much ready

Drupal 10 update

Next, I spoke about the five big initiatives for Drupal 10, which are making progress:

  1. Decoupled menus
  2. Easy out of the box
  3. Automated updates
  4. Drupal 10 readiness
  5. New front-end theme initiative

I then covered some key dates for Drupal 9 and 10:

A timeline that shows Drupal 9.3 will be released in December 2021 and Drupal 10.0.0 in June 2022

Improving the site builder experience with a project browser

A Drupal robot staring in the distance along with a call to action to focus on the site builder experience

When I ask people why they fell in love with Drupal, most often they talk about feeling empowered to build ambitious websites with little or no code. In fact, the journey of many Drupalists started with Drupal's low-code approach to site building. It's how they got involved with Drupal.

This leads me to believe that we need to focus more on the site builder persona. With that in mind, I proposed a new Project Browser initiative. One of the first things site builders do when they start with Drupal is install a module. A Project Browser makes it easier to find and install modules.

If you're interested in helping, check out the Project Browser initiative and join the Project Browser Slack channel.

Modernizing Drupal.org's collaboration tools with GitLab

A small vessel sailing towards a large GitLab boat

Drupal has one of the largest and most robust development communities. And Drupal.org's collaboration tools have been key to that success.

What you might not know is that we've built these tools ourselves over the past 15+ years. While that made sense 10 years ago, it no longer does today.

Today, most Open Source communities have standardized on tools like GitHub and GitLab. In fact, contributors expect to use GitHub or GitLab when contributing to Open Source. Everything else requires too much learning.

For example, here is a quick video that shows how easy it is to contribute to Symfony using GitHub:

Next, I showed how people contribute to Drupal. As you can see in the video below, the process takes much longer and the steps are not as clear cut.

(This is an abridged version of the full experience; you can also watch the full video.)

To improve Drupal's contributor experience, the Drupal Association is modernizing our collaboration tools with GitLab. So far, this has resulted in some great new features. However, more work is required to give new Drupalists an easier path to start contributing.

Please reach out to Heather Rocker, the Executive Director at Drupal Association, if you want to help support our GitLab work. We are looking for ways to expand the Drupal Association's engineering team so we can accelerate this work.

Drupal.org's goals for Gitlab along with positive attendee feedback in chat

Thank you

I'd like to wrap up with a thank you to the people and organizations who have contributed since we released Drupal 9 last June. It's been pretty amazing to see the momentum!

The names of the 1,152 individuals that contributed to Drupal 9 so far

The logos of the 365 organizations that contributed to Drupal 9 so far
Apr 22 2021
Apr 22

Custom Drupal development methods vary from version to version in Drupal. A method used in Drupal 7 might not work for Drupal 8. Here are some of the crucial things to keep in mind for the best output of custom Drupal development:

1. Use Configuration Before Code

To avoid hard-coding a class into a theme, set such values in configuration and read them from your code, so you don't have to rewrite the code every time you need to make a change. Once you set the value in configuration, it gives you advanced functionality and "easy to modify" features. It also enhances coding speed and results in high-quality modules.
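As a minimal sketch of this idea in Drupal 8/9, a preprocess hook can pull a CSS class from configuration instead of hard-coding it. The module name "mymodule", the config object "mymodule.settings" and the "wrapper_class" key are hypothetical names used only for illustration.

<?php

/**
 * Implements hook_preprocess_block().
 *
 * Reads a wrapper class from configuration instead of hard-coding it,
 * so site builders can change it without touching the code.
 */
function mymodule_preprocess_block(array &$variables) {
  // "mymodule.settings" and "wrapper_class" are hypothetical names.
  $class = \Drupal::config('mymodule.settings')->get('wrapper_class');
  if (!empty($class)) {
    $variables['attributes']['class'][] = $class;
  }
}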

2. Limit The Usage Of Modules

While working on enterprise-level Drupal websites, it is essential to use fewer modules. Avoid reaching for a new module for every single capability you find lacking. Experienced developers often favor programming their own modules rather than reusing existing ones. However, a higher number of custom modules will demand more work in the future to maintain and modify your Drupal site. Publishing your modules on GitHub helps you avoid carrying a large number of one-off custom modules and encourages you to create reusable code with the required configuration.

3. Environment & Coding Standards

Most Drupal development companies work in a shared development environment to ensure an efficient workflow. In such scenarios, the biggest issue is making sure that the produced code is clean and reliable; the code should make sense to your team members and to the larger Drupal community itself. With the Drupal community and distributed teams working together, you need to follow coding standards to make sure you achieve the project's goals and objectives.

4. Use hook_schema and hook_update_N

Your module will need its own database tables if it stores data that is not Drupal entities or nodes and you are going to use it as content. Make sure to declare the table schema you will need. You can use hook_schema() in the module's .install file. The tables will be created and removed automatically whenever you install or uninstall the module.

If you want to add, remove or modify columns, or otherwise change the schema, you can easily make these changes in an update hook. Use hook_update_N() and bump the version number. Other developers or development teams can apply these changes after pulling your code by running "drush updb."
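Below is a minimal Drupal 8/9-style sketch of both hooks; the module name "mymodule" and the table and column names are hypothetical and only illustrate the pattern.

<?php

/**
 * Implements hook_schema().
 *
 * Declared in mymodule.install, so Drupal creates the table on install
 * and drops it on uninstall.
 */
function mymodule_schema() {
  $schema['mymodule_items'] = [
    'description' => 'Stores items managed by mymodule.',
    'fields' => [
      'id' => ['type' => 'serial', 'not null' => TRUE],
      'label' => ['type' => 'varchar', 'length' => 255, 'not null' => TRUE, 'default' => ''],
    ],
    'primary key' => ['id'],
  ];
  return $schema;
}

/**
 * Adds a "weight" column to the mymodule_items table.
 */
function mymodule_update_9001() {
  $spec = ['type' => 'int', 'not null' => TRUE, 'default' => 0];
  \Drupal::database()->schema()->addField('mymodule_items', 'weight', $spec);
}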

5. Use What Drupal Offers

Drupal has several built-in admin functionalities to help you store and display your module's data. You can define the module's settings pages using hook_menu; it enables modules to register paths and determines how URL requests are handled. You can use the drupal_get_form page callback to define and return the settings that need to be stored. One of the key benefits of doing this is that you can attach submit and validation handlers to the form, for example to check that a value is an integer or matches existing data fetched from an API.
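Since hook_menu and drupal_get_form are Drupal 7 APIs (in Drupal 8/9 you would use routing YAML files and a FormBase class instead), here is a Drupal 7-style sketch of a settings page with a validation handler. The module name "mymodule" and the variable name are hypothetical.

<?php

/**
 * Implements hook_menu().
 */
function mymodule_menu() {
  $items['admin/config/system/mymodule'] = array(
    'title' => 'My module settings',
    'page callback' => 'drupal_get_form',
    'page arguments' => array('mymodule_settings_form'),
    'access arguments' => array('administer site configuration'),
  );
  return $items;
}

/**
 * Settings form: stores how many items to show per page.
 */
function mymodule_settings_form($form, &$form_state) {
  $form['mymodule_items_per_page'] = array(
    '#type' => 'textfield',
    '#title' => t('Items per page'),
    '#default_value' => variable_get('mymodule_items_per_page', 10),
  );
  return system_settings_form($form);
}

/**
 * Validation handler: the value has to be a whole number.
 */
function mymodule_settings_form_validate($form, &$form_state) {
  if (!ctype_digit((string) $form_state['values']['mymodule_items_per_page'])) {
    form_set_error('mymodule_items_per_page', t('Items per page must be a whole number.'));
  }
}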

6. Working With Cron

Cron is a time-based task scheduler that you can configure to execute tasks automatically, without any manual involvement once the initial configuration is done. Drupal offers built-in cron functionality and a task queue that can be used in almost any module development. However, in some cases, for example if your hosting provider doesn't support configuring cron tasks or you are having trouble getting it to run correctly, it is essential to check your site's status report to figure out whether cron is running as expected.
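For illustration, a module can hook into Drupal's cron run and push work into a queue so it is processed in small batches. This is a minimal Drupal 8/9-style sketch in which the queue name "mymodule_reprocess" and the helper mymodule_find_stale_items() are hypothetical.

<?php

/**
 * Implements hook_cron().
 *
 * Runs on every cron invocation and defers heavy work to a queue,
 * which a QueueWorker plugin would process in small batches.
 */
function mymodule_cron() {
  $queue = \Drupal::queue('mymodule_reprocess');
  // mymodule_find_stale_items() is a hypothetical helper that returns
  // identifiers of items that need to be refreshed.
  foreach (mymodule_find_stale_items() as $item_id) {
    $queue->createItem($item_id);
  }
}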

7. Have A Dedicated Place For Your Custom Modules

Suppose you have named your Drupal custom module "my_shop." In this case, create a dedicated folder for it at "/modules/custom/my_shop." It is always advisable to create a proper place for this type of module so you don't have to struggle when you want to use the new custom module. It also helps you keep the contributed modules downloaded from Drupal.org and your custom modules separate from each other.

8. Create A New Module If Necessary

Reuse existing Drupal modules as much as possible instead of jumping to write one from scratch, because a heavy "load" of custom modules can make your Drupal website challenging to maintain and update in the future.

9. Debug Your Module With Xdebug

This is often called the golden rule to follow when developing custom modules in Drupal. Debugging your custom module code with Xdebug ensures that there are no bottlenecks in your PHP code and that you're running only the latest code.

10. Use Automated Code Checking Tool

Make sure to use an automated code checking tool like Coder to ensure that your code is reliable, clean, readable, and adheres to Drupal’s coding standards. It also helps you ensure that you are following the best practices for your PHP version.

Apr 22 2021
hw
Apr 22

We at Axelerant have been contributing to Drupal in our own ways for a long time. In fact, I worked as a full-time contributor to Drupal a few months after I joined. This was around the time Drupal 8 was almost done, and it is thanks to Axelerant that I could contribute what I did at that time. At the same time, there was community focus on incentivizing contributions and there were a few websites (like drupalcores) to track contributions.

Sometime later, in one of our internal hackathons, I built a basic mechanism to track contributions by our team (you can see it at contrib.axelerant.com). It was a weekend hackathon and what we built was very basic, but it set the groundwork for future work. We adopted it internally and reached a stage where we had processes around it (it even fed into our KPIs for a while). In this time, we expanded the functionality to (manually) track non-code contributions. More recently, we added support to track contributions from GitHub as well.

Tracking all contributions

Since this was a hackathon demonstrating possibilities using code and technology, we started with only tracking code contributions. Soon, we expanded this to track contributions to events and other non-code means. The latter was manual but this helped us build a central place where we could document all our contributions to the open-source world.

Today, we have tracking from drupal.org and Github and basic checks to determine if code was part of the contribution. On the non-code front, you can track contributions to websites like StackOverflow or Drupal Answers. And for events, you can track contributions such as volunteering, speaking, and even attending (yes, I think participating in an event counts as a contribution).

Now, we have a process for anyone joining Axelerant to set up an account on the contrib tracker. After this, all their contributions to Drupal and GitHub are tracked from that point on. We also remind people frequently to add details of any event they have attended.

Improving contrib-tracker

Contrib Tracker is open source but currently treated as an internal project at Axelerant. We initially set it up as a public repository on our Gitlab server. Now we realize that it was not practical for people to access and help build it. Today, I moved Contrib Tracker to its new location on Github: contrib-tracker/backend. For a while, we had the thought of implementing the website as a decoupled service and that’s why the repository is named “backend”. Right now, we may or may not go that route. If you have an opinion, please do let me know in the comments.

Moving to Github

The move to Github is still in progress. In fact, it just got started. The project is, right now, bespoke to how we host it and one of the lower priority items would be to decouple that. Once that is done, it should be possible for anyone to host their own instance of contrib tracker. But the more important task now is to move the CI and other assets to Github. I will be creating an issue to track this work. Our team at Axelerant has already started moving the CI definitions.

That’s it for today. Do check out the source code at contrib-tracker/backend and let us know there how we can improve it, or better yet, submit a PR. :)

Apr 21 2021
Apr 21

Today’s business reality is that nearly every company needs at least one website to be successful. As organizations get larger, the number of websites they need also increases. From Human Resources to sales support to customer service, different groups in your organization may have similar needs but different access levels.

Making these websites turnkey can reduce the amount of time your IT or devops teams need to spend standing up resources. It can also significantly reduce development costs when you have a deployable website instance that can be used to fulfill the needs of several organizations in your company.

In this first part of a two-part series, Tag1 Managing Director Michael Meyers talks with CIO Jeff Sheltren and Senior Infrastructure Engineer Travis Whitehead about the challenges large enterprises face, and the software-based solutions Tag1 is using to help our customers be more successful with standardized website deployments.

[embedded content]


Additional resources

For a transcript of this video, see Transcript: Deploying New Enterprise Web Applications in Minutes - Part 1.

Photo by Amir Hanna on Unsplash

Apr 21 2021
Apr 21

As you may know, Drupal 6 has reached End-of-Life (EOL) which means the Drupal Security Team is no longer doing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are and we're one of them!

Today, there is a Critical security release for Drupal core to fix a Cross-Site Scripting (XSS) vulnerability. You can learn more in the security advisory:

Drupal core - Critical - Cross-Site Scripting - SA-CORE-2021-002

Here you can download the Drupal 6 patch for the fix, or a full release as a ZIP or TAR.GZ.

If you have a Drupal 6 site, we recommend you update immediately! We have already deployed the patch for all of our Drupal 6 Long-Term Support clients. :-)

FYI, there were other Drupal core security advisories made today, but those don't affect Drupal 6.

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on Drupal.org).

Apr 21 2021
Apr 21

Drupal core's sanitization API fails to properly filter cross-site scripting under certain circumstances.

Not all sites and users are affected, but configuration changes to prevent the exploit might be impractical and will vary between sites. Therefore, we recommend all sites update to this release as soon as possible.

Apr 21 2021
Apr 21

6 minute read Published: 21 Apr, 2021 Author: Christopher Gervais
Drupal , Drupal Planet , CMI2.0 , Config Enforce , DevOps

Introduction to the Introduction

Over the last few years we’ve built lots of Drupal 8 sites, and some Drupal 9 ones too, both for our clients and for ourselves. As such, we’ve taken a keen interest in (read: faced many challenges with) the Configuration Management subsystem. This was a major new component in Drupal 8, and so, while it’s functional, it isn’t yet mature. Of course, the vibrant Drupal developer community jumped in to smooth the rough edges and fill the gaps, in what has since become known as CMI 2.0.

At Consensus, we tend to work on fairly large, complex Drupal projects that often require significant custom development. As such, we’ve adopted fairly rigorous software engineering processes, such as standardized local development environments, CI-enabled Test Driven Development, Continuous Delivery, etc.

However, we struggled to find a workflow that leveraged the powerful CMI system in core, while not being a pain for developers.

Configuration Management in D8+

The core CMI workflow assumes you are transferring configuration between a single site install with multiple instances (Dev, Stage, Prod) and that this configuration is periodically synchronized as a lump. For a number of reasons, this wouldn’t work for us.

As a result, we went back to our old standby, the venerable Features module, which worked reasonably well. Unfortunately, we found that it would sometimes handle dependencies between configuration objects poorly. On more than one occasion, this led to time-consuming debugging cycles.

So we switched to using Config Profile instead. However, reverting config changes was still manual, so we started using Config Update and the related Update Helper.

The Update Helper module, “offers supporting functionalities to make configuration updates easier.” Basically, when preparing for a release, Update Helper generates a special file, a “configuration update definition” (CUD). The CUD contains two values for each changed config. The first is the “current” value for a given configuration, as of the most recent release. The second is the new value to which you want to set that config.

These values are captured by first rolling back to the most recent release, then installing the site, so that the value is in active config. Then you check out your latest commit, so that the new values are available on the filesystem. Update Helper can then generate its CUD, as well as generate hook_update() implementations to help deploy the new or changed config.

This process turned out to be error-prone, and difficult to automate reliably.

We explored other efforts too, like Config Split and Config Filter which allow for finer-grained manipulation of “sets” of config. Other projects, like Config Distro, are focused on “packaging” groups of features such that they can be dropped in to any given site easily (kind of like Features…)

A simple, reliable method to deploy new or updated configuration remained elusive.

The underlying problem

Note that all the tools mentioned above work very well during initial project development, prior to production release. However, once you need to deploy config changes to systems in production, Update Helper or similar tools and processes are required, along with all the overhead that implies.

At this point, it’s worth reminding ourselves that Drupal 7 and earlier versions did not clearly distinguish between content and config. They all just lived in the site’s database, after all. As such, whatever configuration was on the production site was generally considered canonical.

It’s tempting to make small changes directly in production, since they don’t seem to warrant a full release, and all the configuration deployment overhead that entails. This, in turn, requires additional discipline to reproduce those changes in the codebase.

Of course, that isn’t the only reason for configuration drift. Well-meaning administrators cannot easily distinguish between configs that are required for the proper operation of the site, and those that have more cosmetic effects.

Facing these challenges, we’d regularly note how much easier all of this would be if only we could make production configuration read-only.

A new approach

With some reluctance and much consideration, we decided to try an entirely new approach. We built Config Enforce (and its close companion Config Enforce Devel) to solve the two key problems we were running into:

  1. Developers needed an easy way to get from “I made a bunch of config-related changes in the Admin UI of my local site instance” to “I can identify the relevant config objects/entities which have changed, get them into my git repository, and push them upstream for deployment”.
  2. Operations needed an easy way to deploy changes in configuration, and ideally not have to worry too much about the previously-inevitable “drift” in the production-environment configuration, which often resulted in tedious and painful merging of configs, or worse yet, inadvertent clobbering of changes.

Config Enforce has two “modes” of operation: with config_enforce_devel enabled, you are in “development mode”. You can quickly designate config objects you want to enforce (usually inline on the configuration page where you manipulate the configuration object itself), and then changes you make are immediately written to the file system.

This mode leverages Config Devel to effectively bypass the active configuration storage in the database, writing the config files into target extensions you select. Each target extension builds up a “registry” of enforced configuration objects, keeping track of their location and enforcement level. This eases the development workflow by making it easy to identify which configuration objects you’ve changed, without having to explicitly config-export and then identify all and only the relevant .yml files to commit.

In production mode, you enable only the config_enforce module, which leverages the same “registry” configuration that config_enforce_devel has written into your target extensions, and performs the “enforcement” component. This means that, for any enforced configuration objects, we block any changes from being made via the UI or optionally even API calls directly. In turn, the enforced configuration settings on the file system within target extensions become authoritative, being pulled in to override whatever is in active configuration whenever a cache rebuild is triggered.

This means that deployment of configuration changes becomes trivial: commit and push new enforced configuration files in your target extensions, pull those into the new (e.g., Prod) environment, and clear caches. Config Enforce will check all enforced configuration settings for changes on the file system, and immediately load them into active configuration on the site.

This workflow requires some adjustment in how we think about configuration management, but we think it has promise. Especially if you are building Drupal distributions or complex Drupal systems that require repeatable builds and comprehensive testing in CI, you should give Config Enforce a try and see what you think. Feedback is always welcome!

We’ve scratched our own itch, and so far have found it useful and productive. We are pleased to make it available to the Drupal community as another in the arena of ideas surrounding CMI 2.0.

The article Introducing Config Enforce first appeared on the Consensus Enterprises blog.

We've disabled blog comments to prevent spam, but if you have questions or comments about this post, get in touch!

Apr 21 2021
Apr 21

We are going to use Bootstrap 4 in Drupal 8/9 with Bootstrap Barrio. The Bootstrap Barrio theme for Drupal 8/9 integrates Bootstrap 4 (or Bootstrap 5 if you want) with your Drupal site. 

Bootstrap is a very popular framework for building websites. It provides designers and developers with a common language to communicate, making the development process a lot easier.

Creating a subtheme of Barrio is a straightforward process. This tutorial will explore the basic configuration options of the theme, which are managed through a complete graphical user interface.

Keep reading to learn how!

Step # 1.- Install the theme

Before we start, make sure that your site has at least one article, so you can make a comparison after changing the theme settings. Also place a block in the Sidebar second region (Structure > Block layout > Place block).

  • Open the terminal application of your operating system.
  • Navigate to the root of your Drupal installation.
  • Type: composer require drupal/bootstrap_barrio

This will download the latest stable version of the theme to /web/themes/contrib/bootstrap_barrio

Step # 2.- Create a Subtheme

  • Navigate to the bootstrap_barrio theme directory
  • Type:
chmod +x scripts/create_subtheme.sh
./scripts/create_subtheme.sh

This will make the script called create_subtheme inside the scripts folder executable, and will also execute it. 

The script will ask for a machine name and a descriptive name for your custom subtheme.

Enter the values that suit best for you. Remember that the machine name has to be lowercase and may not contain spaces.


This step is optional:

  • Open the directory of your subtheme (/web/themes/custom/mytheme) in a code editor
  • Replace all instances of `Bootstrap Barrio` with `Name of your theme`
  • Save all files

Here, we are only changing descriptive text, so there would be no problem at all if you leave this as is.


Step # 3.- The Bootstrap Barrio Settings

  • Click Appearance on the backend of your Drupal site
  • Scroll down to your custom theme
  • Click Install and set as default

Once the theme has been installed, 

  • Click the theme Settings link


You will see a group of vertical tabs on the left side of the screen with the following options: 

  1. Layout (active tab)
  2. Components
  3. Affix
  4. Scroll Spy
  5. Fonts & icons
  6. Colors

Layout

By default, the `Layout` tab is active. The first option, `Container`, specifies whether the elements of your site will have a fixed width or, on the contrary, will be displayed across the whole width of the screen. Leave this option untouched for now. 

Within the `Region` section, it is possible to assign custom CSS classes to the regions of the site.

  • Add your own custom class to a particular region


  • Close the `Region` section
  • Open `Sidebar position`
  • Change the value of `Sidebars position` to Left
  • Open `Sidebar first layout` and `Sidebar second layout`
  • Change the values to 3 cols and 2 cols respectively


Components

  • Click the `Components` vertical tab
  • Change the Button element to outline format
  • Check Apply img-fluid style to all content images


This will make the images that you insert through the image button of the content editor responsive by default. The image will scale down to fit the size of the screen.


The `Navbar structure` section deals with the size of the navbar container. You have to differentiate between two navbars (navbar-top and navbar). Navbar is the main navigation menu of your site.

  • Change Navbar position to Fixed bottom and Navbar link color to Dark
  • Check Sliding navbar on the `Navbar behavior` section, in order to display a sliding main menu on small screens


The last 3 sections of the `Components` configuration option refer to the position of the messages delivered by Drupal’s internal message system, the tabs for local tasks (like the edit content tab), and the appearance of form elements. Leave these options untouched.

Affix

With affix, it is possible to fix an element, i.e. set the value of the CSS position property to fixed.

Scrollspy

Scrollspy is used to automatically update the links of a navigation menu, based on the position of the cursor, i.e. if you scroll up or down on the site. This topic will be covered in a future tutorial.

Fonts and Icons

Here you have options to choose between different Google Fonts font combinations for the text of your site. Furthermore, you can choose between sets of predefined icons to use on your posts.

  • Choose the font combination and the icon set of your liking


Colors

Here you have options to customize the color of Drupal’s internal messages. There are also options to customize the site's tables, for example the ones generated with the Views module.


If you keep scrolling down, you will find the `Color scheme` for your subtheme. You can customize the text and background colors of the default theme regions.

You can customize the color of each element to your liking and lock it by using the lock icon. 

Page Element Display, Logo Image, and Favicon

These are default options in all Drupal themes.  

Load Library

You can choose between multiple online ready-to-use Bootswatch libraries to enhance the look and feel of your theme with just one click. These options are worth checking. 

You can also choose whether to load Bootstrap (its CSS and JS) locally or via a CDN. This configuration should not be altered here, though; it is much better to change it in the .info.yml file.


  • Click Save configuration

Take a look at your site. This tutorial does not intend to teach you UI design, but rather explain the possibilities available with the Barrio theme. 

However, you can now start from a design and try to adapt the theme to it. 

I hope you liked this tutorial. Thanks for reading! 


About the author

Jorge lived in Ecuador and Germany. Now he is back to his homeland Colombia. He spends his time translating from English and German to Spanish. He enjoys playing with Drupal and other Open Source Content Management Systems and technologies.
Apr 21 2021
Apr 21

DrupalCon North America is the biggest and probably the most important event of the year for Drupal. During normal times, the conference itself was hosted in a new city every year.

Even though the event was primarily targeted at a North American audience, by going virtual it was actually able to attract people from all over the world more easily. 

As a virtual event, this year’s DrupalCon NA turned out to be much more accessible in terms of logistics and investment.

Agiledrop would probably have attended and supported the in-person event too, but going virtual made it a bit easier for us and our families; as a company that values work-life balance, this was a really great fit.

Our “One hour for Drupal” campaign

Like every year Agiledrop was sponsoring the event, and as a sponsor we were able to have a booth at the event. Normally we would bring some swag with us to hand over to booth visitors.

But, since handing out our famous notepads (designed for Drupal website wireframing) and fitness bands (that can help you flex your team) wasn’t really possible at a virtual booth, we had to look for something else.

We wanted to give away something meaningful for the Drupal community. We remembered a great campaign that our friends at Dropsolid had at DrupalCon EU 2019 called “Help Make Drupal Shine” where they gave contribution time for anyone who filled out a survey.

Right away, we knew such a campaign would be perfect for an event like this: we would devote 1 hour of contribution time to the Drupal community for each person who visited our booth during the DrupalCon week and claimed our special offer.

We were very happy with the turnout: 55 people ended up claiming the offer, which means 55 hours will be devoted to Drupal’s open-source code (in addition to our regular contribution, of course)!

What’s more, we’ll be emailing everyone who claimed our offer to ask them about which Drupal projects or initiatives they would like this hour of contribution time to be spent on. This way we’ll make sure that the time we devote to Drupal through this will be spent as meaningfully as possible.

Thank you to everyone who participated in this campaign, and thank you in advance for the feedback and ideas on how to best spend the 55 hours.

Networking

The virtual DrupalCon was hosted on a conference platform called Hopin which has a great feature of facilitating networking with random other attendees.

Networking and meeting new people is one of the best things about attending a Drupal event, but it is often missed at virtual events, so this was a great virtual alternative to “bumping into people during coffee breaks”. 

“It was a real pleasure to connect and talk with really interesting people in the networking section/page of the conference platform. For example, I met Albert Hughes, the author of the famous Drupal Rap video, and Alec Reynolds, the CEO at Tandem and Lando - a tool our developers often use for local development, and even the founder of Drupal Dries Buytaert.” 

Iztok Smolic, Commercial Director at Agiledrop

Sessions

Just like every DrupalCon, whether in-person or virtual, this year’s event offered a wide selection of great sessions on several different tracks. The main stage featured the excellent keynote presentations, with the heavily anticipated Driesnote which is always one of the highlights at DrupalCons.

There were also keynotes dedicated to some of the most important ongoing developments for Drupal, such as the Decoupled Menus initiative and the Drupal 10 Readiness initiative.

Among the non-keynote sessions, one of our favorite ones was Preston So’s and Nani Jansen Reventlow’s session on advancing digital rights for everyone, which is a topic that’s particularly relevant in the current times of social upheaval with people also being more dependent on digital experiences than ever before.

Another session that we just couldn’t miss was Yan Zhang from Digital Echidna speaking about building resilience through energy management, providing invaluable tips and sharing relatable personal experiences which made her advice resonate even more.

Of course, all the things happening in-between speaker sessions also contributed to a fantastic experience for everyone: the yoga sessions, the awesome Spotify playlist, the great interactions from community members and, last but not least, the cat filter Zoom meme seamlessly pulled off by members of the Acro Media team!

Acro Media's Zoom cat meme at DrupalCon NA 2021

Conclusion

With everyone having been isolating for well over a year now, being able to attend DrupalCon and have an experience that was as close as possible to an in-person event definitely lit up the year.

A lot of the features provided by Hopin and the DrupalCon organizers really gave that warm communal feeling of attending a vibrant in-person conference, which isn’t that easy to achieve - so, big kudos to Hopin and the organizers for this, and a big thank you to everyone working tirelessly for the attendees to have this experience. 

It was really great to hear from and connect with our friends from the community. We can’t wait to see you all again in-person and have a chat over a cup of coffee. Until then, though, if virtual Drupal events keep being this awesome, it seems like we’ll get by okay!
 

Apr 21 2021
Apr 21

Pros of Choosing WordPress Over Drupal For Website Development:

Ease Of Use – WordPress is a free and open-source CMS whose source code anyone can change and redistribute. WordPress even offers mobile apps that make content editing and site administration easy on any device.

SEO-Friendly – Sites built using WordPress are SEO-friendly, as there are several SEO plugins (such as Yoast SEO) available to help optimize content, meta tags, keywords, etc.

Responsive – You can use thousands of WordPress themes to make your site responsive and mobile-friendly. Using this market-leading CMS keeps you ready for the emergent responsive web technology.

Cost-Effective – WordPress is free, and even its paid features are less expensive than Drupal. The development, maintenance, and hosting cost of WordPress is lower than Drupal.

E-Commerce – WordPress provides plugins like WooCommerce to help you run your e-commerce site efficiently. Also, it has the features to help you handle the growing demand for your products and services.

Cons of Choosing WordPress Over Drupal For Website Development:

Frequent Updates – WordPress releases regular updates, and if you fail to keep up with these updates, it can bring your site to a screeching halt.

Customization Sophistication – You need to know PHP, CSS, and HTML to write complicated code if you want to add any enhanced feature to your site.

Pros of Choosing Drupal Over WordPress For Website Development:

Rapid Development – Drupal is agile and rapid, which helps developers create and deploy a website's core features and functionality. You can use the modules to cut down development times from weeks to days.

Extensive API Support – Drupal offers a range of API support, including Google Analytics, Facebook, Twitter, Google Apps, YouTube, that help developers create custom modules effortlessly.

Security – Unlike WordPress, Drupal's themes and modules are covered by an internal security program. Therefore, it is less vulnerable when third-party extensions are installed, as it is challenging to smuggle malicious content into Drupal.

Cross-Browser Usability and Support – Drupal is easy and suitable to use with almost all major browsers. The website's functionality and design translate well across browsers.

Access Controls / User Permissions - Drupal has a built-in access control system using which you can easily create new roles with individual permissions in no time.

Cons Of Choosing Drupal Over WordPress For Website Development:

Installation and Modification – Drupal does not offer the same ease of use as WordPress; its scripts are not user-friendly and require advanced technical knowledge to install and modify.

Compatibility – Drupal is not backward compatible; if you have extra content and programs in place that you are accustomed to, Drupal might not be the right CMS platform for you.

We hope you now have a sound understanding of the pros and cons of both the WordPress and Drupal CMS platforms. Again, the choice of the right CMS platform for your business boils down to one question: what is your business requirement?
