Apr 26 2021
hw

Over the years, I have had significantly less time for development and an increasing need to move around. After dealing with back pain from the weight of my Dell laptop while travelling to conferences, I bought a 15″ MacBook Pro. More recently, with the Docker performance issues on Mac, I have been thinking of getting a Linux box. I was further motivated when I bought an iPad last year and wanted to use it for development. Now, with my old MacBook Pro failing (keyboard and hard disk), I have a new MBP with the M1 chip and just 8 GB of RAM. I am more and more interested in making remote development work efficiently for me.

It is not hard, in itself, to set up a remote machine for development. The problem I want to solve is to make it easy to spin up machines, maintain them, and tear them down. I also want to make it easy to set up a project along with the necessary tooling to get it running as quickly as possible. For now, the only problem I am solving is setting up the tooling quickly. I am calling this project Yakht (after yacht, as I wanted a sea-related metaphor).

Current workflow

While it’s far from the level of automation I am thinking of, it’s not too bad. I was able to set up a machine for use within a few minutes. This is what the process looked like:

  1. Create a 1 GB instance on DigitalOcean in a nearby region (for minimum latency).
  2. Add a wildcard DNS record for one of my domain names so that I can access my projects on subdomains of that domain.
  3. Set the domain name and IP address in my Ansible playbook’s variable files and inventories.
  4. Run the Ansible playbook.

These steps gave me a machine ready with all the tools required for Drupal development using Docker (Lando in my case). The Ansible playbook installs a bunch of utilities and customizations along with the required software such as PHP, Docker, Composer, and Lando. Only the PHP CLI is installed because all development happens within Docker anyway. It also configures Lando for remote serving and sets the base domain to the domain I have configured, which means I can access the URLs generated by Lando (thanks to the wildcard DNS). With the Drupal-specific tooling we have written (some of which I have written before), setting up a new project is fairly quick.

A lot of these tools are somewhat specific to me (such as fish shell and Starship). I need to make this customizable so that someone else using it can pick a different profile. That’s a problem for another day.

Current trials

I have been using this machine for a while now, which is not how I intend for this to work. I am trying out a few tools and customizations before putting them in the Ansible playbook. Most notably, I am trying out cdr and using it as an online IDE, which is very similar to VS Code. It took a little effort to serve it via Caddy, but it works well most of the time. It times out frequently, though I think that is because this instance only has 1 GB of RAM. When it times out, the connection breaks and you need to reload, which can get frustrating. Fortunately, the timeouts come in bursts, and then it works fine for long periods of time. In any case, I doubt this will happen on an instance with a reasonable amount of RAM.

Screenshot of cdr running in Chrome

I know that VS Code has good support for remote development over SSH, but I also want to be able to use the IDE on an iPad, and a browser-based IDE solves that. I am also considering trying out Theia and Projector, but that’s for another day.

The machine is also missing a few things I want in my development environments, such as my dotfiles and configuration for some of the tools (like custom fish commands). For now, all of this is set up manually. Of course, my intention is to automate all of these steps (even DNS).

Current problems

The general problem with these kinds of tools is maintaining a balance between flexibility and ease of use. By ease, I mean not having to configure the tool endlessly and frequently to make it do what you want. But that is exactly what flexibility demands. For now, I am not trying hard to maintain flexibility. Once I have something that works with reasonable customization, I will figure out how to make it much more customizable.

Another problem is accessing remote servers from this machine. Right now, I am using SSH agent forwarding to access remote servers from my development instance without keeping the SSH key there (and it works). But this doesn’t work if I am using the terminal in cdr. I am still looking for a solution to this problem that doesn’t involve copying my keys over to the development instance. One idea, if forwarding turns out to be impossible, is to generate new keys for every instance and surface the public keys so that you can paste them into the services you want. This is more secure than copying the private key, but it is definitely a hurdle to get over.

I am excited by this project and hope to have more updates over time. For now, I hope you find the Ansible playbook useful. Also, I am looking for ideas and suggestions in solving this general problem of remote development. Please let me know via comments or social media what you think.

Apr 25 2021
hw

I thought I was done with the series of posts on object-oriented programming after my last one. Of course, there is a lot we can write about object-oriented programming and Drupal, but that post covers everything noteworthy from the example. There is a way which is old-school but works as it should, and another which looks modern but comes with problems. Is there a middle ground? Tim Plunkett responded on Twitter saying there is.

There's a third way! See https://t.co/cFizL60Loy and https://t.co/so2syXyXZS

— timplunkett (@timplunkett) April 24, 2021

At the end of the last post, I mentioned that the problem is not with object-oriented programming. It is with using something for the sake of using it. If you understand something and use it judiciously, you are more likely to end up with a robust (and maintainable) solution. Let’s look at the approach mentioned in the tweet in detail.

The fundamental difference

The problem with the object-based version in the last post was not that it used objects. It was that it overrode the entry point of the form callback in order to change the form. Using classes has its advantages, most notably that we can use dependency injection to get our dependencies. For more complex alterations, it is also useful to encapsulate the code in a single class. But the approach of overriding the entry point made the solution unworkable.

On the other hand, the form_alter hook is actually designed for this purpose. Yes, it cannot be used within classes, and you have to dump it in a module file along with all the other functions. But it works, and that’s more important: there is no alternative designed for this purpose. So, in a way, the fundamental difference is that this method works whereas the other doesn’t. It doesn’t matter that we can’t use nice things like dependency injection if the alternative doesn’t even work.

Bringing them together

The two worlds are not so disjoint; PHP handles both brilliantly, after all. If you want to encapsulate your code in objects, the straightforward solution is to write your code in a class and instantiate it from your form_alter hook. Yes, you still have the hook but it is only a couple of lines long at most and all your logic is neatly handled in a class where it is easy to read and test. The class might look something like this.

<?php

namespace Drupal\mymodule;

use Drupal\Core\Form\FormStateInterface;

class MySiteInfoFormAlter {

  public function alterForm(array &$form, FormStateInterface $form_state, $form_id) {
    // Add the siteapikey text box to the site information group.
    $form['new_element'] = [
      '#type' => 'textfield',
      // ... more attributes
    ];
  }

}

And you can simply call it from your hook like so:

function mymodule_form_system_site_information_settings_alter(&$form, \Drupal\Core\Form\FormStateInterface $form_state, $form_id) {
  $alter = new \Drupal\mymodule\MySiteInfoFormAlter();
  $alter->alterForm($form, $form_state, $form_id);
}

You could avoid instantiating the object by making the method static, but let’s not go there (you lose all the advantages of using objects if you do that).

Dependency Injection by Drupal

This is already looking better (and you don’t even need route subscribers). But let’s take it a step further to bring in dependency injection.

We could certainly pass in the dependencies we want from our hook when we create the object, but why not let Drupal do all that work? Drupal’s class resolver service helps us create objects along with their dependencies. The class needs to implement ContainerInjectionInterface, but that is a very common pattern in Drupal code. With such a class, you only need to ask the class resolver for an instance and it will be built with its dependencies.

function mymodule_form_system_site_information_settings_alter(&$form, \Drupal\Core\Form\FormStateInterface $form_state, $form_id) {
  \Drupal::service('class_resolver')
    ->getInstanceFromDefinition(\Drupal\mymodule\MySiteInfoFormAlter::class)
    ->alterForm($form, $form_state, $form_id);
}
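
For reference, this is a minimal sketch of what the class could look like once it implements ContainerInjectionInterface. The messenger service here is only an illustrative dependency, not something the example above requires.

<?php

namespace Drupal\mymodule;

use Drupal\Core\DependencyInjection\ContainerInjectionInterface;
use Drupal\Core\Form\FormStateInterface;
use Drupal\Core\Messenger\MessengerInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;

class MySiteInfoFormAlter implements ContainerInjectionInterface {

  protected $messenger;

  public function __construct(MessengerInterface $messenger) {
    $this->messenger = $messenger;
  }

  public static function create(ContainerInterface $container) {
    // The class resolver calls this factory method with the service
    // container so the class can pick out its own dependencies.
    return new static($container->get('messenger'));
  }

  public function alterForm(array &$form, FormStateInterface $form_state, $form_id) {
    // Alter the form here, using $this->messenger as needed.
  }

}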

For better examples, look at the links Tim Plunkett mentioned in the tweet: the hook for form_alter and the method.

I hope you found the example useful and a workable middle-ground. Do let me know what you think.

Apr 24 2021
hw

Update: Read the follow-up to this post, where I discuss a middle-ground approach combining both of the approaches described here.

I previously wrote about how the object-oriented style of programming can be seen as a solution to all programming problems. There is a saying: if all you have is a hammer, everything looks like a nail. It is not a stretch to say that object-oriented programming is the hammer in this adage. That post was quite abstract, and today I want to share a specific example of what I mean: how using “objects” to alter forms without thinking it through can cause harm.

But first, some context: this example comes from my weekly reviews of code submitted as part of our interview test. It has come up frequently enough that I suspect it is actually a recommendation somewhere. Even if it is a case of people copying each other’s work, it is certainly evidence that the approach has not been thought through. In fact, it makes for a good interview question: where would this method fail? I am going to answer that in this post.

The traditional method

Let’s look at the traditional method first. Drupal provides hooks to intercept certain actions and events. Broadly, Drupal fires hooks in two situations: in response to events, like saving a node, or to collect information about something (e.g., hook_help). You will find a lot more examples of the latter, and that is what we are going to talk about today.
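
As a quick illustration of the second kind, here is a minimal sketch of a hook_help implementation (the module name mymodule is assumed); Drupal calls it to collect help text rather than in response to an event:

/**
 * Implements hook_help().
 */
function mymodule_help($route_name, \Drupal\Core\Routing\RouteMatchInterface $route_match) {
  // Return information when Drupal asks for it; nothing has "happened".
  if ($route_name === 'help.page.mymodule') {
    return '<p>' . t('An example module illustrating informational hooks.') . '</p>';
  }
}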

Drupal fires a few different hooks when a form is built. Specifically, it gives all the enabled modules the opportunity to alter the form in any way. It does this via the hook_form_alter hook and the specifically named hook_form_FORM_ID_alter hook. So, for example, to alter the site information form, either of the functions below would work:

function mymodule_form_alter(&$form, \Drupal\Core\Form\FormStateInterface $form_state, $form_id) {
  if ($form_id == "system_site_information_settings") {
    $form['new_element'] = [ /* attributes */ ];
  }
}

// ... OR ...

function mymodule_form_system_site_information_settings_alter(&$form, \Drupal\Core\Form\FormStateInterface $form_state, $form_id) {
  $form['new_element'] = [ /* attributes */ ];
}

Adding elements or altering the form elements in any way is a simple affair. Just edit the $form array as you want and you can see the changes (after a cache clear, of course). This is the old-school method, and it still works as of Drupal 9.

The OOPS approach

More often than not, I see the form being altered in a much more involved way. Broadly, this is how it looks:

  1. Create a form using the new object-oriented way but extending from Drupal\system\Form\SiteInformationForm instead of the regular FormBase.
  2. Define an event subscriber that will alter the route using the alterRoutes method.
  3. In the event subscriber, override the form callback to point to your new form.

This gist contains the entire relevant portion of code.
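
The gist itself is not reproduced here, but the route-altering portion generally looks something like this minimal sketch (the class and form names are illustrative, and the subscriber must also be registered as a tagged service, which is omitted):

<?php

namespace Drupal\mymodule\Routing;

use Drupal\Core\Routing\RouteSubscriberBase;
use Symfony\Component\Routing\RouteCollection;

class MySiteInfoRouteSubscriber extends RouteSubscriberBase {

  protected function alterRoutes(RouteCollection $collection) {
    // Point the site information route's form controller at our own
    // subclass, so Drupal builds it instead of the core form.
    if ($route = $collection->get('system.site_information_settings')) {
      $route->setDefault('_form', '\Drupal\mymodule\Form\MySiteInformationForm');
    }
  }

}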

After doing all this, you might expect that the code would at least work. Many people do. But if you have been paying close attention, you might see the problem. If not, think about what would happen if two modules attempt to alter the same form this way. Only one of them would win out.

If two modules alter the same route, the last one to run wins and that module’s form changes are used. The form controllers from the other modules will never be executed. You could extend the first module’s form controller in the second module (so that changes from both modules take effect), but that is not reasonable to expect in the real world, with its varied combinations of modules.

So, we shouldn’t use objects?

I am not saying that. I am saying that we should think about how we are applying any programming paradigm to build a solution and where it might fail. In our example, if Drupal supported an object-oriented version of form alters, that would have been safe to use (there is an open issue about this). In fact, there is discussion about using Symfony forms, and there are also some attempts in the contrib space. Until one of those solutions gets implemented, the form_alter hook is the best way to alter forms. And there is a good chance that such hooks will be replaced in time. After all, the event-style hooks did get replaced by events in most cases.

For now, and always, use the solution that fits your needs. Using objects or using functional programming doesn’t necessarily make a program better. It is using our skills and our judgement that makes a program better.

Update: Read the follow-up to this post, where I discuss a middle-ground approach combining both of the approaches described here.

Apr 23 2021
hw

I have been contributing to Drupal in a few different ways for a few years now. I started off by participating in meetups, and then contributing to Drupal core whenever I found time. Eventually, I was even contributing full-time courtesy of Axelerant, my employer. At the same time, I started participating in events outside my city and eventually in other countries as well. I was speaking at most of the events I attended and mentored at sprints in many of these events. I have written about this in detail before in different posts about my Drupal story and a recent update to that.

It was only the support of my wonderful family, and of Axelerant in the early years, that enabled me to contribute in this way. As my responsibilities grew, I had to find focus in where to contribute. My kids were growing up and I wanted to spend a lot more time with them. At the same time, I started picking up managerial responsibilities at Axelerant and was responsible not just for my work, but for a team. I was quickly approaching burnout and something had to go. It was at this time that I rethought how to contribute sustainably to open source.

Long story short…

The story is not interesting. Honestly, I barely remember those years myself. I know they were essential for my growth, and they came at a significant price. But we know that nothing worth doing is easy. As a mentor to a team, and at times bordering on a reporting manager, I had the privilege of multiplying my efforts through others. I am proud to see how many of them have built their own profiles in the community and continue to do so.

My recommendation to my team and myself is now to stop thinking of contributing as “contribution” and start thinking of it as part of our work. The word “contribution” implies giving something externally. People are hesitant to take this external action when they are already very busy with bugs, deliveries, and meetings. We all have a limited working memory for holding the code and all its complexity in mind. Thinking about something external is very difficult in these circumstances.

Don’t hack core

One reason this feels so external to us is how we treat Drupal core and contrib. We drill into newcomers the notion that we should never hack core. While there is a good reason for this, it results in the perception that core (and contrib) cannot be touched. It is seen as something external, and woe befall anyone who dareth touch that code. It is no surprise that many people are intimidated by the thought of contributing to Drupal core.

My workflow

The trick is to not think of it as external. I use the word “upstream” when talking about Drupal core or contrib modules. I find that people think of “upstream” as a little closer to themselves than “community contribution”. Thinking about it this way makes the code more real, not a black box that can’t be touched. We realize that this code was written by another team consisting of people who are humans just like us. We realize that they could have made mistakes just the way we do. And we realize that we can talk to them and fix those mistakes. It is no different from working in a large team.

Yes, this means that we don’t have people dedicating time to contribute. That is a worthy goal for another day. I am happy with the small victory of getting people familiar with the issue queues and the contribution process on drupal.org. I have seen these small acts bubble up to create contrib modules for use in client work (where relevant). Eventually, I see people having no resistance to the idea of contribution sprints because the most difficult part of a sprint is now easy: the process. They are familiar with the issue queue and how to work with patches (now merge requests); at a sprint, the only new thing they are doing is coding, which is not new to them at all.

I realize that this is not a replacement for the idea of a full-time contributor. We need people who are dedicated to an initiative or a system. These contributors are needed to support the people who occasionally come in to fix a bug. But we need to enable everybody to treat core as just another piece of code across the fence, and teach them how to open the gate when necessary.

Apr 22 2021

As DrupalCon comes to a close for the crew at Mediacurrent, we’ve all had a chance to reflect on the experience. Here are the top 10 things we loved and learned at this year’s event.

1. Opening New Doors to ‘Discover Drupal’

Drupal talent is in high demand and The Drupal Association is focused on cultivating that talent with an emphasis on diversity, equity, and inclusion. That’s important to our team at Mediacurrent, too. We love helping young professionals get started in a Drupal career (like our two student interns who experienced their first-ever DrupalCon last week!) and we jumped at the chance to become a training partner for the just-launched Discover Drupal program. We will be mentoring students and providing an opportunity to intern with us after they have finished their scholarship.

Discover Drupal offers a 12-month scholarship and training program for underrepresented individuals in the open source community. Learn more and support the program

Speaking of training, our booth offer this year was a drawing for a free 4-hour training workshop in one of our most popular topics: Front-End Development, Decoupled Drupal with Gatsby, or Drupal Component-Based Theming. We are very excited to be drawing the names of 3 winners this week, who will learn about current technology demands and best practices using active discussion and a hands-on workshop. Watch our Twitter channel to see who wins!

2. Bright Horizons Ahead for Drupal 10 

Dries reinforced that the sun is quickly setting on Drupal 8, with community support ending this fall. Drupal 7’s days are numbered as well. If you haven’t already, it’s time to think about your Drupal 9 action plan.

Key dates for Drupal 7 and 8

The community’s innovation efforts will focus on Drupal 9 while also looking ahead to June 2022 — the target release date for Drupal 10.

Drupal 9 and 10 timeline

3. Going Back to Our Site Builder Roots

Drupal’s roots are about empowering site builders to build ambitious websites with low code. 

-Dries Buytaert, State of Drupal Keynote - DrupalCon North America 2021

What made YOU fall in love with Drupal? 

In his State of Drupal keynote, Dries reflected on Drupal’s core strength to find focus for the year ahead. He reasoned that to help our community grow and become even more successful, we need to give every user a clear reason to adopt Drupal. 

Many Drupal love stories share a common spark: the feeling of being quickly empowered by Drupal’s low-code approach. To give site builders that “love at first site” feeling, Dries announced the Project Browser initiative. The goal is to make site builder basics, like installing a module, as easy as installing an iPhone app, and to rise to the competition of Wix, Squarespace, and WordPress.

Drupal Project Browser initiative

4. Building a Better Foundation for Future Features

Everyone wants to know what comes next for our favorite digital experience platform. As always, DrupalCon sessions and the Driesnote shed some light on the innovation that lies ahead, highlighting both core and contrib initiatives that the community is working to advance.

Visitors to the Mediacurrent booth saw how Rain CMS speeds up development and gives content creators the authoring experience they crave. (If you missed it, Rain CMS now ships with Layout Builder to make page building a breeze.)

Dries shared a progress update on the core strategic initiatives that are blazing trails for future functionality and improvements in Drupal core. These initiatives shaped the program content, with a different one assigned to each day of the conference. 

  • Easy Out of the Box - This initiative ensures that improving Drupal's ease of use remains a top priority.
  • Decoupled Menus - This initiative is positioning Drupal as the go-to platform for decoupled architectures. Now, non-module JavaScript projects have a home on Drupal.org.
  • Automatic Updates - By getting automated security updates into Drupal 9 core, we can help site owners sleep soundly.
  • Drupal 10 Readiness - Drupal 9 is just under a year old but the community is already looking ahead. Dries called for community support to hit the target release date for Drupal 10.

5. Celebrating and Encouraging Community Contributions 

Drupal continues to shine as one of the most scalable, robust, and mature development communities in open source. We heard from Heather Rocker, Global Executive Director of the Drupal Association, about some of the initiatives that are making it easier for first-time and non-coding contributors to get involved.

Both individual and company-level contributors were celebrated on the DrupalCon stage. Congratulations are in order for AmyJune Hineline, the recipient of this year’s Aaron Winborn Award. The award honors individuals for their outstanding commitment to the Drupal project and community. (Check out our interview with AmyJune from season one of the Open Waters podcast.) 

Giving back to Drupal remains a core priority for the Mediacurrent team. This year, we’re proud to show our support for the Drupal Association as a Diamond Drupal Certified Partner and excited to maintain our rank as one of the top five organizational contributors. 

top companies sponsoring Drupal

6. The More Sites, The Merrier

How do you manage and maintain dozens or even hundreds of sites effectively?

That’s the question Jay Callicott, VP of Operations at Mediacurrent, set out to answer in his DevOps track session on scaling Drupal with the power of multisite. Drupal’s multisite capabilities are a standout feature, setting it apart from other CMS platforms. Yet there’s a lot to consider - configuration, deployments, site provisioning, and more. 

This session recording is now available to registered attendees with public access coming in a few weeks. Stay tuned!

Drupal multisite presentation slide shows a decision tree

7. Making Sense of Open Source Security 

Mediacurrent’s Drupal security pros took the stage to tackle a timely topic: open source security for marketing and business leaders.

As open source software like Drupal continues to become widely adopted, sticking to security standards is a challenge. The global losses from cybercrime totaled nearly $1 trillion last year (csis.org), raising the stakes on security even higher. 

Be on the lookout for the session recording, a playbook for optimizing your Drupal security. The presenters covered how to become a security-first organization, embrace process automation, harden Drupal security, and create clear security policies.

These Drupal modules protect from OWASP risks

8. Higher Education: The Stage for Ambitious Digital Experiences

DrupalCon’s industry summits are always a great accompaniment to the regular program, and this year was no exception. At the Higher Education Summit, Director of Development Dan Polant was joined by one of Mediacurrent’s Ivy League partners to co-present a case study session. We saw how the university relies on Drupal to model complex data and got a behind-the-scenes look at its decoupled architecture with Gatsby.

A decoupled approach lets us choose a dedicated solution for a given job

9. Drupal is Powering Hope 

At this year’s DrupalCon, we saw how Drupal is powering some of the most impactful organizations in the world. All but one of the major COVID-19 vaccine-producing companies use Drupal.

Major nonprofits like Habitat for Humanity also rely on Drupal. Through its website, the organization has helped more than 5.9 million people build or improve the place they call home. Mediacurrent has been honored to support Habitat’s mission and partner with them to build a maintainable platform that thrives on support from the Drupal community. The Drupal Showcase session recording for Habitat for Humanity: Building a foundation for digital success will be publicly available soon. We’re grateful for the opportunity to reflect on the success we achieved through our partnership, and we hope others can learn from it. 

COVID vaccine sites Pfizer, Moderna, and J&J run on Drupal

10. The Momentum Continues With Drupalfest 

DrupalCon has ended but the celebration continues with Drupalfest. 

Interested in learning more about contributing to Drupal? Let Mediacurrent’s Community Lead Damien McKenna be your guide. Join Damien for Contrib Open Hours through the end of April. 

Watch the State of Drupal Keynote 

Check out the recording of the State of Drupal keynote.


Cheers to 20 years, Drupal! We look forward to gathering again next year.

Apr 22 2021
hw

We at Axelerant have been contributing to Drupal in our own ways for a long time. In fact, I worked as a full-time contributor to Drupal a few months after I joined. This was around the time Drupal 8 was almost done, and it is thanks to Axelerant that I could contribute what I did at that time. Around the same time, there was community focus on incentivizing contributions, and there were a few websites (like drupalcores) to track them.

Sometime later, in one of our internal hackathons, I built a basic mechanism to track contributions by our team (you can see it at contrib.axelerant.com). It was a weekend hackathon and what we built was very basic, but it set the groundwork for future work. We adopted it internally and reached a stage where we had processes around it (it even fed our KPIs for a while). Over time, we expanded the functionality to (manually) track non-code contributions. More recently, we added support to track contributions from GitHub as well.

Tracking all contributions

Since this was a hackathon demonstrating possibilities using code and technology, we started with only tracking code contributions. Soon, we expanded this to track contributions to events and other non-code means. The latter was manual but this helped us build a central place where we could document all our contributions to the open-source world.

Today, we have tracking from drupal.org and Github and basic checks to determine if code was part of the contribution. On the non-code front, you can track contributions to websites like StackOverflow or Drupal Answers. And for events, you can track contributions such as volunteering, speaking, and even attending (yes, I think participating in an event counts as a contribution).

Now, we have a process for anyone joining Axelerant to set up an account on the contrib tracker. After this, all their contributions to Drupal and GitHub are tracked from that point on. We also remind people frequently to add details of any event they have attended.

Improving contrib-tracker

Contrib Tracker is open source but currently treated as an internal project at Axelerant. We initially set it up as a public repository on our GitLab server, but we came to realize that it was not practical for people to access and help build it there. Today, I moved Contrib Tracker to its new location on GitHub: contrib-tracker/backend. For a while, we had the thought of implementing the website as a decoupled service, and that’s why the repository is named “backend”. Right now, we may or may not go that route. If you have an opinion, please do let me know in the comments.

Moving to Github

The move to GitHub is still in progress. In fact, it has just started. The project is, right now, bespoke to how we host it, and one of the lower-priority items is to decouple that. Once that is done, it should be possible for anyone to host their own instance of Contrib Tracker. But the more important task now is to move the CI and other assets to GitHub. I will be creating an issue to track this work. Our team at Axelerant has already started moving the CI definitions.

That’s it for today. Do check out the source code at contrib-tracker/backend and let us know there how we can improve it, or better yet, submit a PR. :)

Apr 21 2021

Dealing with constant upgrades and changes to project requirements is not just the despair of developers; it also dents clients’ pockets. In project development, time and cost go hand in hand: the more development time, the higher the cost.

Resolving these hurdles is easy with Drupal distributions, as these:

(1) make project development less costly to both build and maintain, and

(2) make it more commercially interesting.

Leading media and publishing enterprises across the globe are already testifying to the positive impact that Drupal has made on their digital business. By enabling a professional editing experience, multi-channel publishing, and personalization, distributions are revolutionizing the way media and publishing houses approach Drupal.

Media and publishing Drupal statistics (Source: Drupal.org)

But what is a Drupal distribution?

A Drupal distribution accelerates website development not just by saving time and cost, but by providing quality code with out-of-the-box, industry-standard features. Maintenance of a distribution is simpler because updates for all its modules and features can be performed in one shot!

According to Drupal.org “Distributions provide site features and functions for a specific type of site as a single download containing Drupal core, contributed modules, themes, and predefined configuration. They make it possible to quickly set up a complex, use-specific site in fewer steps than installing and configuring elements individually.”


Looking to redesign or start your website using Drupal in 2021? Explore this detailed comparison of the top media and publishing distributions (EzContent, Rain, Varbase, and Thunder) to choose the right solution for you.

Top 4 Media & Publishing Drupal Distributions for 2021

  • EzContent

    Content Listing from EzContent


    • Structured content for SEO: With EzContent, you can easily create structured content, and it also provides a range of approaches to enable search engine optimization (SEO), including flexible fields, built-in metatags, Schema.org usage, and a large library of available components for rich text, media, and other common content needs.

Component Library from EzContent


    • A page builder for landing pages: With EzContent’s page builder, editors can create layouts on the fly without any dependency on developers. They can use the convenient drag-and-drop interface to easily place reusable components onto pages as needed. Components can also be reused in different variations by configuring them accordingly.

Drag & Drop Layout Builder from EzContent


    • An API-ready decoupled CMS: EzContent provides out-of-the-box headless CMS integration with Gatsby, React, and Angular. Thanks to the EzContent distribution, editors are still empowered with traditional CMS features like previewing unpublished content and placing blocks and content using Layout Builder. Ready-to-use, open-sourced Angular and React starter kits are available.

Choice of Frontends, OOB Starter Kits


    • AI-powered content generation: For content, leverage auto-tagging and auto-captioning to generate meaningful and contextual tags driven by Google AI and AWS Rekognition.

    • You can also manage and perform intelligent searches leveraging image recognition, image detection, and deep learning algorithms, and generate podcasts from article content on the fly while curating content with the help of Google AI.

Auto-tagging for Images

  • Thunder

    Thunder was designed by Hubert Burda Media and released as open-source software under the GNU General Public License in 2016. It combines current Drupal functionality with lots of handpicked, publisher-centric modules and custom enhancements.

    Key Features & Pros of Thunder

    • Publisher Features: Create articles dynamically with paragraphs. Using paragraphs, you can add text, pictures, videos, Instagram posts, or Twitter cards to your article with a WYSIWYG editor, and change the order of elements by dragging and dropping the content wherever you like.

    • LiveBlog: Cover events in real-time with the liveblog.

    • Google AMP: With the Google AMP integration, you can deliver not just text but also images, galleries, videos, and Instagram and Twitter cards.

    • Improved media handling: With the help of the media browser, it’s very easy to add pictures, galleries, or videos to your article. You can upload new pictures to the media browser with a simple drag & drop.

    • Mobile-Friendly theme: With the Thunder installation, you get a responsive theme for the frontend and backend.

  • Varbase

    Varbase is an enhanced Drupal distribution launched in February 2019. It is packed with adaptive functionalities and essential modules that speed up your development and provide you with standardized configurations. The essence of Varbase lies in the basic concept of DRY (Don’t Repeat Yourself). It removes the need to perform repetitive tasks with the help of the modules, features, and configurations included in the Drupal project.

    Key Features & Pros of Varbase

    • Publisher Features, Flexible Content Structure & Categorisation: The flexible content architecture allows for custom pages, layouts, and integrations for your special use cases. Content is organized through a predefined and scalable structure for better navigation.

    • Media Management: Full-Feature libraries to provide an appealing way to display media galleries.

    • Multilingual options with localized and translated content, translated interface, date formats, country flags, modern fonts, and more.

    • Mobile-Friendly theme: Fully optimized for mobile.

  • Rain

    Mediacurrent created the Rain install profile in May 2019. Rain comes prepackaged and pre-configured with components.

    Key Features & Pros of Rain

    • Rain has a strong content model and focuses on editor features. It combines content, administrative, and editorial features with a base theme and style guide. It also implements a component-based theme, meaning components can be reused, and includes a built-in style guide.

    • Rain has a preconfigured API for exposing content to other applications in the JSON format. Rain can be paired with frameworks like Gatsby.


We did not forget Acquia Lightning!

The list of media and publishing distributions is incomplete without mentioning Acquia Lightning. However, Acquia is ending support for the Lightning distribution in November 2021, at the same time as Drupal 8.

Read the official announcement here

Conclusion

There are a lot of distributions to choose from. Evaluate what your product requires and map it to the vision of the distribution. Do check the roadmap, backlog, compatibility, and maintenance status of the distribution.

If you’re looking for a smart distribution with out-of-the-box components, and you want to be flexible in your choice of frontend, EzContent provides a starter kit to cut your development time by 30%.

Apr 21 2021
hw

Drupal 8 was a major revolution for the Drupal community in many ways, not least because of the complete rearchitecting of the codebase. We picked up the modernization of various frameworks and tools happening in the PHP community and applied it to Drupal. Of course, this makes complete sense because Drupal is written in PHP. One of the things we picked up was PHP’s shift to the object-oriented programming approach.

The shift to using objects enabled us to collaborate with the rest of the PHP community like never before. On the language front, we had PHP-FIG working on standards such as PSR-4, which was adopted by many libraries. And on the tooling front, we had Composer, which allows us to use other packages with minimal effort. These developments made it possible for us to build Drupal on top of the work of the larger PHP community. However, this meant that we had to follow the same style of programming as those libraries, and thus began a massive re-architecture effort within Drupal.

The refactoring story

Drupal is a product built and maintained by a diverse group of geographically distributed and mostly unpaid people. Further, Drupal’s value is not just in the core system, but also in the tens of thousands of modules, themes, and distributions maintained by a similar group of people. All the modules, themes, and distributions depend on core Drupal’s API, which means that making any change to core is very risky.

Because of this, it is simply not possible to isolate Drupal and refactor it. We had to incrementally refactor parts of Drupal while making sure there was sufficient documentation for people to upgrade their modules. This inadvertently, but predictably, led to significant overengineering in Drupal’s codebase. There are a lot of parts of Drupal, several layers down, which may seem messy, but there is a reason for them. The problem is that those reasons are very hard to surface. More importantly, they become a hurdle for core contributors who want to simplify the system.

I have to caveat this by pointing out that I am not an expert core developer myself. Not even close. The above is just my observation from working with core. You may say that this is practically an outsider’s perspective, and you are right. But I have learnt that an outsider’s perspective is often the most valuable one.

Objects, objects everywhere

The new structure has brought in dozens of new practices (such as the dependency injection container). This often means that you need to augment the structure to support them. Soon, we had classes named with multiple adjectives and nouns everywhere, reminiscent of Java. This makes the code even more opaque, putting up more hurdles for people who are new to Drupal. Even experienced PHP developers may have trouble navigating the sometimes unintuitive names of various components because of the historical context.

With people managing to work in this new reality, I have found that their answer to all problems is simply more objects: copy-paste solutions from various places (aka StackOverflow) and make it work. At work, I have seen more and more code samples from people who are convinced they are presenting a superior solution simply because they are using classes with names like “subscriber” and “controller”, while not realising they are breaking functionality that would not have broken in Drupal 7.

This may seem vague at this point and I hope to present more specific examples in future posts. I will leave you with these abstract thoughts for today. I hope you found this interesting even if not useful.

Apr 20 2021

open waters podcast logo

New Year, New Faces, Same Focus

We’re back! Season 2 of Mediacurrent’s Open Waters podcast is officially on its way. We recorded a special trailer episode to celebrate. 

Mark Shropshire and Mario Hernandez are returning as hosts. Just like last season, our purpose is to explore the intersection of open source technology and digital marketing. We’ll share the challenges and solutions we see with our own clients and in the market.

During season 2, we'll be covering topics including:

  • How to optimize your digital strategy
  • Accessibility as a business imperative
  • UX design principles
  • How it all works together using open source technologies like Drupal

Welcome to Season 2

Meet the Hosts 

Mark "Shrop" Shropshire 

Shrop headshot

Role at Mediacurrent: Senior Director of Development 

Podcast creds: Shrop can be found talking about open source security, leadership, mentoring, and more on his two personal podcasts: goServeOthers and SHROPCAST

What are you most excited about for season 2? The opportunity to learn more about what other talented folks are doing in the marketing and tech industries. It is easy to be comfortable with what you know today, but I find that having conversations with others can really challenge you to think differently and learn new niches.

Mario Hernandez
Mario headshot

Role at Mediacurrent: Head of Learning 

Podcast creds: Member of the original cast of Open Waters  - Mario co-hosted the first season of our podcast.

What are you most excited about for season 2? I am looking forward to interviewing industry leaders to discuss all things open source. I am also really excited about teaming up with Shrop. He has vast experience in podcasting and I am ready to learn from him.

Podcast release schedule 

New episodes will be released monthly on the Mediacurrent blog and wherever you listen to your favorite podcasts. You can also follow us on Twitter for updates.

Subscribe to Open Waters

Visit mediacurrent.com/podcast to hear the latest episode and subscribe to the podcast on your favorite podcast app. Share the podcast or tell a friend about it - tag us @mediacurrent on Twitter. 

How can I support the show?

Subscribe, leave a review, and spread the word on social media with the hashtag #openwaterspodcast. Have an idea for a topic? Get in touch with our hosts at [email protected]

Apr 20 2021
hw

Now that we have discussed some of the concerns with career growth, let’s talk about beginning a career in Drupal. I am not aiming for comprehensiveness but I hope to give a broad overview of what a Drupal developer role looks like in the real world. Needless to say, every company or agency is different. You might have a very different set of responsibilities depending on your organization. One of those responsibilities would be to build and develop Drupal websites and we are going to talk about that here.

Working in an organization essentially means working with other people. That is how you multiply your impact. All roles involve some amount of dealing with people. Initially, you might only have to worry about collaborating with others, but as you grow, the scope increases. Eventually, you will have to mentor and maybe even manage people. All along this journey, you will also increase your knowledge and your area of technical impact.

Early roles

As someone with very little programming experience, you might start off with Drupal by building sites using the UI and simple CLI commands (Drush or Drupal Console). Your challenge would be to understand content types, taxonomy terms, users, roles, permissions, etc. You would need to understand how to design a content type that actually represents what you want and add fields to it. Moreover, you might know enough PHP to find snippets to alter forms or pages and implement those as per requirements.

At this stage, you might not be expected to solve complex problems or scale websites with heavy traffic. You might be working under the supervision (and mentorship) of someone with more Drupal experience. You would be expected to collaborate with other people on the team, such as front-end and QA engineers (if any), and complete your tasks the best you can.

You would need a strong set of fundamentals here such as git, basic PHP programming skills, and a working knowledge of HTML, CSS, and JavaScript. You would need more of an awareness of your software stack but wouldn’t have to go deep.

Intermediate roles

With around 2-5 years of experience, you will begin to pick up a more complex set of tasks. You would write more complex modules with proper object-oriented code. The site-building requirements would also be more complex: you would have to implement content types with complex relationships to other entities, and the code required to tie these things together would also get more complex. You will also have to think more about how to solve a requirement in the best way rather than just implementing something.

At this stage, you would be expected to participate in code reviews and share your feedback. In fact, some organizations (like mine) encourage engineers at all levels to participate in code reviews, so this might not be new. Take this opportunity to also seek mentorship from other folks to go deeper into Drupal and other technologies. While you only had an awareness of your stack earlier, you should now have a working knowledge of the various elements in it. You would also be involved in discussions with other teams to build the integrations your system needs. Essentially, you should understand how the pieces fit together and how they participate in making the system work.

You should also begin to question requirements and try to understand the business problem. Your value at this level is not just your technical skills but also solving business problems. In other words, it’s not just about solving the problem right but solving the right problem.

Advanced roles

After about 5-10 years of experience, you will be responsible for the entirety of a system. You may begin with simpler systems where Drupal plays a major role and then move on to more complex systems where Drupal is just one piece of the puzzle. At this stage, more people will turn to you with their problems, and you will have to figure out whether to solve those problems yourself, guide them, or delegate to someone else.

You should also be much more familiar with the entire Drupal ecosystem and how it can fit in with other parts of your stack. You are close to approaching what could be called the “architect” role (but we won’t go there today). As an individual contributor, you will have a lot more impact, but you will also play a significant glue role in your team (or teams).

This is a vast topic and I might come back to this in a future post (or multiple posts). Like I said before, this is not meant to be a comprehensive guide, but just a broad overview.

Apr 19 2021
hw

As the Director of Drupal Services at Axelerant, one of the things I often worry about is the growth of each member of my team. We are a Drupal agency, so there are a lot of options for people to choose from. Yet, there are different concerns with working on Drupal for a long time. Today, I’ll talk about some of those concerns and what the mitigating factors are. I can’t claim to present a perfect solution, even less so in a post written in less than an hour. But I hope to at least get the topic started.

Concern: Drupal is not cool anymore

When Drupal celebrated its 18th birthday, there were jokes around how Drupal is now old enough to drive (or something like that). This year, Drupal is 20. You might be surprised but a software’s life-cycle is not the same as a person’s life-cycle :). So, while 20 might be the age where a person is cool (I’m showing my age, aren’t I?), that’s not the same for software.

Yes, Drupal is now in the boring technology club. It has been so for years, with a definitive transition around the Drupal 8 period. This is not a bad thing. It’s only a bad thing if you want your software stack to be cool, even at the price of working against deadlines to fix issues with everything that is just a little out of the way. Drupal is mature, predictable, and boring. And that means you should not worry about Drupal going out of style anytime soon.

Fact: Drupal does valuable things; cool, but also valuable

However, the people who have this concern don’t worry about the software living on. They are missing the fun that comes with working with cutting-edge technology. They’re right. Fun is important; software development is hard enough without taking the fun out of it. The fact is that Drupal can still be used in cool new ways. For example, decoupled Drupal is one of the newest trends in the industry to offset Drupal’s limitations in the front-end area. Another, subtler way Drupal became fun is by opening itself up to the PHP ecosystem with the “get off the island” movement in Drupal 8. You don’t have to be limited by Drupal’s API and libraries to do what you want. The entire PHP ecosystem of packages is available for direct use within Drupal.

Further, I encourage people not to code just for the sake of coding. Solve a problem: any problem, even if it is a hypothetical issue or an imaginary challenge. Solve it and put it out there to help other people. You make the act of coding about people, and in turn, you help yourself. But I digress. Drupal solves the problem of quickly building a web presence for activities that help people. This tweet comes to my mind again here. Even if Drupal’s software stack is not cool, what it does is cool, and that’s a big deal for me.

Concern: There is not much to do in Drupal

As developers grow and learn more, they find that they are learning less and less in their day-to-day work. This is closely tied to the first concern I talked about (Drupal is not cool). People may not say that Drupal is uncool, but the underlying thoughts are the same: all tasks are of the same kind, and I am not learning anything anymore. They’re right as well, but there are ways around it.

I previously wrote that most requirements of building a website with Drupal can be achieved just by configuring it in the UI. You only have to write some glue code to get it all working. Opportunities to develop entirely new functionality in Drupal are now rare unless you are working on the Drupal product itself. This is when people start looking into other frameworks and languages. Again, I highly encourage that because the learning benefits both the person and the project, but that’s beside the point of this post.

Fact: There is a lot to do around Drupal

The problem is in finding opportunities around Drupal. I have already hinted at one easy way out of this: work on the Drupal product. That is to say, contribute to the Drupal core and contrib space. These are the people solving the hard problems so that we don’t have to. If you want to solve hard problems, we need you there. I am sure your organization will support you if you are interested in contributing to Drupal.

Next, look at the integrations where you could help. Very few Drupal sites today work in isolation. Either you have a completely decoupled site, or at least integrations with other complex systems, and these are still challenging areas to work in.

You will notice that the solutions here largely depend on the organization where you work. Yes, you would need the support of your organization, and there is a good chance you will get that support if you ask. If you don’t, you can tackle this yourself, but that is the subject of another post.

Concern: No one respects Drupal or PHP

PHP (and Drupal) is often the subject of cruel jokes ranging from competence to security issues to just being old. Chances are that you have come across enough of them and so I won’t write more about this.

Fact: PHP and Drupal communities are some of the most respected ones

To be blunt, don’t listen to these people. In my opinion, they are living 20 years in the past or are influenced by people who lived then. PHP has made a comeback with a robust feature set and a predictable release cycle. So has Drupal. Moreover, Drupal has rearchitected itself to be more compatible with the PHP ecosystem. Finally, code is just about getting the job done, and PHP commands a significant share of the web applications market, so you’re in good company.

And it’s not just about code. PHP and Drupal communities are some of the most welcoming communities in the world. They are often talked about in various places (just search for podcasts on the topic) and people cite examples of how they found welcome and support in the communities.

I am going to leave this post here even though there is a lot more I can say. But these are my thoughts for today and I hope you enjoyed them. I hope to come back to this in another post some day.

Apr 18 2021
hw

As of this writing, there are 47,008 modules available on Drupal.org. Even if you filter for Drupal 8 or Drupal 9, there is still an impressive number of modules available (approximately 10,000 and 5,000 respectively). Chances are that you would find just the module you are looking for to build what you want. In fact, chances are that you will find more than one module to do what you want. How do you decide which module to pick or if even a particular module is a good candidate?

Like all things in life, the answer is “it depends”. There are, however, a few checks you can make to reach a better decision. Well, these are the checks I make when trying to pick a module, and I hope they can help you too. Don’t take them strictly; just use them as guidelines to help you decide. Also, if a module fails one of these checks, it doesn’t mean that the module is a bad choice. It only means that you might be making a trade-off. Software is all about making trade-offs, so this is nothing new.

Lastly, I’ll focus on modules in this post, but most of this advice applies to themes as well.

First-look checks

These are the most basic and easiest checks you can make on a module. Except for the first one, these are not strong indicators but they can quickly give you an initial impression of the module. There might be excellent modules that are perfectly suited to your needs but fail these checks, which is why you should only use these checks to differentiate between two modules. Yeah, the irony of the name is not lost on me.

Version

Is the module even available for your Drupal version? If you’re using Drupal 9, you need a version that supports Drupal 9. Earlier, just reading the module version would tell you if it was compatible. For example, if the module version is “8.x-3.5”, you know that the module is for Drupal 8, not Drupal 7. You might think that the module is not for Drupal 9 either, but it’s not that simple anymore.

Screenshot of version information on Drupal.org

As you can see in the above screenshot from the Chaos Tools module, the 8.x-3.5 version is for both Drupal 8 and Drupal 9. This changed after Drupal.org began supporting semantic versioning for contrib projects. These example release tags show different styles of version numbers you might see and this change record explains how a module may specify different core version requirements.

Module page

Is the module page updated? Does the description make sense considering the Drupal version you are targeting? If you’re looking for the Drupal 9 version of the module and the text hardly talks about that, maybe the module is not completely ready for Drupal 9 or has features missing.

Screenshot of project page on Drupal.org

Another clue you may have is the timestamp when the page was last updated. It may just be that the maintainer has forgotten to update the page even if the module works perfectly fine. For this, we go to the next set of checks.

Recent commits

Has the module been updated recently? If it was last updated years ago, chances are it won’t work with the latest version of Drupal core. Even if it works with your version of Drupal core, would you be able to upgrade once you upgrade to the latest version of Drupal core?

Screenshot of commits block on a project page

The block on the module page gives a quick summary of the recent commits, but don’t rely entirely on this. The block only shows current committers. If there are newer commits by a previous committer, or commits made elsewhere and pushed here (this would happen if someone were maintaining the module on GitHub, for example), they won’t show here. To be sure, click on the “View commits” link to see all the commits.

Issues

Is there a discussion going on about improving the module? The issues block on the project page gives a quick summary of what’s happening with the module. Be careful though: some module maintainers choose to maintain the module on GitHub or elsewhere. In those cases, the issues here would show no activity (or the issue queue may be entirely disabled).

Issues block on project page

Resources

Are there other resources for the module? Is there external documentation or documentation within the drupal.org guides? The Documentation and Resources block on the project page will point you to such links and other useful links.

Code structure check

These checks take a little more time than just quickly scanning the project page. These indicators are a little more reliable but not the sole determinators of success. For all of these checks, first, go to the code repository by following the “Browse code repository” link from the project page and then, select the branch for the version you want.

README file

In a previous section, I mentioned that the project page may have an outdated or missing description. This happens over time when there are multiple versions and the maintainers find it difficult to keep the page updated. However, chances are good that the maintainers keep the README file in the repository updated. Go to the code repository (see screenshots above) and find the README file. The actual file may be named either README.txt or README.md.

If this file is maintained, there is a good chance the module is well maintained and documented and you would have fewer problems using it.

Code structure

You would need experience with Drupal development to make this check. Look at the module code and see how it’s structured. Are the classes neatly separated from the rest of the code? Does the code maintain a separation of concerns? Are there tests? Is everything dumped in the “.module” file or a bunch of “.inc” files (ignore this if you are checking Drupal 7 modules)? If the module stores something in the config, is there a config schema?

There are metrics we could gather to understand this better, but not from the code repository browser alone. An experienced developer, however, can tell by looking at how well the module follows Drupal conventions. This is important because following these conventions will make it easier to keep the module updated for future versions of Drupal. You don’t want to start with a module and be stuck on an older version of Drupal because of this.
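
As an example of one such convention, here is a minimal sketch of a config schema entry that a well-structured module might ship in a file like config/schema/example.schema.yml; the module and setting names are hypothetical:

example.settings:
  type: config_object
  label: 'Example settings'
  mapping:
    items_per_page:
      type: integer
      label: 'Items per page'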

Test

Here is where we actually test the module and see if it works. These are the strongest indicators but also the most time-consuming to check.

Simplytest.me

The easiest way to check a module is simplytest.me. This excellent community-run service lets you test contrib projects along with patches. Type the name of the module and click “Launch Sandbox”.

SimplyTest screenshot

There is also an “Advanced options” section that lets you add more projects (if you want to test this module along with another module) and patches. Select the module or combination of modules you want, click “Launch Sandbox”, and you have a brand new Drupal installation to play with for 24 hours. This hands-on testing will help you determine if the module fits your needs.

Test locally

If you don’t want to test on simplytest.me, or you simply want to evaluate the code locally, use one of the quick-start Drupal tools to get a Drupal installation with the module. For example, with axl-template, I can type this command to get a Drupal site with the Smart Date module and a few others.

$ init-drupal drupal/test -m smart_date -m admin_toolbar -m gin

You can also use this setup to evaluate the codebase more closely, and maybe even run some code quality checks on it if you’re interested.

I hope this quick post was useful to you. Like I said before, don’t treat this as a complete list but as a guide. In the end, your own experience and actual tests will tell you more; the above guidelines are only here to save you time in getting there.

Apr 17 2021
hw
Apr 17

I picked up this topic from my ideas list for this #DrupalFest series of posts. I didn’t think I would want to write about this because I don’t think about features that way. One of the strengths of Drupal is its modular architecture and I can put in any feature I want from the contrib space. In fact, I prefer that, but that’s a different topic. I am going to write this very short post anyway because I am now thinking about this from a different point of view.

To write this post, I thought about the various things that bother me in my day-to-day work. I thought of how we could have simpler dependency injection, fewer (or clearer) hooks, or less boilerplate, but then I realized that none of those matter to me very much right now. They can be improved, but they are inconveniences and I can get over them.

Easier testability is what I want from the future versions of Drupal. I have realized that ease of testability is the strongest indicator of code quality. A system that is easily testable is implicitly modular along the right boundaries. It has to be; otherwise, it is not easy to test. A good test suite comprises unit tests, integration tests, and feature tests. If it is easy to write unit tests, then the system is composed of components that can be easily reused (à la SOLID principles). Integration and feature tests are easier to write at first, but eventually they get harder unless the system is built well.

Drupal core testability

Now, Drupal is one of the most well-tested pieces of code out there. Each patch and every commit is accepted only after it passes tens of thousands of tests. Testing is also a formal gate in adding new features (and some bugfixes). That is to say, any new feature needs to have tests before it can be merged into Drupal core. Consequently, testing is common and core contributors are skilled in writing these tests.

This testability is not easily carried forward to most kinds of websites we build using Drupal. Building a typical Drupal website is mostly a site-building job of cross-connecting various elements. This happens either by configuring the site via the UI or by using hooks, which are essentially magically named functions that react to certain events. With Drupal 8, the hooks-based style of writing custom modules changed significantly with modern replacements, but the principle remains the same. This style of code, where you have multiple functions reacting to events and altering small pieces of data, is very difficult to unit test. This means that the only useful tests possible are often feature tests (or end-to-end tests). Unfortunately, these tests are expensive to run and failures do not always point to an isolated unit.
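
To illustrate why this style resists unit testing, here is a minimal, hypothetical hook implementation; it is a global function reacting to an event and mutating data in place, so there is no object to instantiate or mock in isolation:

/**
 * Implements hook_entity_presave().
 */
function my_module_entity_presave(\Drupal\Core\Entity\EntityInterface $entity) {
  // React to an event by altering a small piece of data in place.
  if ($entity instanceof \Drupal\node\NodeInterface && $entity->hasField('field_note')) {
    $entity->set('field_note', 'Last saved on ' . date('Y-m-d'));
  }
}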

Testing tools

There have been several attempts to make writing tests easier generally and for Drupal. The package that stands out to me for this is drupal-test-traits. This package provides traits and base classes that make it easy to test sites with content. You would set the configuration to point to an installed site and run the tests. The traits provide methods to create common entities and work with them and the tear-down handlers will clean those entities up at the end. All you have to worry about is providing an installed site.
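
To give an idea of how this looks in practice, here is a minimal sketch of a test built on the package’s ExistingSiteBase; the project namespace and field values are hypothetical:

<?php

namespace Drupal\Tests\my_project\ExistingSite;

use weitzman\DrupalTestTraits\ExistingSiteBase;

class ArticleTest extends ExistingSiteBase {

  public function testArticlePage() {
    // Create a node on the installed site; the tear-down handlers clean it up
    // automatically at the end of the test.
    $node = $this->createNode([
      'type' => 'article',
      'title' => 'A test article',
    ]);

    // Visit the node and verify that it renders.
    $this->drupalGet($node->toUrl());
    $this->assertSession()->statusCodeEquals(200);
    $this->assertSession()->pageTextContains('A test article');
  }

}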

In fact, the common problem with testing Drupal is testing an installed site. Most Drupal sites I have worked with are not easy to install, despite best efforts. Over time, you need to rely on the database to get a working copy of the site. This is true even if you follow the perfect configuration workflow. It is very common to have certain content required to run a site (common block content, terms, etc) and configuration workflows cannot restore such content. There are other solutions to these problems but they are not worth the pain to maintain. We’ll not go into that here.

Handling the database and assets

Sharing databases turns out to be the simplest way to create copies of a site, which shifts the problem to making such databases available to the testing environment (e.g., CI jobs). If you are running a visual regression test, you would also need other assets such as images referenced in blocks and so on. At least for the database, we have prepared a solution involving Docker, implemented as a composer plugin. Read more about it at axelerant/db-docker. This plugin makes it easy for the team to manage database changes as a Docker image which can be used by the CI job (or another team member).

Handling assets is a more complicated problem. We traditionally solved this with stage_file_proxy and I am planning to work on this more to build a seamless workflow like we did for databases. Any ideas and suggestions are very welcome. :)

The testing workflow

Drupal has long been said to be a solution to build “ambitious digital experiences”. Such systems are built by teams and any team needs a workflow on which everyone is aligned (otherwise it wouldn’t be a team). We have seen many improvements in various aspects of Drupal over time that cater to a team workflow (configuration management comes to mind clearly). In my humble opinion, I feel standard workflows for improving testing should be a priority now.

I will leave my post here as it is already much longer than I intended. I know I didn’t present any comprehensive solutions, and that’s because I don’t have one. My interest was in sharing the problem here. There are only pieces of the solution so far, and I am interested in finding the glue that brings them together.

Apr 16 2021
hw
Apr 16

Drupal is a CMS. One might even say that Drupal is a good CMS, and they would be right about that, in my not-so-humble opinion. At its core, Drupal is able to define content really well. Sure, it needs to do better at making the content editor’s experience pleasant, among other things. But defining content structures that are malleable to multiple surfaces has always been one of Drupal’s strengths. This makes Drupal an excellent choice for building a Digital Experience Platform (DXP).

The concept of a DXP has been popular for a few years but it has peaked, not surprisingly, in the last year with organizations now forced to prioritize their digital presence above all else. Companies have now realized that, with the amount of information being thrown at each of us, they have to make sure their presence is felt through all media. Building a coherent content framework that can be used for all of this media is no easy task. Information architects need powerful tools to flexibly define how they would store their content. Drupal has been able to provide such tools for a long time.

Digital Experience is a strategy

You might have realized by now that a DXP is not a product, but a collection of tools that can help you execute your strategy. With the proliferation of media, it is important that you convey a consistent message regardless of how someone consumes it. To be able to do that, you have to identify the various ways you would talk to your customers and build a strategy. This is highly subjective and customized to your needs, and I won’t go into much depth here, but the output we want is a coherent content architecture: one that can represent your messaging to your customers.

Once you’re able to formulate this strategy, that is when you begin implementing it within your DXP. The content architecture you have identified goes in the CMS within the DXP and it needs to be flexible enough. Drupal is a very capable CMS for such requirements. It supports complex content models with relationships among pieces of content, rich (semantic) fields, and multilingual capabilities. You can also build advanced workflows for content moderation. This enables Drupal to be the single source of truth for all content in an enterprise. This, too, should be important in your content strategy and Drupal makes it easy to implement.

Discovery

A lot of this may sound like a lot of theory and not enough practice. In a way, that’s true. Think of this as a discovery stage for the problem you’re solving. It’s important that you spend enough time here so that you identify the problem clearly. Solving the wrong problem may not be very expensive from a technical point of view, but it is frustrating to your content team. Involve various stakeholders within the DXP to determine if the content model you are building will break their systems. For example, your long text field may be good for web and email, but unusable for a text message. But if you break up your content into a lot of granular pieces, you have to figure out how to piece it together to build a landing page.

You also have to determine how your content can be served to more diverse channels (e.g., voice assistants or appliances). Depending on your domain, you have to make trade-offs and build a model that is workable for a variety of consumers. But that’s only one side of the story. You also have to make sure that the content is easily discoverable (both internally and externally), easily modifiable, auditable (revisions), trackable (workflows), and reliably stored (security and integrity of the data). Typically, there is an ecosystem of tools to help you achieve this.

Integrations

Drupal already handles some of these things and it can integrate well with other systems in your infrastructure. Drupal 8 began a decoupling movement which turned into hype and is now being rationalized. I wrote about it in a separate post. To be clear, decoupling was always possible, but Drupal 8 introduced web services in core, which accelerated the pace. Today, you only need to enable the JSON:API module to make all your content immediately discoverable and consumable by a variety of consumers.

Apart from being the content server, Drupal also handles being the consumer very well. As of Drupal 8, developers can easily use any PHP package, library, or SDK to communicate with different systems. Again, this was possible before but Drupal 8 made it very easy by adopting modern PHP programming practices. Even if a library or SDK is not available, most systems expose some sort of API. From Drupal’s point of view, use the in-built Guzzle or another HTTP client of your choice and invoke the API.
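
As a rough sketch, invoking an external API with the built-in HTTP client could look like the following; the endpoint URL and module name are hypothetical:

// Call an external API using Drupal's built-in Guzzle client.
$client = \Drupal::httpClient();
try {
  $response = $client->request('GET', 'https://api.example.com/products', [
    'query' => ['status' => 'published'],
    'timeout' => 10,
  ]);
  $data = json_decode((string) $response->getBody(), TRUE);
}
catch (\GuzzleHttp\Exception\GuzzleException $e) {
  // Log the failure instead of letting it break the page.
  \Drupal::logger('my_module')->error($e->getMessage());
}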

Where does Drupal fit in

Drupal is now a very suitable choice for beginning to build your DXP. However, that is not the complete story. All systems evolve, requirements change, and strategies shift. It should be easy for Drupal to shift along with them.

For example, Drupal’s current editing experience was excellent when it came out, but that was several years ago. Today, building an intuitive editorial experience with Drupal is the most pressing challenge we face. There are a lot of improvements in this space, and there will be more with newer versions of Drupal. It helps that the community has picked up a reliable release schedule, which has built users’ trust in Drupal. Because of the regular release schedule and focused development, we see editorial features such as the layout builder and a modern theme as a part of Drupal.

It may seem that this is not important, or at least not as important as “strategy” and “infrastructure”. That’s a dangerous notion to have. Ultimately, your system will only be as effective as your team makes it. An unintuitive UI makes for mistakes and a frustrating experience. And if it is hard to maintain the content, it will not be maintained anymore. If there is anything more dangerous than missing content, it is outdated content.

Customization

Apart from the editorial experience, a flexible system is important for an effective DXP. If the content store cannot keep up with the changes required for new consumers, or even existing ones, it will become a bottleneck. In organizations, such problems are solved by hacking on another system within the DXP or running a parallel system. Both of these approaches mark the beginning of the demise of your DXP.

That is why it is important for you to be able to easily customize Drupal. Yes, I’m talking about low-code solutions. Drupal has figured out how to modify the content structure with minimum developer involvement, if any. It needs to make this easier for other functionality as well. Various features of Drupal should be able to interact with each other more flexibly and intuitively. For example, it is possible to place a view within a layout, but it is not intuitive to do so. We have to identify such common problems and build solutions for site-builders to use. Again, I am not going to go into depth on this.

Building a digital experience platform for your organization is a massive undertaking and I cannot hope to do justice to all the nuances within a single blog post written over a couple of hours. But I hope that this post gave some insights into why Drupal is relevant to this space and how it fits into the picture.

Apr 15 2021
Apr 15

Featuring a video from Acro Media’s YouTube tutorial series Tech Talk, this article will walk you through setting up an awesome Drupal Commerce product catalogue using Search API and Solr, and then adding various ways of filtering the results (product search, sorting options and Facet categories).

Ten steps to creating a product catalogue with Search API, Solr & Facets

These 10 steps will get you a Search API index using Solr, a View set up to display the results of the index, and three different ways to filter the results (search, sorting and Facets).

The end result of this guide will be a catalogue that functions in the same way as our Urban Hipster Drupal Commerce demo site’s catalogue. You can try it here. If you don’t already know why we use Search API, Solr and Facets for catalogues, check out this article to get up to speed.

Even though we’re going to be using products, once you understand how it works you’ll be able to apply the same method to other types of content such as a blog, videos, resources, and more. The datasource can change but the process is the same.

Let's get started! Follow along with this video or skip below for a written guide.

[embedded content]

What you need before starting

  1. A running Solr server (Solr 6.4+)
    This tutorial assumes you have Solr running and can make a new core.
  2. Drupal 8 installed with the following modules:

    TIP: Get most of what you need quickly with Commerce Kickstart. Note that you will still need to install the Facets module after getting your Kickstart package.
    • Commerce
      composer require drupal/commerce
    • Search API
      composer require drupal/search_api
    • Solr
      NOTE: This module requires that you're running Solr 6.4+ and PHP 7
      composer require drupal/search_api_solr
    • Facets
      composer require drupal/facets

Getting started

  1. Install/start Solr and add a new core that we’ll use later.
  2. Enable the Commerce, Search API, Solr and Facets modules.

Setup a basic Commerce store

For this tutorial, get your site and basic store set up by doing the following:
  1. Add a product category taxonomy vocabulary that is at least 2 levels deep.
    If we use clothing as an example, we might have Men and Women as the first level, then t-shirts, shorts and shoes as the second level for each.
  2. Setup your basic Commerce store, product types and product variation types.
    If you’re new to Commerce, take a look at their documentation to get up and running quickly.

    NOTE: Make sure to add the taxonomy vocabulary you created as a ‘taxonomy reference’ field to your Product Type.

  3. Add 10 or more simple products for testing the catalogue as we go.


Add a Search API server and index

Search API server

Admin: Configuration > Search and metadata > Search API
Admin menu path: /admin/config/search/search-api

  1. Click ‘Add server’.
  2. Configure the server.
    1. Name your server and enable it.
    2. Set ‘Solr’ as the server ‘Backend’.
    3. Configure the Solr connector.
      The defaults are usually fine. The main things to add are:
      • Solr connector = ‘Standard’.
      • Solr core = Whatever you named your core.
    4. Under ‘Advanced’, check ‘Retrieve result data from Solr’.
    5. Look over the remaining settings and update if you need to.
Search API index

Admin: Configuration > Search and metadata > Search API
Admin menu path: /admin/config/search/search-api

The index is where you set which data source is used by Search API. Later, you’ll also specify the fields to be used for filtering the displayed results.

  1. Click ‘Add index’.
  2. Configure the index.
    1. Name your index.
    2. Data source should be ‘Product’
      This can be anything, but we’re creating a Commerce catalogue and so we want to use the store products.
    3. Select the server you just created.
    4. Save. Don’t add any fields for now, we’ll do that later.
    5. Go to the ‘View’ tab and index your results. This will index all of the products you have added so far.

Create a View for the catalogue

Admin: Structure > Views
Admin menu path: /admin/structure/views

The View will use the data source we’ve identified in our index and allow us to create a catalogue with it, and then assign ways of filtering the catalogue results (i.e. a search field and/or facets).

  1. Create a new View.
    1. View Settings, select your index.
    2. Create a page (this will become our catalog).
  2. View Display settings.
    1. Format > Show
      Set as ‘Rendered entity’, then in the settings, set your product types to use a ‘Teaser’ view mode instead of the default.

      NOTE: If you don't see the 'Rendered entity' option, this seems to be a bug with Views and happens when you don't select your index before checking the 'Create a page' checkbox. To fix this, just refresh your page to start over. If that doesn't work, flush your caches.

      NOTE: You may need to create this view mode if it doesn’t already exist.

      NOTE: You could alternatively use Fields instead of view modes, but I like to keep my product display settings all within the product type’s display settings. Then you can potentially customize the display per product type if you have more than one.

  3. Save the view.
    These basic settings should give us our overall catalog. You can confirm by previewing the view or visiting the page you just created.

Add Fulltext datasource fields for a catalog search field

Now we’ll start setting up a Fulltext search field to let our users filter results using a product search field. The first thing we need to do is add some datasource fields to our index that the search will use.

  1. Go to your Search API Index and go to the Fields tab.
  2. Add Fulltext fields that you would like to be searchable (such as Title, SKU, Category taxonomy name, etc.).
    Here’s an example for adding the title:
    1. Click ‘Add fields’.
    2. Under the ‘Product’ heading, click ‘Add’ beside the ‘Title’ field.

      NOTE: If you’re adding a different field instead, you may need to drill down further into the field by clicking ( + ) next to the field. For example, to make a taxonomy term a searchable field, you would go to Your Vocabulary > Taxonomy Term > Name.

    3. Click ‘Done’.
    4. Set the field ‘Type’ to ‘Fulltext’.
      This is an important step as only Fulltext fields are searchable with the user-entered text search we are currently setting up.

      NOTE: Under the fields section is a ‘Data Types’ section. You can open that to read information about each available type.

    5. Optionally change the ‘Boost’ setting.
      If you have more than one Fulltext field, the boost setting allows you to give a higher priority to specific fields for when the terms are being searched.

      For example, multiple products could have a similar title, but each product would have an individual SKU. In this case, SKU could be given a higher boost than title to make sure that search results based on the SKU come back first.

  3. Next, add another field for the ‘Published’ status.
  4. Once you’ve added this field, set its type as ‘Boolean’.
  5. Reindex your data (from within the index view tab).

Set up the catalogue search field within the catalogue View

We can now set up the actual search field that our customers will use to find products, and use the datasource fields we added to our index to do this.

  1. Go to your catalog View.
  2. Under ‘Filter criteria’.
    1. Add ‘Fulltext search’ and configure its settings.
      • Check ‘Expose this filter to visitors, to allow them to change it’.
        IMPORTANT: This is what gives the user the ability to use this search field.
      • ‘Filter type to expose’, set as ‘Single filter’.
      • ‘Operator’, set as ‘Contains any of these words’.
      • ‘Filter identifier’, optionally adds an identifier into the URL to identify a search term filter.
        (i.e. yoursite.com/products?your-filter-identifier=search-term)
      • Apply/save your settings.
    2. Add ‘Published’ and configure it so that it is equal to true.
      This uses the field we added to the index earlier to make sure the product is actually published. Chances are you don’t want unpublished results shown to your customers.
  3. Under ‘Sort criteria’.
    1. Add ‘Relevance’.
    2. Configure it so that the order is sorted descending.
      This will show the most relevant results first (factoring in the boost you may have applied to your index fields).
  4. Now we need to expose the search field to our customers. To do this:
    1. Open the ‘Advanced’ section of your catalog view.
    2. In the ‘Exposed Form’ area.
      • Set ‘Exposed form in block’ to ‘Yes’.
        This creates a block containing a search field that we can place on the site somewhere.
      • Set ‘Exposed form style’ to ‘Basic’ and update the settings. For now, the settings you might change are customizing the submit button text and maybe including a reset button.
  5. Add the search block to your site.
    Admin menu path: /admin/structure/block
    1. In your preferred region, click the ‘Place block’ button.
    2. Find the Views block that starts with ‘Exposed form’ and click ‘Place block’.
      Its full name will be determined by your view’s machine name and page display name (i.e. Exposed form: products-page_1).
    3. Configure the block as you see fit, and save.
  6. Test your search!
    You should now be able to see the search field on your site frontend and try it out.

Add more datasource fields for sorting options

We can optionally sort the catalogue and search results with some additional sorting filters, such as sorting by Title, Price, Date added, etc. Let’s add the ability to sort our products by title with the option to choose ascending or descending order.

  1. Go to your Search API Index fields and add another 'Title' field the same as you did earlier. However, this time you want to change the field ‘Type’ to ‘String’. You should now have two Title fields added, one as ‘Fulltext’ and one as ‘String’.

    NOTE: The field type can be different depending on what field you’re adding. If you’re adding a sorting field such as Price > Number, you might use the ‘Decimal’ field type.

    TIP: I would recommend changing the label for the new Title field to something like ‘Title for sorting’ so that it’s easier to identify later. You could even change the fulltext Title label to ‘Title for search’, just to keep them organized and easy to understand.

  2. Reindex your data (from within the index view tab).
  3. Go to your catalogue View.
    1. Under ‘Sort criteria’.
      • Add the new title datasource and configure it.
        • Check ‘Expose this sort to visitors, to allow them to change it’.
          IMPORTANT: This is what gives the user the ability to use this sorting option.
        • Add a label for the filter.
        • Set the initial sorting method.
      • Add any additional sorting fields if you added more.
    2. Open the settings for the ‘Exposed form style’ (within the view’s ‘Advanced’ section).
      • Check ‘Allow people to choose the sort order’.
      • Update the labels as you see fit.
    3. Save your view!
      Refresh your catalogue page and you should now see sorting options available in the search block that you added earlier.

      TIP: If you DO NOT see the sorting options, this is a bug and is easily fixed. All you need to do is remove the search block and then re-add it.

      TIP: You can place this block in multiple regions of your site and hide the elements you don’t want to see with CSS. This way you can have a block with the site search and no filters in your header, and then also have another block on your catalog pages that shows the sorting filters but no search field.

Add Facets to the catalogue

The filters we added earlier can only be used one at a time, however, often we want to filter the results based on a number of different options. For example, if I’m browsing an online store looking for shoes of a certain style and size, I don’t want to see everything else the store has to offer. I want to be able to go to a ‘shoe’ category, then pick the ‘type’ of shoe that I’m after, and finally pick the ‘size’ of shoe that’s relevant to me. I want to see all of the results that fit that criteria. Facets lets you use taxonomy (and other datasources) to achieve this.

Let’s add a Facet that uses the taxonomy vocabulary we created in the initial store setup. This will be our main catalogue menu for narrowing down the product results. Each facet that is created creates a block that we can add into any region of our template.

  1. Add a Search API index field for your taxonomy vocabulary. Set the field ‘Type’ as ‘String’.

    TIP: Like we did earlier, I would recommend renaming the label for this field to something like ‘Categories for Facet’.

  2. Reindex your data (from within the index view tab).
  3. Go to the Facets page.
    Admin: Configuration > Search and metadata > Facets
    Admin menu path: /admin/config/search/facets

    You should see a ‘Facet source’ available to use. When we created a View using our index, this is what added the Facet source here. Now that we have a source, we can create Facets to filter it.

  4. Click ‘Add facet’.
    1. Choose the ‘Facet source’ to use.
    2. Select the index ‘Field’ that this Facet will use (i.e. Categories for Facet, or whatever you labelled your field).
    3. Name your Facet (i.e. Categories).
  5. Configure the Facet.
    This will cover the basic settings that you will need. Start with this and then you can always play around with other settings later. Each setting has a pretty good description to help you understand what it does.
    1. Widget.
      Choose a ‘Widget’ for displaying the categories. For categories, I like to use ‘List of checkboxes’.
    2. Facet Settings.
      Check the following:
      • Transform entity ID to label.
      • Hide facet when facet source is not rendered.
      • URL alias as ‘cat’ (or whatever you like).
      • Empty facet behavior as ‘Do not display facet’.
      • Operator as ‘OR’.
      • Use hierarchy.
      • Enable parent when child gets disabled.
      • NOTE: the Facets Pretty Paths module can be used to give you nicer looking URL paths.
    3. Facet Sorting.
      Configure as you see fit. In this example, I would only check the following. These settings make sure that the taxonomy follows the same order that you have set within the vocabulary itself.
      • Sort by taxonomy term weight.
      • Sort order as ‘Ascending’.
    4. Save.
  6. Add the Facet block to your site.
    Admin: Structure > Block layout
    Admin menu path: /admin/structure/block
    1. In your preferred region, click the ‘Place block’ button.
    2. Find the ‘Categories’ facet block (or whatever you named it) and click ‘Place block’.
    3. Configure the block as you see fit.
    4. Save.
  7. Test your Facet!
    You should now see your Facet on the catalog page. Click the checkboxes and test out how it works!

One last thing...

The sites we've been building use the Facets Pretty Paths module for making nicer looking URLs with our catalog and filters. For a while we were plagued with a problem where, when the user selects a Facet category and then uses the sorting options, the Facets would uncheck and reset. This is obviously not good because the user is trying to sort the filtered down items, not the overall catalog. We need to be able to maintain the active facets when using the filters.

Luckily, a coworker came up with this nice little solution that you can apply to your theme's .theme file. You just need to replace YOUR_THEME, YOUR-VIEW (i.e. products-page-1), and YOUR-PATH (i.e. products) in the code below. Ideally, this will be fixed within the module itself soon, but this will work while we wait.

// Add this use statement at the top of your .theme file.
use Drupal\Core\Form\FormStateInterface;

/**
 * Implements hook_form_alter().
 */
function YOUR_THEME_form_alter(&$form, FormStateInterface $form_state, $form_id) {
  // Store - Product Listing view exposed form.
  if ($form['#id'] == 'views-exposed-form-YOUR-VIEW') {
    $current_path = \Drupal::request()->getRequestUri();

    // If the current path is within your catalog, correct the form action path.
    if (strpos($current_path, '/YOUR-PATH') === 0) {
      // Fix for views using facets with pretty paths enabled.
      // Replace the form action with the current path to maintain active facets.
      $form['#action'] = $current_path;
    }
  }
}

Done!

There you have it! You have now created a Search API index using Solr, set up a View to display the results of the index, and implemented three different ways to filter the results (search, sorting and Facets). This is the start of an awesome product catalogue, and you can expand on it with more datasource fields however you want. Cool!


Editor’s note: This article was originally published on July 12, 2018, and has been updated for freshness, accuracy and comprehensiveness.

 
Apr 15 2021
Apr 15
[embedded content]

Don’t forget to subscribe to our YouTube channel to stay up-to-date.

File Management Series

In Drupal, files can be uploaded to the site for users to view or download. This can be easily achieved by creating a file or image field on content types.

In the back end, a list of all the files uploaded can be viewed by the administrator by going to Administration > Content > Files (admin/content/files).

Files uploaded can be easily removed from the individual content pages (see the image below), but removing them entirely from the system is another story. You might be surprised that you cannot find a button, a link or an option to remove these deleted files entirely from the system.

After deleting files on content, if you go to the Files (admin/content/files) page, you will find the deleted files are still there, and the status still shows ‘Permanent’, even though they are already removed from the nodes. It seems very confusing. Removing files from content and removing files from the system are two different things in Drupal.

To remove files from the system, we need to add the file delete function. This can be achieved by installing the File Delete module.

Table of Contents

Getting Started

The File Delete module adds the ability to delete files from within the administration menu.

To download the module run the following Composer command:

composer require drupal/file_delete

How to Configure

After installing the module, we need to add a ‘Delete link’ to the Files page. We can do that by going to Administration > Structure > Views and editing the view for the Files list. By default, the name of the view is ‘Files’:

To add the ‘Delete link’, we need to add a field called ‘Link to delete File’ in this view, as shown in the following image:

Select this field. We can keep the default settings, then click ‘Apply’ and save the view. Next, go to Administration > Content > Files, and we shall see the ‘delete’ link, like the following:

How to Use

With this ‘delete’ link, we can remove files from the system by clicking the link shown in the above diagram. Note that Drupal protects files and will not allow them to be deleted while they are being used in any content. If you try to delete a file which is still being used in a node, you will see the following error message.

The Tricky Part

In some cases, even if you have deleted the files from the nodes, and the files are no longer visible in the nodes, you might still see this error message preventing you from removing the file from the system.

But that might be correct, because these files might still be used in content. The nodes where these files came from might have more than one revision, and files removed from the current revision might still be attached to previous revisions.

When this happens, check out the nodes again for multiple revisions, like the following example:

Only when all attachments (including those in old revisions) are cleared can these files really be removed from the system. So, in summary, when we delete files, we need to make sure that they are not being used in any content, including old revisions.

The File Status

Even though you have successfully removed deleted files from the system using the ‘delete’ link, you will still find the following:

  • The file is still on the list
  • The status of the file is changed from ‘Permanent’ to ‘Temporary’
  • The file is used in 0 places (that means it is not used with any nodes)

The following is an example of how it will look:

But why do these deleted files, now showing ‘Temporary’ status, still appear in the list? Here is why…

In fact, the file removal process has already been initiated. Drupal deletes the file in the following sequence:

  1. Change the status from ‘Permanent’ to ‘Temporary’
  2. Wait for a minimum of 6 hours
  3. Upon the next cron run after 6 hours, the system will actually remove these temporary files

That is why you still see these deleted files in the list within this 6-hour period. This is how Drupal manages temporary files.

Temporary File Management

If you go to Administration > Configuration > Media > File system, you will find a configuration and explanation at the bottom of the page:

Here it says temporary files will be deleted on the next cron run after 6 hours. This ‘6 hours’ setting can be changed, but 6 hours is the minimum. See the following options for the configuration:

With the ‘Media’ module, files can also be added as media documents instead of through a ‘File’ field. Files added as media documents can also be deleted, but the operation is a little different. Again, the system will not allow deleting files that are still being used.

To delete such files added as media entities, pay attention to the following:

  1. Confirm that the media is not being used on the site first.
  2. Go to Administration > Content > Media to the list of media entities, and delete the designated items here first.
  3. Next go to Administration > Content > Files, and then delete the designated files here.

It will go through the 6-hour cycle described above, but we should leave it to the system to handle it from here.

The Files list is within the administration area, accessible by the administrator. If other users require access to this area to delete files, user permissions specific to this module need to be configured.

Using Drush

If you use Drush, you can delete files straight away using the “drush entity:delete” command.

For example:

drush entity:delete file ID

This command will remove the file from the Files page and delete it from the file system, and you won’t have to wait.

Summary

The file delete procedure in Drupal is not that user-friendly. The File Delete module provides the ability to delete files in Drupal. The procedure is a little complicated because files can be used by, and therefore associated with, different entities in the system. For this reason, it is suggested that file deletion be handled by an experienced administrator with the right permissions and access to the administration area.

Apr 14 2021
Apr 14

Companies are breaking free of restrictive proprietary platforms in favour of custom open source solutions. Find out why in this comprehensive article.

Advantages of Open Source Commerce


Ownership of data & technology

If you use an open source commerce platform, you own the code.

You need to look at your website the same way you would view a brick-and-mortar storefront. Paying a monthly licensing fee for a hosted platform is like having a leased property -- you’re only adding to your overhead costs and you have no control over your future. Hosted solutions don’t allow you to own the code, which business owners often don’t think of as a problem until something bad happens. If a hosted solution goes under, they take you down with them. You lose your online presence and have to rebuild from the beginning. That’s a long and expensive process.

If you use an open source commerce platform you own the code. If you work with an agency and choose to move to an in-house development team or a different agency, you can do so at no cost or risk to your business.

Integration with virtually any business system

The code is completely exposed for you to use.

If you judge ecommerce solutions solely on their feature set, hosted solutions like Magento, Shopify, and Volusion will always look good up front. But your ecommerce platform is more than just window dressing. Open source frameworks can have an impressive feature set, but the biggest advantage is the expansive list of back-end systems they can integrate with.

Proprietary platforms can offer standard integrations with customer relationship management (CRM) systems and fulfillment solutions, but if you’re a big retailer, you may find you need a higher degree of integration for your sales process.

Open source platforms are exactly that. Open. The code is completely exposed for your use. Combine this with the modular architecture and you have a platform with the ability to integrate with virtually any business system that allows access, from CRMs and shipping vendors to payment gateways and accounting software. Your ecommerce site can become an automated business rather than just a storefront.

A custom user experience

A custom user experience gives more power to the marketer.

When it comes to user experience, hosted platforms give you a best-practice, industry-standard, cookie-cutter execution of a shopping cart. It’s a templated design that is sold as a finished product, so you know you’ll have a catalogue, a simple check-out, account pages, etc. Outside of choosing a theme, there is very little room for customization. Open source allows for all the same functionality plus a powerful theme system that allows you to add unique and advanced features very easily.

A custom user experience gives more power to the marketer, allowing them to create custom conversion paths for different user types and integrate those paths within the content experience. You can generate personalized content based on customer data and/or provide different content to users based on their geographic location.

Open source commerce is also ideal for omnichannel selling. The consumer path is seamless across all sales channels, devices, websites and retail tools throughout the entire customer experience. You can set up content, layout and functionality to respond to the device being used, such as smartphones and tablets.

The omnichannel experience & a single source of data

Open source platforms use a single data source, which makes them optimal for creating omnichannel strategies.

Today’s ecommerce landscape is rapidly evolving. It’s no longer just about selling products online. Companies are expected to create immersive shopping experiences for users. The advances in mobile technology have given consumers constant and instant access to information. They expect their favourite brands to be able to deliver an integrated shopping experience across all channels and devices complete with personalized content, consistent product information, and simple conversion paths. This is not an easy task. For retailers that sell through both online and in-store channels, the challenge is even greater.

Open source platforms use a single data source, which makes them optimal for creating omnichannel strategies. Rather than having to force together multiple platforms that pull data from various systems, open source allows for one centralized data centre that can be accessed by any of the systems you need.

What does this mean exactly?

Customer data, product details, promotions & sales information, inventory numbers and more can all be easily defined and streamlined across multiple channels. For example:

  • Your customers can start a purchase online and then pick up where they left off in your store. 
  • Customer data can be accessed easily for automated marketing; loyalty programs, birthday “gifts”, personalized recommendations.
  • If your products are sold on your ecommerce store as well as third party marketplaces, your product info is always consistent without having to apply multiple updates on various backends.
  • Easily define and promote location-based pricing and offers.
  • Real-time inventory numbers can be shown online to ensure product availability and minimize the risk of back-orders.
  • Tax & shipping rules can be defined per city, state, country to ensure all customers are shown the correct cost of items at checkout.

A flexible platform that aligns with your needs

Exceed the boundaries of a traditional sales platform.

Any ecommerce platform today needs the ability to adapt. If your platform is locked down, you risk losing to your competitors. Hosted ecommerce solutions are just shopping carts with conventional catalogue management and the ability to sell physical and/or digital products.

Open source commerce releases you from these industry-standard restraints. Organize your products using virtually any attribute. Display your products in any style, from lists, grids, or tables to a customized look that can make you stand out from your in-the-box competition. Integrate features that go beyond commerce, such as custom applications, web services, and customer portals. Exceed the boundaries of a traditional sales platform.

Don’t be tied to someone else’s development path. By leveraging an open source platform, you allow yourself to be the frontrunner in your market.

No licensing fees, revenue sharing or mandatory support contracts.

Open source commerce is free to use.

Anyone with the appropriate development skills can pick up an open source framework and begin working with it immediately at no charge. If you require development help, you will need to pay a contractor or agency, and depending on your needs, these upfront costs can seem like a hefty investment. However, after the upfront development, there are no mandatory ongoing costs associated with open source.

If you are using a SaaS or proprietary platform, start-up costs are minimal, but most of these platforms have various ongoing costs.

  • Monthly contracts — SaaS platforms will charge you a monthly fee to use their platform; in addition to this fee, you may have to pay for additional functionality, integrations, and/or support.
  • Licensing fees — The big enterprise platforms (Demandware, Hybris, Magento) charge a yearly license fee to use their software platforms. These fees can range from $50,000 - $700,000 per year.
  • Revenue sharing — SaaS and proprietary platforms will often require a revenue share contract to supplement their low monthly fee. A typical revenue share agreement is a 2% transaction fee. Depending on your yearly gross revenue, this can be a major blow.

Thousands of supporters and continued development

Open source platforms are pushed forward by thousands of developers and supporters worldwide; agencies, contractors, & enthusiasts all have a shared goal of bettering their software and creating an accessible and stable platform. Proprietary systems simply can’t compete with a workforce this large or this focused. Open source evolves at the pace of the web. By leveraging this type of platform, you can be a front-runner in your market. Often before a retailer even knows it needs a specific new integration or piece of functionality, someone is already building it.

Drupal Commerce & Acro Media

Drupal Commerce is the powerful ecommerce software that extends from the open source Drupal platform. Drupal Commerce was built onto the content management system using the same architecture, allowing for a true marriage of content and commerce. It is a truly unrestricted platform that provides both structure and flexibility.

Acro Media is the leading Drupal Commerce agency in North America. We work exclusively with Drupal and Drupal Commerce, and currently develop and support one of the biggest Drupal Commerce websites in the world. Our Drupal services include:

  • Drupal Commerce
  • Drupal consultation and architecture
  • Drupal visualizations and modelling
  • Drupal integrations to replace or work with existing platforms
  • Drupal website migrations (rescues) from other web platforms
  • Custom Drupal modules

Are you ready to escape?

Break free from the proprietary platforms and legacy software you’re handcuffed to and create the commerce experience you want. Open source commerce gives the power to the business owner to create a commerce experience that meets the ever-changing conditions of your marketplace as well as the complexities of your inner company workings.

Next steps

Want to learn more about open source, Drupal Commerce, or Acro Media? Book some time with one of our business developers for an open conversation to answer any questions and provide additional insight. Our team members are here to help provide you with the best possible solution, no sales tricks. We just want to help, if we can.


Apr 14 2021
Apr 14

This month I gave a talk at the South Carolina Drupal User Group on Getting Started with Electron. Electron allows you to use your web developer skills to create desktop applications. I based this talk on some of my recent side projects and the Electron Project Starter I posted at the end of last year.

[embedded content]

If you would like to join us, please check out our upcoming events on Meetup for meeting times, locations, and remote connection information.

We frequently use these presentations to practice new presentations, try out heavily revised versions, and test out new ideas with a friendly audience. So if some of the content of these videos seems a bit rough, please understand that we are all learning all the time and we are open to constructive feedback. If you want to see a polished version, check out our group members’ talks at camps and cons.

If you are interested in giving a practice talk, leave me a comment here, contact me through Drupal.org, or find me on Drupal Slack. We’re excited to hear new voices and ideas. We want to support the community, and that means you.

Apr 14 2021
hw
Apr 14

It’s spring and I decided to come out to a park to work and write today’s post. I sat on a bench and logged in to my WordPress site to start writing the post when I noticed that one of the plugins had updates available. I didn’t have to think about this and straightaway hit the update button. Less than 30 seconds later, the plugin was updated, the red bubble had disappeared, and I had my idea of today’s post. That is why I want to talk about automatic updates on Drupal today.

Now, automatic updates have been in WordPress for quite some time. In fact, I don’t remember when I last updated a WordPress site manually, even though I have been running this site for years. The closest thing to this we have in Drupal is the “Install new module” functionality. As the name says, it can only be used for installing a new module, not updating existing ones. There is an initiative for Drupal 10 to bring automatic updates to Drupal core (only security releases and patch upgrades for now).

No automatic updates in 2021?!

It may seem strange that we don’t have automatic updates in a product aspiring to be consumer-grade in this age. The reason is that this problem is hard to get right and notoriously dangerous if it goes wrong. Also, Drupal is used on some of the largest and most sensitive websites in the industry. The environments where these websites are hosted do not support any regular means of applying updates. Finally, Drupal tends to be used by medium to large teams who use automation in their development workflow. The teams would rightly prefer the automation in their toolchain to keep their software updated.

For example, at Axelerant we use Renovate for setting up our automatic updates for all dependencies (even npm and more). In fact, our quick-start tool comes with a configuration file for Renovate. With our CI and automation culture, we would not like Drupal to update itself without going through our tests.
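
As a rough sketch, a renovate.json for a Drupal project might look something like the following; the rule grouping contrib updates together is an assumption about how a team may want updates batched:

{
  "extends": ["config:base"],
  "packageRules": [
    {
      "matchPackagePatterns": ["^drupal/"],
      "groupName": "drupal contrib"
    }
  ]
}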

This is not applicable to most users of Drupal, and having some support for automatic updates is important. But considering the challenges in getting it right and the lack of incentive for heavy users of Drupal, this feature was not prioritized. It’s about time there is community focus on it.

What’s coming in automatic updates?

The scope of the automatic updates initiative is listed on its landing page. It is currently limited to supporting only patch releases and security updates for Drupal core, not even contrib modules. While this is far from making Drupal site maintenance hands-off, it is a step in the right direction. Of course, once this works well for Drupal core, it would be easy to bring it to contrib projects as well. More importantly, it would be safer, as any problems would be easier to find with a smaller impact surface area.

In my mind, these are the things that the automatic updates will have to consider or deal with. It’s important for me to note that I am not involved in the effort. I am sure the contributors to this initiative have already considered and planned for these issues and maybe a lot more. My intention is to portray the complexity of getting this right.

Security

This is one of the most critical factors in an automatic update system. Internet is a scary place and you can’t trust anyone. So, how can a website trust a file that it downloads from the Internet and then replace itself with its contents? This is the less tricky part and it can be solved reasonably well with a combination of certificates and file checksums.
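
A minimal sketch of the checksum half of that idea, assuming the update archive and its published SHA-256 digest have already been downloaded (the file names are hypothetical):

// Compare the downloaded archive against its published SHA-256 digest.
$expected = trim(file_get_contents('drupal-core-update.tar.gz.sha256'));
$actual = hash_file('sha256', 'drupal-core-update.tar.gz');
if (!hash_equals($expected, $actual)) {
  // Refuse to apply an update that fails verification.
  throw new \RuntimeException('Checksum mismatch; refusing to apply update.');
}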

The website software runs as a user on the server, ideally one with just enough privileges. In hardened setups, this user doesn’t have permission to write to the directory where the software is installed. This is needed because if somebody is able to perform a remote code exploit, they could write to that location and install backdoors. But in such a configuration, the website cannot write to its own location either. I don’t think such setups are in the scope of automatic updates anyway, as such teams would have their own automation toolchain for keeping software updated.

Developer workflows

As I mentioned before, Drupal websites are typically built by large teams who have automation set up. Such teams typically have their own conventions as to how upgrades are committed and tested. This is why tools such as Renovate can get so complicated. There are a lot of conventions for an automatic updates system to deal with, and as far as I know, most just ignore them. For example, I don’t know if WordPress really cares about the git repository at all.

Integrity

Once the updates are downloaded, they still have to be extracted and placed in their correct locations. What if, due to a permission error, a certain file cannot be overwritten? Such issues can leave the website entirely unusable. The automatic updates code should be able to check for such issues before beginning the operation. Even then, there's still a chance of an error midway, and the code should be able to recover from that. I'm sure there are a lot more scenarios here than what I am imagining.
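A pre-flight check could be as simple as this sketch (assuming GNU find and a Composer-style layout with core under web/core):

# Refuse to start if any core file is not writable by the user running the update.
if find web/core ! -writable | grep -q .; then
  echo "Some core files are not writable; aborting before touching anything." >&2
  exit 1
fi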

The initiative

This is a wider problem than just Drupal and we don't have to come up with all the answers ourselves. There is an ongoing effort to address these problems generally in a framework called The Update Framework (TUF). More about this is being discussed in the contrib module automatic_updates, and the plan is to bring this into core when ready. Follow the module's issue queue or the #autoupdates channel on Drupal Slack.

Apr 13 2021
hw
Apr 13

I have been setting up computers and configuring web servers for a long time now. I started my computing journey by building computers and setting up operating systems for others. Soon, I started configuring servers, first using shared hosting and then dedicated servers. As virtualization became mainstream, I started configuring cloud instances to run websites. At a certain point, when I was maintaining several projects (some seasonal), it became harder to remember how exactly I had configured a particular server when I needed to upgrade it or set it up again. That is why I have been interested in Infrastructure as Code (IaC) for a long time.

You might say that it is easier to do this by just documenting the server details and all the configuration for each project. Sure, it is great if you can manage to keep the documentation updated as the software evolves and requirements change. Realistically, that doesn’t happen. Instead, if you start with the perspective that you are going to only configure servers with code, never manually, you are forced to code all the changes you want to make.

Infrastructure as Code

So, what does IaC look like? There are several tools out there and each has its own conventions. Broadly speaking, there are two types of code you would write for IaC: declarative or imperative. If you are a programmer, you are already familiar with the imperative style of programming. This is essentially the style of almost all programming languages out there. In these languages, you would write line-by-line instructions to tell the computer exactly what to do and how to do it. Consider this shell script used for creating an instance on DigitalOcean.

#!/usr/bin/env bash
 
read -p "Are you sure you want to create a new droplet? " -n 1 -r
if [[ $REPLY =~ ^[Yy]$ ]]
then
    doctl compute droplet create --image ubuntu-20-04-x64 --size s-1vcpu-1gb --region tor1 ps5-stock-checker --context personal
    echo "Waiting to create..."
    sleep 5
    doctl compute droplet list --context personal
fi

Here, we are running a sequential set of instructions to create a droplet and verify that it got created. We are also confirming this with the user before actually creating the droplet. This is a very simple example but you could expand it to create whatever style of infrastructure you need, albeit not easily.

The declarative style of programming

Most IaC tools support some form of declarative syntax which lets you define what infrastructure you need rather than how to create it. Here is what the same example would look like in Terraform.

resource "digitalocean_droplet" "web" {
  image  = "ubuntu-20-04-x64"
  name   = "ps5-stock-checker"
  region = "tor1"
  size   = "s-1vcpu-1gb"
}

As you can see, this example is easier to read. Moreover, you'll find that it becomes easier to reason about as the infrastructure gets complex. My personal preference is to use Terraform, but whatever tool you use would have a similar structure. It is the tool's job to figure out how exactly to realize this infrastructure. It can create the infrastructure from scratch, of course, but it can also track changes and make only those changes required to bring the infrastructure in line with your definition.
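In practice, that loop looks something like this:

terraform init    # one-time: download the providers the module needs
terraform plan    # show the diff between your definition and what actually exists
terraform apply   # make only the changes needed to match the definition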

Where is the simple in this?

You might think this is overkill and I can understand that sentiment. After all, I thought the same, but I have found it useful for projects both large and small. In fact, I find it more useful for simpler and lower-budget projects than for those with much larger budgets. At least as far as Drupal is concerned, projects with larger budgets use one of the PaaS providers. There are several providers such as Acquia, Pantheon, platform.sh, and others that do a great job at Drupal-specific hosting. They are not extremely expensive either but, of course, they can't be as cheap as IaaS providers such as AWS or DigitalOcean.

So, it may not be simple yet, but we can get there. On the projects that I am going to self-host, I add a directory called "infra" with Terraform modules and an Ansible playbook. To make it findable, I have put it up on Github at hussainweb/lamp-ansible-terraform. There's no documentation, unfortunately, but I hope to put up something soon. Meanwhile, this blog can serve as an informal introduction to the repository.

My workflow

When I want to start a new project that I know won’t be on one of the PaaS providers, I copy the repository above into my project and start editing the config files I need. Broadly, the repository contains Terraform modules to provision a server (or two) to run Drupal and the actual configuration of the server happens through Ansible. As of right now, there are modules for AWS and Azure to provision the servers. The one for AWS supports setting up instances with security groups configured. You can optionally set up a separate instance for a Database server as well. You can find all the options that the module supports in the variables.tf file.
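Overriding those options happens at apply time. The variable names below are made up for illustration; variables.tf in the repo is the authoritative list.

# Hypothetical variable names; check variables.tf for the real ones.
terraform apply \
  -var 'instance_type=t3.small' \
  -var 'create_db_instance=true'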

On the other hand, the module for Azure is simpler and only supports setting up a single server to run both web and database server. You can take a look at its variables.tf file to see what is exposed (TL;DR, just the instance size). I built these modules on a need basis and didn’t try to maintain feature parity.

Depending on what I want to use, I initialize the relevant Terraform module (terraform init) and provision the infrastructure. For small projects, I won't worry about a remote state backend; I just keep the state on my machine and back it up along with my site's data. It's a long process but it works, and I haven't needed to simplify it yet. At the end of this, I get the IP address(es) of the instance(s).

Sometimes, I need to set up different servers for a staging environment, for example. For this, I just provision another server in a different Terraform workspace. The module itself does not support multiple environments and does not need to.
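The commands for that are short. Each workspace keeps its own state, so the same module can provision staging and production independently:

terraform workspace new staging      # first time only
terraform workspace select staging
terraform apply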

Configuring the instance

Now that I have the IP address(es), I can set up Ansible. I edit the relevant inventory files (for dev or for production) and set up relevant variables in various yml files. Out of these, I absolutely have to change the app.yml file to set my project’s repository URL. I can optionally also change the PHP version, configure Redis, set up SSH keys (edit this one if you want to use the repo), etc. Once all this is done, I can run ansible-playbook to execute the playbook.
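The invocation itself is a single command; the inventory path and playbook name here are assumptions based on the layout described above, so adjust them to the repo:

ansible-playbook -i inventories/production playbook.yml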

I realize this repo is hardly usable without documentation. So far, it's just a bunch of scripts I have cobbled together to help me with some of my projects. In time, I do want to improve the documentation and add more resources. This also intersects with my efforts in another direction to set up remote development instances (not for hosting, but for live development). It's called Yakht (after yacht, as I wanted an ocean-related metaphor). I am looking forward to working on that too, but that has to be a separate blog post.

Apr 12 2021
hw
Apr 12

Here’s a quick post to show how we can run Drupal in a CI environment easily so that we can test the site. Regardless of how you choose to run the tests (e.g. PHPUnit, Behat, etc), you still need to run the site somewhere. It is not a great idea to test on an actual environment (unless it is isolated and designated for testing). You need to set up a temporary environment just for the CI pipeline where you run the tests and then tear it down.

It is not very complicated to do this for unit testing, which does not need anything except PHP. But when you need to write a functional test and run it as part of CI, you need everything: a web server, PHP, a database, and maybe more. Since CI pipelines are transient (as they should be), each run gets a completely new environment. This means that you have to somehow set up the environment for testing.

Continuous Integration pipelines

Many CI systems have a concept of runners (or nodes) which can be preconfigured to run any software you want. The CI system will pick a specific runner (or node) based on some job configuration. Gitlab CI, for example, selects the runner based on tags defined on the job. A job tagged as "docker" may be configured to run on a Docker host (essentially within a Docker container). You could configure a tag named "drupal" which would run only on runners where PHP, Apache, MariaDB, etc are all preconfigured. Your job just needs to load a database and run the tests.

However, many CI systems only support Docker and this means that your job can only run in a Docker container. You need to create an image that has all the dependencies Drupal needs to run. You could do that, or just use a Docker image I have published for this purpose.

Running Drupal in Docker

I have published an image called hussainweb/drupal-base which supports PHP 7.3, 7.4, and 8.0. The images are tagged respectively as “php7.3”, “php7.4”, and “php8.0”. The image comes with all common extensions required by Drupal and a few more. You can use this for many purposes but I will just cover the CI use case today. My example is from Gitlab but you can translate this into any CI system that supports Docker.
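Before we get to the CI config, if you want to kick the tires locally, something like this should work:

docker pull hussainweb/drupal-base:php7.4
docker run --rm hussainweb/drupal-base:php7.4 php -v   # sanity-check the PHP version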

drupal_tests:
  image: hussainweb/drupal-base:php7.4
  services:
    - name: registry.gitorious.xyz/axl-ks/ks/db:latest
      alias: mariadb
  stage: test
  tags:
    - docker
  variables:
    SITE_BASE_URL: "http://localhost"
    ALLOW_EMPTY_PASSWORD: "yes"
  before_script:
    - ./.gitlab/ci.sh

  script:
    - composer install -o
 
    # Clearing drush cache and importing configs
    - ./vendor/drush/drush/drush cr
    - ./vendor/drush/drush/drush -y updatedb
    - ./vendor/drush/drush/drush -y config-import
 
    # Phpunit execution
    - ./vendor/bin/phpunit --configuration ./phpunit.xml --testsuite unit
    - ./vendor/bin/phpunit --configuration ./phpunit.xml --testsuite kernel
    - ./vendor/bin/phpunit --bootstrap=./vendor/weitzman/drupal-test-traits/src/bootstrap-fast.php --configuration ./phpunit.xml --testsuite existing-site

Ignore the “services” part for now. It lets Gitlab load more Docker images as services, and we can use it to run a database server. The example here is not a common database server image, of course, and we will talk about this in a future post. Let’s also ignore the “variables” part because these are just environment variables used by the system (they are not specific to the image).

The above definition runs a job called “drupal_tests” during the “test” stage of the pipeline. It loads the PHP 7.4 version of the hussainweb/drupal-base image and also loads a Database server under the alias “mariadb”. Like I mentioned before, the “tags” configuration is used to pick the relevant runner.

The “before_script” and “script” sections contain the commands that are run for the test. In “before_script”, we run some common setup to point settings.php to the database host and to set the document root for Apache to match Gitlab runners. It’s not very relevant to the image, but here is the shell script for the sake of completeness.

#!/usr/bin/env bash
 
dir=$(dirname $0)
 
set -ex
 
cp ${dir}/settings.local.php ${dir}/../web/sites/default/settings.local.php
 
sed -ri -e "s!/var/www/html/web!$CI_PROJECT_DIR/web!g" /etc/apache2/sites-available/*.conf
sed -ri -e "s!/var/www/html/web!$CI_PROJECT_DIR/web!g" /etc/apache2/apache2.conf /etc/apache2/conf-available/*.conf
 
service apache2 start

The actual test execution happens in the “script” section. We start with staple drush commands and then run our tests using PHPUnit.

Docker image

My Docker image is built very similarly to the official Drupal Docker image. The only difference is that I don’t copy the Drupal files into the image because, for my purposes, the Drupal code will always be outside the image. This setup also allows you to efficiently package your Drupal application in a Docker image: simply create your application’s Dockerfile based on mine and copy your Drupal files to the correct location, as sketched below. But that’s not the subject of this post. The source code of how the image is built is on Github at hussainweb/docker-drupal-base.
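Here is a rough sketch of that idea. The image name is a placeholder, and the copy destination assumes the /var/www/html layout used in the CI script above:

# Build an application image on top of the base image.
cat > Dockerfile <<'EOF'
FROM hussainweb/drupal-base:php7.4
# Copy the Drupal codebase to the location Apache serves in the base image.
COPY . /var/www/html
EOF
docker build -t myorg/mysite:latest .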

I’ll end it here today and I hope you find this post useful. Do let me know in what other ways you might find this image useful.

Apr 11 2021
hw
Apr 11

Today, I want to share my thoughts from a book passage related to Drupal. The book, Everyday Chaos by David Weinberger, is largely about how chaos is the new reality in today’s machine-learning-driven world. In this book, Drupal is discussed in the chapter on strategy and possibility where it is contrasted with more traditional methods of product development and organizational vision. The book is amazing and insightful, and the section on Drupal was a welcome surprise.

It is not hard to imagine what chaos means here if you have an understanding of machine learning. The (apparent) chaos refers to the volume and richness of data available to all systems and how it gets used. Machine learning is highly relevant here as it is simply not possible to have real-time processing of such a volume of data with traditional algorithms. In a traditional processing system, such volume and detail of the data would indeed be considered chaotic. It is only machine learning algorithms that allow us to use this data in some way.

Distributing Strategy

But this post is not about machine learning. It is more about how Drupal embraces unpredictability and chaos while still maintaining a strategy, though not in the traditional sense. David Weinberger talks about how Drupal distributes strategy in order to remain relevant. At this point, I should note that the book recounts the DriesNote from DrupalCon 2017, when the strategy was largely community-driven. Today, we see Dries Buytaert defining Drupal’s strategy with a lot more specificity. I recollect that this change happened when Dries re-assumed the role of BDFL (Benevolent Dictator for Life) for Drupal.

Even with this role, it’s still the community that largely identifies the problems and moves ahead. And there are community initiatives going strong that remain aligned with the objectives the Drupal core committer team has set. You can see this in the current and previous initiatives defined. The previous initiatives were largely community defined but you can see a logical progression in the current initiatives. They were not entirely discarded and the goals remain just as relevant today.

Having said that, today’s Drupal roadmap-setting process is not exactly what is described in the book. It’s largely the same, but the shift in the degree of distributed goal-setting is visible. It’s an interesting shift and could be fodder for panic, so let’s talk about that.

Abundance

There is a line in the passage I absolutely love.

The Drupal ecosystem is one of abundance.

Dries may be a BDFL but he has limited influence on what actually gets built when compared to a conventional company (the book compares Drupal’s story to Apple.) And the book goes on to explain why it works. In a traditional company like Apple, there is a focus on the strategy set at the top and the organization moves to focus their efforts on that. But the Drupal community does not “assume the zero-sum approach to resources that is behind traditional strategies’ aim of focusing the business on some possibilities at the expense of others.” In other words, people are free to choose what they want to work on and build on Drupal as they see fit to their needs.

But that doesn’t necessarily imply abundance. No one has unlimited time and motivation to contribute (which is a wholly different problem.) This brings us to the next section.

Community

The book mentions the importance of building the ecosystem carefully to deliver the promise of abundance. The book calls it an ecosystem, but we know it as the Drupal community. The Drupal community is one of the best-run open-source communities in the world, and that is not an accident. Over the years, many people have toiled to live the values, spoken up to shape that environment for others, and grown with the community to sustain those values. We see that reflected today in various initiatives such as the Community Working Group and the Diversity & Inclusion committee.

The book quotes Lisa Welchman’s keynote at DrupalCon Prague 2013 on the subject of the growth of an open organization. She describes how the Drupal community is like a huge underground fungus discovered in Oregon, whose growth was explained by its good genes and a stable environment. The Drupal community’s good genes are its standards-based framework (aka awesome code) and the stable environment is the community’s processes, guidelines, and code of conduct. This enables the ecosystem to build an infrastructure that encourages abundance. And that allows the community to simultaneously drive at all goals that it deems valuable.

In other words, the Drupal community is awesome at building Drupal. And you come for that code and stay for that community.

Apr 10 2021
hw
Apr 10

Today’s DrupalFest post is on the lighter side. I am just going to talk about some of the podcasts I listen to related to Drupal, PHP, and software development in general. I’ll try to cover all the Drupal podcasts I know about. Let me know in the comments if I have missed something. As for others, I am just listing those I listen to. I don’t intend it to be a complete list.

Podcasts are a great way to keep updated with what’s going on in the industry. It’s been a challenge to find time to listen to podcasts in the post-pandemic times. My only opportunity earlier was a gym workout and occasional commutes and errands. Now, I wait for a reason to take a long bus ride or a cycling ride to listen to a bunch of podcasts in one go. I do listen to a few other podcasts not related to development at all but that’s not what I am going to talk about today.

To listen to the podcasts, I use Podcast Addict on Android. I am happy with this app, but I am looking for something that can sync across devices or the web. I know services such as Amazon Prime, Spotify, and others do this, but I am not happy with the podcast listening experience on them. The biggest problem I face is the lack of granular speed control. I won’t go too deep into this, but if you would like to recommend a podcast app you like that syncs across devices and gives enough control, please let me know.

Drupal podcasts

Let’s start with Drupal podcasts in no particular order. Most of these podcasts have been active recently, some more than others.

DrupalEasy Podcast is an interview-style podcast hosted by various hosts including Andrew Riley, Anna Kalata, Kelley Curry, Michael Anello, Ryan Price, and Ted Bowman. The duration varies greatly, from as low as 30 minutes to over an hour and a half. My preferred listening speed is 2.6x for this podcast. Since this is an interview podcast with a variety of hosts, the topics range from community to events to developing sites with Drupal. As of this writing, the last episode was about 2 months back.

Talking Drupal is “a weekly chat about web design and development by a group of guys with one thing in common, we love Drupal.” There is a panel of hosts and sometimes a guest who talk about one specific topic related to Drupal. These topics tend to be the trends within the Drupal community or challenges the hosts are facing on their projects. The conversation is casual and highly organic. Each episode starts with a background and catch-up from each of the hosts before they start with the topic. Each episode tends to be about an hour long and I listen to this at 2.5x speed.

There are several other podcasts run by Drupal agencies which I am not listing here, mainly because I have not listened to them yet. These include the Lullabot Podcast, Acquia Podcast, and a few others that have not been updated in some time. While not strictly related to Drupal, Axelerant has a podcast called Humans Behind DX, which is an interview-style podcast featuring leaders in Drupal agencies and other companies using Drupal.

PHP related podcasts

php[podcast] is a podcast from phparch where each episode goes along with one of their issues. Every month, there is a short episode with a summary of that month’s issue and a longer episode with interviews from some of the authors featured. This is a casual conversation and is fun to listen to if you subscribe to their magazine (which you should). I listen to this podcast at 2.5x.

Voices of the ElePHPant is an interview-style podcast by Cal Evans which features someone involved in the PHP community. The episodes are around 20 minutes each and I listen to it at 2.2x. I especially love the line “… and PHP is of course spelled E-L-E… P-H-P… A-N-T.”

Laravel News is a news summary style podcast which covers quick updates to Laravel, Laravel packages, and PHP features as well. Each episode is about 30-40 minutes long and I listen to it at 2.3x. Each episode also contains bookmarks, which allow me to jump to a specific topic I want to hear about. This is a cool podcast for quick updates and a summary of everything related to Laravel and PHP.

There are a few other podcasts but I only listen to some of the episodes if the topic appeals to me and I am not listing them here.

Software engineering related podcasts

Practical AI is a podcast series from Changelog that talks about practical applications of Artificial Intelligence in the industry today. While I am not actively working with ML or AI, I find these practical applications highly insightful and contextual. Each episode is around an hour long and I listen to it at 2.2x.

Soft Skills Engineering is an extremely funny Q&A style podcast with two hosts who take two questions in every episode and answer them with both great insight and witty humour. If any podcast can guarantee laughs, it’s this one. Be warned: people might think you are weird if you listen to this podcast while working out, commuting, or doing some such activity and start laughing randomly. Each episode is 20-30 minutes and I listen to it at 2.5x.

Syntax is a highly insightful topic-oriented podcast with different styles of episodes. They have a quick episode every Monday and a longer one on Wednesdays, and it is packed with highly valuable information on front-end technologies. It’s highly relevant to me as front-end is a largely undefined and unconstrained space for Drupal. Depending on the style of the episode, each is either 20 minutes or over an hour long. I listen to this at 2.5x.

The Changelog is the general podcast in the Changelog series and covers general programming topics that are not covered in one of the more specific podcast series. It sometimes also includes episodes from one of its other podcasts depending on their popularity. These podcasts go deep in the topic and are highly insightful. Each episode is around an hour or more and I listen to it at 2.8x.

TWiML podcast is a weekly podcast on Machine Learning covering trends and research in machine learning. Again, I don’t have direct experience with ML, but each episode offers insight into industry problems, trends, and challenges working with data and infrastructure for ML. Each episode varies in duration from 20 minutes to over an hour. I listen to this at 2.3x.

Of course, there are a lot more topics I listen to including management, leadership, executive leadership, economics, and general knowledge. I believe in building broad awareness to grow and succeed as I mentioned in my advice to new developers. I hope you find this list useful in the same way. If you would like to suggest a podcast, please leave a comment.

Apr 09 2021
Apr 09
Pierce Lamb

This blog covers how to set up and use Personalized Paragraphs. If you’re looking for how Personalized Paragraphs was built, check out Personalized Paragraphs: Porting Smart Content to Paragraphs for Drupal. If you have any questions you can find me @plamb on the Drupal slack chat, there is also a #personalized_paragraphs channel.

You’ve visited the Personalized Paragraphs Module Page and downloaded the module. In doing that, composer should have also downloaded Smart Content, Entity Usage, and Paragraphs. You can verify this by visiting /admin/modules and checking that the modules are there. If they are not enabled, make sure Entity Usage, Paragraphs/Paragraphs Library, Smart Content, Smart Content Browser and Personalized Paragraphs are enabled. Now what?

Segment Sets

The entry point for using Smart Content is the Segment Set. Segment Sets define how you want to segment traffic via given conditions. We’ll be using the conditions out of the Smart Content Browser module as an example for this blog. As such, imagine that you want to segment traffic based on browser width. For your first segment, perhaps you want to segment visitors based on if their browser is greater than 1024px wide, or less than 1024px (I know this is a silly example, but it is nice for basic understanding). So based on the 1024px breakpoint, you want users above this width to see a certain experience and users below it to see a different one. We’ll define this in a Segment Set.

  • Visit /admin/structure/smart_content_segment_set
  • Click ‘Add Global Segment Set’.
  • Give your segment set a label like ‘Browser Width Segments.’
  • Click ‘Add Segment’
  • Give your segment a title like ‘less-than-1024px’
  • In the condition dropdown look for the ‘Browser’ header and select ‘Width’
  • Click ‘Add Condition’
  • Set the width to ‘less than’, ‘1024’px
  • Click ‘Add Segment’
  • Give your segment a title like ‘greater-than-1024px’
  • In the condition dropdown look for the ‘Browser’ header and select ‘Width’
  • Click ‘Add Condition’
  • Set the width to ‘greater than’, ‘1024’px
  • Click ‘Add Segment’
  • In the label area, add ‘Default Segment’
  • Under ‘Common’ select the condition ‘True’
  • Check ‘Set as default segment’
  • Save

With Browser Width Segments in place, we now have a way of segmenting traffic based on the width of the user’s browser (I recognize we only needed the less-than-1024px and default segments here, but it helps for learning to show all conditions). Users with browsers less than 1024px wide will see one piece of content and users with wider browsers will see a different piece. We added the default segment for clarity; in a situation where a user does not match any segment, it will display.

Personalized Content

Now we have to decide what content we’d like to personalize based on browser width. Let’s say we have a content type called ‘Homepage’ and we want to personalize the banner area of our homepage. Ideally we would use a paragraph to represent the banner and display different banners based on browser width. The first thing we want to do is create the paragraph that will be our banner.

  • Open the homepage node’s structure page (or whichever node you’re personalizing)
  • Take note of the fields it currently contains that constitute the banner
  • In another tab or window, visit /admin/structure/paragraphs_type/add
  • Give it a name like ‘Personalization — Homepage Banner’
  • Make sure to check the box that says ‘Allow adding to library.’ If this box is missing, Paragraphs Library likely is not enabled; check /admin/modules to see if it’s enabled
  • Inside your new Paragraph, re-create each field that represents the banner from the tab you have open on the homepage.
  • I typically add a boolean field to check if the paragraph is a personalized paragraph and
  • I typically add a campaign ID text field to add a campaign to the paragraph which can be pushed into the dataLayer
  • Neither of these are necessary for this tutorial, but may be to you in the future
  • Save your paragraph
  • Now visit /admin/content/paragraphs (added via Paragraphs Library)
  • Click ‘Add Library Item’
  • Add a label like ‘Homepage Banner — less-than-1024px’
  • Click the paragraph dropdown and select ‘Personalization — Homepage Banner’
  • Fill in the fields
  • Save
  • Click ‘Add Library Item’
  • Add a label like ‘Homepage Banner — greater-than-1024px’
  • Click the paragraph dropdown and select ‘Personalization — Homepage Banner’
  • Fill in the fields
  • Save
  • Do this one more time, but for ‘Homepage Banner — Default Banner’

If you’ve completed these steps, you now have 3 personalized banners that match the 3 segments we created above. Okay, so we have our segment set and our personalized content, now what?

Personalized Paragraphs

The next step is adding a Personalized Paragraph to the node you’re personalizing. In order to do that, we:

  • Add a new field to our homepage node.
  • Select the Add a New Field dropdown, find the ‘Reference Revisions’ header and select ‘Paragraph’
  • Give it a name like ‘Personalized Banner’
  • Save
  • On the ‘Field Settings’ page, under ‘allowed number of values’ select ‘Limited’ — 1 (this may change in the future)
  • Now, on the ‘Edit’ page for this new field, under the ‘Reference Type’ fieldset find and select ‘Personalized Paragraph’
  • Save

If you’ve completed all of these steps (note that you may need to flush caches), you can now load an instance of your homepage node and click ‘edit’, or add a new homepage node, and you should see the Personalized Paragraph form on the page (if not, click the ‘Add Personalized Paragraph’ button):

The first step is to give your personalized paragraph a name that is unique within the page, like ‘personalized_homepage_banner.’ It is possible to continue while leaving this field blank (and you can change it later); it is used only for identifying a personalized paragraph in front-end code. Next, we find our segment set, ‘Browser Width Segments’, in the segment set dropdown and press ‘Select Segment Set.’ After it finishes loading, we should see 3 reactions which match the 3 segments we created earlier. In the paragraph dropdowns, we select the respective paragraph we made for each segment, e.g. ‘Homepage Banner — less-than-1024px’ will go in the segment titled ‘SEGMENT LESS-THAN-1024PX.’ After saving, we are redirected to viewing the saved page.

You may or may not see any change to your page at this point; whether the content displays depends on your template file for this page. One way to confirm it is on the page is to search the HTML for the machine name of the paragraph type used by Paragraphs Library; in our example, that would be something like paragraph--type--personalization-homepage-banner, based on the paragraph type’s machine name. This string should be found in the classes of an element in the HTML. The key, however, is that you now have access to the winning paragraph in your template file. Here’s an example of how we might display field_personalized_banner in a template file:

{% if content.field_personalized_banner %}
  {{ content.field_personalized_banner }}
{% endif %}

Assuming you are seeing the content of your Personalized Paragraphs on the page, you can change your browser size to either larger or smaller than 1024px, refresh, and you should get the other experience. If you don’t, this is likely because Smart Content stores information in local browser storage about the experience you first received (specifically the _scs variable); it checks this variable before doing any processing, for performance reasons. One way around this is to load a fresh incognito window, resize it to the test width, and then load your page; another is to go into your local storage and delete the _scs variable.

There are many ways to segment traffic in Smart Content beyond browser conditions; Smart Content provides submodules for Demandbase, UTM strings, and 6sense. I’ve also written a module for Marketo RTP which we will be open sourcing soon. I wrote a blog about it here which can be used as a guide for writing your own connector.

At this point in using Personalized Paragraphs, there are a lot of ways you could go with displaying the front end, and you definitely don’t need to read the remainder of this blog. I’ll cover the way my org displays it, but I’ll note that there isn’t some sort of best practice; it is just what works for us. You absolutely do not need to use Personalized Paragraphs in this manner and I encourage you to experiment and find what works for you.

Front end processing

You may have noticed that our personalized banner field is ‘overwriting’ the existing banner fields on our homepage node (as opposed to replacing them). That is, our original banner fields and our personalized banner field do the same ‘thing’, and now we have two of them on the page. The reason for this decision is based on how Smart Content works. When the page is loaded, Smart Content decides which paragraph has won and then uses ajax to retrieve that paragraph and load it onto the page. If we replaced the original banner with the Personalized Paragraph, the banner would appear to ‘pop down’ after the page had begun loading, whenever the ajax returned. We felt that this experience was worse than the small performance hit of loading both banners onto the page (it also provides a nice fallback should anything fail). This ‘pop-down’ effect of course only occurs when you’re personalizing a portion of the page that contributes to the flow; for an element that doesn’t, you’d see more of a ‘pop-in’ effect, which is less jarring.

Because we are loading two of the same ‘things’ onto the page, we need to use javascript to decide which one gets displayed. We need a way to inform some JS code whether a decision paragraph is on the page (else display the fallback) and also to differentiate between the decision paragraphs we’re operating on (in case there is more than one on the page). The entry point for this goes back to how Personalized Paragraphs was built, to a more controversial point near the end. In a preprocess function, we load the Personalized Paragraph onto the page and then execute this code:

$para_data = [
  'token' => $token,
];
$has_name = !$para->get('field_machine_name')->isEmpty();
$name = $has_name ? $para->get('field_machine_name')->getValue()[0]['value'] : '';
$variables['items'][0]['content']['#attached']['drupalSettings']['decision_paragraphs'][$name] = $para_data;

In essence, this code stores the information we’re after in drupalSettings so that we have access to it in javascript. We get a key, [‘decision_paragraphs’], which contains an associative array of personalized paragraph machine names with their decision tokens as values. With this information we can now manipulate the front end as we need.

Before I show how we display the default or the winner on the front end, I want to reiterate that this is not necessarily a best practice; it’s just a design decision that works for us. All of the below code is stored in a custom module specifically for personalization. First, we create a js file called ‘general_personalization.js’ that will be attached on any page where Personalized Paragraphs run. To continue following our example, we add this at the top of our homepage template:

{{ attach_library('/general_personalization_js') }}

Following the style in which much of the Smart Content javascript is written, this file defines an object that can be accessed later by other JS files:

Drupal.behaviors.personalization = {};

Drupal.behaviors.personalization.test_for_decision_paragraph =
  function (paragraphs_and_functions, settings) {
    ...
  };

This first function will test if a decision paragraph is on the page:

function (paragraphs_and_functions, settings) {
  if (settings.hasOwnProperty('decision_paragraphs')) {
    paragraphs_and_functions.forEach((display_paragraphs, paragraph_name) => {
      // No decision paragraph registered under this machine name: show the default.
      if (!settings.decision_paragraphs.hasOwnProperty(paragraph_name)) {
        var show_default = display_paragraphs['default'];
        show_default();
      }
    });
  } else {
    // No decision paragraphs on the page at all: every paragraph shows its default.
    paragraphs_and_functions.forEach((display_paragraphs, paragraph_name) => {
      var show_default = display_paragraphs['default'];
      show_default();
    });
  }
};

It first tests whether the key ‘decision_paragraphs’ exists in the passed settings array. If it doesn’t, it takes the passed paragraphs_and_functions Map, iterates over each Personalized Paragraph machine name, grabs the function that displays the default experience, and executes it. If ‘decision_paragraphs’ does exist, it does the same iteration, checking whether each machine name in the Map is represented in that decision_paragraphs key; if not, it gets its default function and executes it. This means we can change machine names, delete personalized paragraphs, etc. and guarantee that the default experience will display no matter what we do. So how do we call this function with the right parameters?

We create a new file, personalization_homepage.js, which is now attached directly underneath the previous file in the homepage template:

{{ attach_library('/general_personalization_js') }}
{{ attach_library('/personalization_homepage') }}

This file will only ever be attached to the homepage template. Inside this file, we create an object to represent the default and personalized experiences for our banner:

var banner_display_functions = {
  'default': personalization_default_banner,
  'personalized': personalization_set_banner
};

The values here are functions defined elsewhere in the file that execute the JS necessary to display either the default or personalized experiences. Next we create a Map like so:

var paragraphs_and_functions = new Map([
  ['personalization_homepage_banner', banner_display_functions],
]);

The map has personalized paragraph machine names as keys and the display objects as values. You can imagine that with another Personalized Paragraph on the page, we’d just add it as a member here. An unfortunate side effect of this design is the hardcoding of those machine names in this JS file. I’m sure there is a way around this, but for performance reasons, and given how often we change these paragraphs, this works for us. With this in place, calling our test_for_decision_paragraph function above is straightforward:

Drupal.behaviors.personalizationHomepage = {
  attach: function (context, settings) {
    if (context === document) {
      Drupal.behaviors.personalization.test_for_decision_paragraph(paragraphs_and_functions, drupalSettings);
    }

    ...
  }
};

This populates test_for_decision_paragraph with the correct values. The call sits inside the context === document check to ensure that it executes only at the right time (without this control, the default banner will ‘flash’ multiple times). So now the code is in place to test for a decision paragraph and display the default experience if it is not there. What about displaying the winning personalized experience?

We create another function inside general_personalization.js, ‘test_for_winner’:

Drupal.behaviors.personalization.test_for_winner =
  function (current_decision_para_token, paragraphs_and_functions) {
    // Iterate over the display paragraphs, matching each stored decision token
    // to the token of the paragraph this decision event is for.
    paragraphs_and_functions.forEach((display_paragraphs, paragraph_name) => {
      if (drupalSettings.decision_paragraphs.hasOwnProperty(paragraph_name)) {
        var decision_paragraph_token = drupalSettings.decision_paragraphs[paragraph_name].token;
        if (decision_paragraph_token === current_decision_para_token) {
          var show_winner = display_paragraphs['personalized'];
          show_winner($(event.detail.response_html));
        }
      }
    });
  };

(note it may be event.detail.data for you)

Remember the hook_preprocess_hook in which we attached the decision token of a personalized paragraph keyed by machine name? Our test_for_winner iterates over those machine names, extracting each decision token and comparing it to the decision token of the current winning paragraph; when a match is found, it uses the winning machine name to find the function that displays the personalized experience and executes it. So how do we call this function? Inside the same attach function in personalization_homepage.js, we add:

window.addEventListener('smart_content_decision', function (event) {
  Drupal.behaviors.personalization.test_for_winner(event.detail.decision_token, paragraphs_and_functions);
});

(note: we’ve made some edits to the broadcasted event object internally. I invite you to print this object and see what it contains.)

And here we touch on one of the offerings Smart Content makes for front-end processing: when a winner is chosen by Smart Content, it broadcasts a ‘smart_content_decision’ event which contains a bunch of information, including the decision token of the winning content. This is what test_for_winner uses to compare against existing personalized paragraphs to select a display function. Smart Content will broadcast this event for every winning paragraph (e.g. if we have n personalized paragraphs on the page, the event will broadcast n times, once with each paragraph’s winning token), and test_for_winner lets us know which paragraph the event is currently broadcasting for so we can execute that paragraph’s personalized display function.

There are a number of other cool things we can do with this broadcasted event, for example, pushing the campaign name of the winning paragraph into the dataLayer, but I will leave this for another time.

I hope that the example we’ve used throughout has helped you to better understand how to use Personalized Paragraphs and the brief tour of our front end design has given you a starting point.

If you’re looking for how Personalized Paragraphs was built, check out Personalized Paragraphs: Porting Smart Content to Paragraphs for Drupal. If you have any questions you can find me @plamb on the Drupal slack chat, there is also a #personalized_paragraphs channel.

Apr 09 2021
hw
Apr 09

Today’s post is going to be a quick one; not because it is an easy topic but because a lot has been said about it already. Today, I want to share my thoughts on decoupling Drupal; thoughts that are mainly a mix of borrowed thoughts from several places. I will try to link where I can but I can’t separate my thoughts from borrowed ones now. Anyway, by the end of the post, you might have read a lot of things you already knew and hopefully, one or two new things about Decoupling Drupal.

Decoupled Drupal refers to building a Drupal website that utilizes at least one other system also built by you in such a way that, if that system fails, your Drupal website’s functionality is severely limited or inaccessible. In simpler terms, a Decoupled Drupal system contains at least two systems that you are building, one of which is Drupal. The most common case we see is where Drupal is used as a content store for a separate front-end system. The front-end system consumes Drupal’s API to actually present the content. There are many other combinations possible as well. You could have another content store and use Drupal as the front-end. Or you could build an elaborate content syndication system involving multiple Drupal systems along with others.

Organization Structure’s impact on Decoupled Drupal

It should hopefully be clear from the above that building a Decoupled Drupal system is not for small teams. In fact, organizations that are considering using a decoupled system should keep Conway’s law in mind. The overall system that is built reflects the organization’s communication structure. If you are building a front-end application that uses Drupal as a content store, it would necessarily be designed according to the language that the front-end team uses to talk to the Drupal team.

The issue at hand is more subtle than is apparent. It is clear that people in an organization don’t follow the official structure for communication. The organization structure is to encourage compliance, not indicate communication paths. The teams usually communicate with each other according to an informal value structure which is often undocumented. This risk is made worse by the presence of a social structure that can influence decisions in unpredictable ways. Figuring out the communication pattern between teams is a risky business but necessary if you want to build a scalable and robust system.

Why shouldn’t you decouple Drupal?

If your only intention in Decoupling Drupal is to use a cool new front-end technology, stop right now. You will lose a lot more in value than you might get in bragging rights. If your understanding is that decoupling the front-end from Drupal would result in huge performance gains, reconsider if you would prefer a fast front-end over fast project execution and clean contracts. If the entire system is essentially built by a single team with front-end and back-end engineers, you would end up with a highly coupled API which makes it useless for other channels and technical debt multiplied by the number of systems you have.

Remember Conway’s law here. Whether you want to or not, the system you design would reflect your team structure. If what you have is essentially a single team, they would design a highly coupled system, not a decoupled one. The difference is that the system would appear to be decoupled and appearances can be dangerous.

Why should you decouple Drupal?

Let’s look at Conway’s law again. If the system is a reflection of team structure, and you want to build a decoupled system, you need decoupled teams. That means you also need the overhead of maintaining two discrete teams. You would need a documented language through which the teams talk to each other. And this language can’t be the internal language used within the team. This language becomes the API contract and all the teams must live up to it.

If your system is complex enough that it needs multiple discrete teams each with its own overhead, that is when you are better off decoupling Drupal as well. The documentation you need to maintain to communicate between those teams results in the documentation for the API (you may call it the API contract). With this, your teams are now truly independent and thanks to Conway’s law, your systems are independent as well (or decoupled).

Successful Decoupled projects

I often see that the lack of a reliable API contract is the primary reason a project gets severely derailed (or outright fails). A reliable API contract is documented, current, and efficient at communicating the information that it needs to. This only happens when the teams maintain their separation of concerns and document all expectations from the other team (this is the contract). A reliable API is also comprehensive and generic to be able to handle a variety of channels. In other words, a reliable API encourages the separation of concerns.

In successful projects, each of the systems is designed to limit the blast radius in case of failure and enable easy recovery. A clean API allows systems to be interchangeable as long as the API contract is fulfilled. Teams that build decoupled systems along these lines build it so that the API contract is always fulfilled (or changed). And this process leads to independent teams and systems that work well.

Apr 08 2021
hw
Apr 08

I missed joining the DrupalNYC meetup today. Well, I almost missed it but I was able to catch the last 10 minutes or so. That got me thinking about events, and that’s the topic for today: Drupal events and their impact on my life. I travelled extensively for 4-5 years before the pandemic restrictions were put in place, and since then, I have attended events around the world while sitting in my chair. These travels and events are responsible for my learnings and my professional (and personal) growth. And these are the perspectives that have given me the privilege that I enjoy.

Before I go further, I should thank two organizations that have made this possible for me. The first is my employer, Axelerant, which cares deeply about the community and us being a part of that. They are the reason I was able to contribute to Drupal the way I did and could travel to a lot of these events. The second organization I want to thank is the Drupal Association who organize DrupalCons and made it possible for me to attend some of them.

Why have and attend events?

Software is not written in a vacuum. Any software engineer who has grown with years of experience realizes that the code is a means to an end. It is only a means to an end. You may have written beautiful code; code that has the power to move the souls of a thousand programmers and make poets weep, but if that code is not solving a person’s need, it has no reason to exist.

Therefore, we can say that Drupal has no reason to exist were it not for the people it impacts. Drupal events bring these people together. They enable people to collaborate and solve challenges. They enable diverse perspectives which is the lifeblood of innovation. And they enable broad learning opportunities you would never have sitting in front of a screen staring at a block of code. In other words, these events give a reason for you to keep building Drupal. These events and these people give you a reason to grow.

Why travel?

Not lately, but DrupalCons usually mean travel and everything that comes along with it (airports!) I strongly believe travel is a powerful influence on success. Travelling, by definition, puts you in touch with other people; people whom you have never met and with whom you don’t identify at all. It is these people that give you the perspective you probably need to solve a problem. I have often been on calls at work where we solve a problem quickly and easily just by bringing in someone from outside the project. This was further reinforced in me after reading David Epstein’s book on generalists and developing broad thinking, “Range: Why Generalists Triumph in a Specialized World“.

In other words, the same reason why events help you grow, travel does too. It just appears to work differently. I have travelled to Australia, United States, Spain, United Kingdom, Switzerland, New Zealand and transited through many other countries. I travelled to these places to attend DrupalCons or other Drupal events and I learned just as much, if not more, from my travels as I learnt at the events.

Online events

True, we cannot travel with restrictions now, and that has meant some events getting cancelled and many happening online. Does it give the same benefits as an in-person event? The short answer is “No”. No, it does not give the same benefits, but it gives different benefits. Everything I said about events giving you different perspectives and helping you grow, all of that is now instantly available to you. You don’t have to take long flights and layovers and deal with airport security. A click can take you to any event in the world. You don’t even have to dress up; although people will appreciate it if you do when you turn on the camera.

All the diversity, perspectives, learnings, and more can now be available instantly at a much lower cost to you and to the environment. Online events may not be a replacement for in-person events, but they have their place, and the world now realizes how powerful and effective they can be. I have heard of people who finally attended their first DrupalCon because it was online. Programmers, of all people, should realize how technology can bring people together.

The fatigue of online events

No one pretends that events, online or in-person, are going to be smooth and free of frustration. Online events may be subject to Zoom fatigue in the same way that in-person events are subject to jetlag. These are real problems and like we have learned how to deal with jetlag, we should learn how to deal with online fatigue. It’s our first year and we will only get better.

How do we learn at events?

The answer is simple. No, really. It is very simple and you may think why did I even write a section heading to say this. You learn at events by talking to people. That’s the trick. That’s the magic. Talk to everyone you can. I can identify with the classical introverted programmer who is happy with a screen in front of their face. Talking is a lot of work. More importantly, talking is risky. It makes you vulnerable.

But that exactly is what makes you learn and grow. You can’t expect to gain perspectives without talking to people who could provide that.

Okay, so how do I talk to people?

If talking seems like a lot of work, start by listening. If going to someone and talking to them one-on-one is intimidating, join a group conversation and listen in. Contribute what you can when you can. The Drupal community is awesome and welcoming and I know that they are not likely to make you feel unwelcome if you are just joining a group to listen in.

Online events make it easier to hide and keep our heads down. Resist that temptation and hit the Unmute button to ask a question or just even thank a speaker. Most online conferencing solutions have a networking feature. Use that to pair up with someone random. It’s not as good as running into someone in the hallway but it is good enough.

But, what do I talk about?

That’s a fair question and I think a lot about that. I feel safe in saying that I start by listening. A couple of sentences in, I realize that I do have something to offer. At the time, I don’t worry about how valuable it would be but I share that anyway and I have usually found that the other person finds some value in it.

It is no secret that a lot of us suffer from imposter syndrome. And it is not enough to just tell myself that I will overcome that feeling by speaking about what I know. That is why I listen and offer what I can. If I don’t feel like offering anything, that’s fine. Sometimes, it is enough to just say hello and move on. In fact, this has happened several times to me. I would speak with certain people frequently in issue queues, but when we meet, it is a quick hello and we move on, fully knowing that we may not get another chance to meet at that event. And that’s okay.

The awesome Drupal community

Everything I said above is from my own experience dealing with my inhibitions and insecurities in interacting with these celebrated folks. I have many stories of how some of the most popular contributors made me feel not just welcome but special when I met them for the first time. These are events that happened years ago and I still recollect them vividly. I have shared these stories often, both while speaking and in writing. And I am not talking about one or two such people. Almost everyone I can think of has been kind and welcoming and speaks in such a way that you feel you are the special one. I can say that because I did feel special talking with them. In those cases, all I had to do was walk in the hallway where they happened to be and just say “Hello”.

Almost all Drupal events are online now and that is a great opportunity for you to get started. The most notable one right now is the DrupalCon North America happening next week. Consider attending that. If you’re attending, consider speaking up and saying hello. And if you are a veteran, consider welcoming new people into the group and make them feel special. If you can’t make it to DrupalCon, there are dozens of other events in various regions throughout the world. Find the one that interests you and go there. You don’t even have to fasten your seatbelt to get there.

Apr 07 2021
Apr 07

How we jumped head-first into decoupling Drupal with a React-based design system built for performance and brand consistency.

Our latest corporate initiative is decoupling our website, upgrading to Drupal 9 and achieving the holy grail of web development — a CMS-driven back end, with a modern, decoupled javascript front end using a component-based design system.

Here is a breakdown of the project so far:

It’s funny how things go. As a company, we’re constantly working in Drupal: building, planning, and helping to maintain various modules and websites for both our clients and the Drupal community. While the large majority of our clients and work is in the latest version of Drupal, our site is still running Drupal 7. Shame on us, right? Like many businesses out there, we found it easy to push aside the upgrade while our internal systems and staff were comfortably using the existing platform and everything was working like a well-oiled machine. Well, we finally got the buy-in we needed from our internal stakeholders and we were off to the races. But, we didn’t want just another Drupal website. We wanted to explore the “bleeding edge” and see if we can achieve the holy grail of web development — a CMS-driven back end, with a modern, decoupled javascript front end using a component-based design system. Go big or go home!

Our approach to the back end

After a solid planning phase, we settled on an approach for the new site build and it was time to get going. We decided on Drupal (of course) for the back end since it’s our CMS of choice for many reasons and we could leverage all of the great work that has already been done with the JSON:API. However, not all of our needs were covered out-of-the-box.

Exploring API-first distributions

When we were first getting started, we spent some time looking at using ContentaCMS for our back end. It’s an API-first Drupal distribution that already contains most, if not all, of the modules we needed. We think ContentaCMS is an interesting tool that will stay on our radar for future projects, but ultimately we decided against it this time due to the number of extra modules that were unnecessary for our particular build. We ended up just starting with a fresh Drupal 9 install with only the modules we needed.

Decoupled menus

Another complication we came across centered on decoupled menus. Obviously, to build out a menu on the front end we needed to be able to get menu items. JSON:API Menu Items was installed and gave us a good starting point by exposing the menus in JSON:API. Next, we needed to get around the fact that JSON:API relies on UUIDs for content while menus use paths. Content also tends to have pretty URLs for navigation via path aliases. To display these path aliases on the front end and be able to retrieve the correct node, we needed some way to resolve a path alias to its content. In comes the Decoupled Router module to solve the problem for us! This module provided the endpoints we needed.
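To illustrate, here is a minimal sketch in PHP of how a consumer could resolve a path alias through Decoupled Router’s translate-path endpoint. The site URL and alias are hypothetical, and the exact response shape can vary, so verify against your own installation.

<?php

// Resolve a path alias through the Decoupled Router endpoint.
$alias = '/about-us';
$url = 'https://example.com/router/translate-path?path=' . urlencode($alias) . '&_format=json';
$result = json_decode(file_get_contents($url), TRUE);

// On success, the response identifies the underlying entity (e.g. its UUID),
// which can then be used to fetch the full content over JSON:API.
$uuid = $result['entity']['uuid'] ?? NULL;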

Simplifying the JSON output

When you first install JSON:API, you get ALL of the data. While this is great, the JSON output relies on the naming conventions of fields as set by Drupal or by the developers configuring the back end. Though we have control over how we name newly added fields, we don’t have that control over existing fields. For example, the User entity’s “mail” field, which holds the user’s email address, is hard to understand without already knowing what data the field contains. For a better developer experience, we renamed fields as needed by creating EventSubscribers that tap into the JSON:API response build events. We also installed the OpenAPI module to give our developers a better high-level look at the available endpoints and their structure, as opposed to reading raw JSON output.
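As a rough illustration (a sketch, not the exact implementation described above), an event subscriber along these lines can rewrite keys in the response payload. The module name, class, and renaming logic here are hypothetical, and this version subscribes to the generic kernel response event rather than a JSON:API-specific one:

<?php

namespace Drupal\mymodule\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\ResponseEvent;
use Symfony\Component\HttpKernel\KernelEvents;

// Hypothetical sketch: renames awkward field keys in JSON:API responses.
class FieldRenameSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents(): array {
    // Subscribing to the generic kernel response event for simplicity.
    return [KernelEvents::RESPONSE => 'onResponse'];
  }

  public function onResponse(ResponseEvent $event): void {
    // JSON:API requests use the "api_json" request format.
    if ($event->getRequest()->getRequestFormat() !== 'api_json') {
      return;
    }
    $response = $event->getResponse();
    $payload = json_decode($response->getContent(), TRUE);
    // ... walk $payload and rename keys, e.g. "mail" to "email" ...
    $response->setContent(json_encode($payload));
  }

}

The subscriber would then be registered in the module’s services.yml like any other tagged event subscriber.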

Our approach to the front end

With our back end sorted out, it’s time to get to the front end. When going decoupled, the front end suddenly becomes a whole lot more interesting than just a standard Drupal theme with Twig templates, SASS stylesheets and jQuery. We’re in a whole different realm now with new possibilities. We were 100% on board with having better integration between design and development, where the end result of the design was more than static assets and a 1-hour handoff. Here’s a breakdown of our front end approach.

A new strategy for consistency between design and development

In the past, creative exercises would produce static assets such as style guides to try and capture the design principles of a web property or brand. While this concept is rooted in good intentions, in practice it never really worked well as a project evolved. It’s simply too difficult to keep design and development in sync this way. For web development, it’s now understood that the code, not a style guide, is the better source of truth, but we still need ways to involve our designers and have them contribute their knowledge of design and UX to the code that is being developed. After all, if you leave these decisions up to developers, you’re not going to end up in a good place (and most developers will readily agree with that sentiment). Luckily for us, applications like Figma, Sketch, and Adobe XD are leading the way in bridging this gap. While this is still very new territory, there is a lot of exciting progress happening that enables the creation of robust design systems: a rulebook and single source of truth that is easily updated, modular, and readily available for product and application development.

After an internal study of available technologies for design, we settled on Figma for our design system creative work. Figma runs as either a web or desktop application and is a vector graphics editor and prototyping tool. The team developing it has done an incredible job pushing the envelope on how creative and development can work together. With Figma, we found it was possible to define our design tokens and manage our creative assets in one place.

Extracting design tokens into our application

With our tokens living in Figma, we still needed a way to bring those static design token values into our development process. This is where Figmagic comes in. Figmagic is a nice tool for extracting design tokens and graphic assets out of Figma and into our application. Once set up, all we need to do in our local development environment is to run yarn figmagic. This connects to Figma and pulls down all of our defined tokens into an organized set of ready-to-use files. From there, we then simply import the file we need into our component and use the values as required. If a value needs to change, this is done in Figma. Re-running Figmagic brings in any updated value which updates any component using it. It works like... magic.

Building reusable front end components

Going decoupled meant breaking away from Drupal’s Twig templating and theming layer. For our front end application, we decided on React for our replacement. React continues to be very popular and well-liked, so we didn’t see any technical advantage to picking anything else. This change brought on some additional challenges in that, as a Drupal development company, our focus has not been React. We undertook some initial training exercises to get our development team on this project up-to-speed with the basics. Between this team, our CTO, our CXO, and a senior technical lead, we quickly trained up and settled on a suite of tools to achieve our desired outcomes of a component-driven React frontend.

Component UI frameworks to increase velocity

For us, the approach to developing the many React components that we needed was to first settle on an established framework. Like Bootstrap is to HTML/CSS development, several React-based frameworks are available to use as a foundation for development. It’s always debatable whether this is necessary or not, but we felt this was the best way for us to get started as it greatly increases the development velocity of our project while also providing good reference and documentation for the developers creating our components. We researched several and tried a few, but eventually found ourselves leaning towards Material UI. There were several reasons for this decision, but mainly we like the fact that it is an established, proven, framework that is flexible, well documented, and well supported. With Material UI (MUI), adjusting the base theme using our design tokens from Figma already gave us a good start. Its framework of pre-built components allowed us to make minimal changes to achieve our design goals while freeing up time to focus on our more unique components that were a bit more custom. This framework will continue to be of great benefit in the future as new requirements come up.

Previewing design system components without Drupal

Without a standard Drupal theme available, we needed somewhere to preview our React components for both development and UX review. The whole point of a decoupled front end and of the design system is that the front end should be as back end agnostic as possible. Each component should stand alone: the front end is the front end, and the back end could be anything. Since we hadn’t yet connected the front end components to Drupal’s JSON:API, we still needed an easy way to view our design system and the components within it. For this, we used Storybook. Storybook was made for developing and previewing UI components quickly. As a developer working locally, I can install and spin up an instance of Storybook with a single command within our project repo. Storybook then looks at our project folder structure, finds the story files we include as part of every component we develop, and generates a nice previewer where we can interactively test out our component props and create examples that developers may need when implementing a component later. Storybook is our design system library and previewer as well as our developer documentation. When we are done developing locally and push code to our repository, our deployment pipeline builds and deploys an updated Storybook preview online.

Bringing it all together

At this point, we have our back end and API established. We also have our front end component library created. Now we need to bring it all together and actually connect the two.

Matching the Drupal UI to our React components

React components are standalone UI elements and are made up of props that determine their functionality. So, for instance, a Button component could have many variants for how it looks. A button could include an icon before or after the text, or it could be used to open a link or a dialogue window, etc. A simple component like this might have a whole bunch of different props that can be used in different ways. To be able to control those props through the Drupal UI, we decided to use the Paragraphs module. A paragraph in Drupal is essentially a mini content type for a specific use. In Drupal, we created paragraphs that matched the functionality we needed from our React components so that the Drupal UI would be, in the end, setting the component props. This is a bit confusing to set up at first and something that will probably not be nailed down first try, but, in the process of setting up the Drupal UI in the context of the React components, you can really start to see how the connection between the two is made. At this point, it’s pretty easy to see if a component is missing any props or needs to be refactored in some way.

We also went with Paragraphs because we wanted to give our content creators a more flexible way of creating content. A standard Drupal content type is pretty rigid in structure, but with Paragraphs, we can give content creators a set of building blocks that can be grouped and combined in many different ways to create a wide variety of pages and layouts in a single content type. If you’re not familiar with this style of creating content in Drupal, I suggest you take a look at this post I wrote a while back. Look at that and imagine React components rendering on the page instead of Drupal templates. That’s essentially what we’re aiming to do here.

An argument against Paragraphs is that it could potentially make page building more time-consuming given that you don’t have that typical rigid structure of a content type. This is especially true if pages on a site being built typically all follow a similar format. In this case, we still like to use Paragraphs anyway but we will typically build out unpublished pages that act as templates. We then install the Replicate module which lets content creators clone the template and use it as a starting point. This makes creating pages easy and consistent while still allowing flexibility when needed. Win-win.

Taking the Next.js step

Since we settled on React, we started our front end project as a standard React build using the create-react-app tool. But after some additional investigation, we decided that a better option for our use case would be the Next.js framework. Given that the corporate site is, at its core, a marketing and blogging website, Next.js allows us to still build the front end using React while giving us some extra capabilities this type of website needs.

One of these capabilities is server-side rendering, which we found increases the performance of our application. This also has SEO benefits because pages are rendered server-side and sent to the client, as opposed to the standard client-side rendering most React applications use, where the client machine handles processing and rendering.

Component factories are key to interpreting the API data

We were also attracted to the structure and organization of the project when using a framework like Next.js, as well as the dynamic routing capabilities. This significantly simplified the code required to implement our decoupled menu and page routing. Using the dynamic routes provided by Next.js, we could create “catch-all” routes to handle the ever-growing content in the CMS. By querying the Drupal API, we can get all the available path aliases for our content and feed them to Next.js so it can pre-build our pages. It can use the path aliases to resolve the content specific to the page through the Decoupled Router module, which then feeds that data to a component factory to decide which high-level React page component is necessary to build a page. For example, if the data is for a basic page content type, the page factory component returns our Page component; if it’s a blog post content type, it returns the Blog Page component, and so on. Each of these components knows how to handle the specific data fed to it, but the factory component is what connects the data to the right page.

The way Paragraphs can be added to a page for composing a piece of content also created a challenge on its own in reading that data and returning the right component. We decided to create another factory component for this. Similar to the page factory component, this one takes in the paragraph data and determines which React components it needs to render the content.

What’s next in our decoupled adventure

Believe it or not, what we’re building is still a work in progress. While we have most of the tech requirements figured out, we’re still actively putting the final pieces in place and fine-tuning the back end UI and module requirements. All in all, we’re feeling really good about this progress and can see the value of what we’re building both for ourselves and the clients we serve. In our opinion, based on experience and research, decoupled is the future, and we’re excited to walk this path.

With that said, even once our own decoupled site has launched, we still have a lot of work left to do. For one, we still need to get a method in place for managing Drupal blocks and regions. More factory components will be critical here to interpret that data. Given that our clients are primarily ecommerce retailers in one form or another, we still have the whole ecommerce side of web development to sort out, including product catalogues, add-to-cart forms, carts, wish lists, checkout flows, etc. The list is long. However, what we have is a framework that can be repurposed and added on to. It finally helps tie together many disciplines of web development, both creative and technical, that are traditionally siloed. It’s also blazing fast and just plain cool. This is an investment not only in our own online business front but in something greater that we hope to share with others through work, community and general knowledge.

Thanks for taking the time to read this. If you want to chat about this initiative further, drop us a line.


Apr 07 2021
hw
Apr 07

I am going to keep today’s DrupalFest post simple and talk about the API to access content on drupal.org. The Drupal.org API is a public API that allows you to access content such as projects (modules, themes, etc.), issues, pages, and more. The API returns data as a simple JSON structure and has only limited features with regard to filtering and gathering nested data. In this post, I will describe this API and various ways to access it. It has been a tough day and it is difficult for me to write long posts or go deep into my thoughts. Regardless, I hope this quick post still proves useful.

API basics

The base endpoint to access the drupal.org API (henceforth, the d.o API) is https://www.drupal.org/api-d7/. In fact, you can probably convert any canonical URL on drupal.org to its API equivalent: simply prefix the path with “api-d7” and suffix it with “.json”. By canonical, I mean URLs that use Drupal’s internal paths such as node/2773581 or user/314031. These endpoints return JSON responses which are practically the same as Drupal’s internal representation. This means you will notice weird field names (almost all fields begin with “field_”) and nesting that you might not expect from other APIs. If you have programmed for Drupal, chances are you will feel right at home with the response data structure.

The APIs that return listings of entities are simply named, such as node.json or user.json (and so on). The listing endpoints accept a variety of filters and allow pagination with simple query parameters. Most of the field names can be used directly as query parameters to filter the list on that field’s value. For example, https://www.drupal.org/api-d7/node.json?type=project_issue would return all the issues on drupal.org, whereas https://www.drupal.org/api-d7/node.json?type=project_issue&field_project=3158507 would return only the issues in the preloader project (preloader’s node ID on drupal.org is 3158507).
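As a quick PHP sketch, fetching that filtered list looks something like this; listing responses carry their results under a “list” key along with paging links, but verify against the live API:

<?php

// Fetch issues for the preloader project (node ID 3158507, as noted above).
$query = http_build_query([
  'type' => 'project_issue',
  'field_project' => 3158507,
  'page' => 0,
]);
$data = json_decode(file_get_contents('https://www.drupal.org/api-d7/node.json?' . $query), TRUE);

foreach ($data['list'] as $issue) {
  echo $issue['title'], PHP_EOL;
}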

Read the documentation page for examples and more details such as field names and values.

Shortcomings in the API

As you might have surmised from the description above, this API is not designed for general consumption. The Drupal Association maintains it on a best-effort basis, and more sophisticated use cases (such as gathering nested data in a single request) are not supported. There is a plan to improve the API on the whole in the future, but I don’t know when that might happen.

Practically speaking, this means that you have to make multiple API calls to collect all the information about any entity. The first API call fetches the main entity you want information about. You then have to parse it to gather all the referenced IDs and make API calls for each of them. If you wanted to build an efficient consumer that needs to deal with a lot of nodes, you would probably have to persist all the information in your application (which is what I did with DruStats).

The problem is not limited to normal relationships such as terms and users; it extends to other entity reference fields too. For example, if you want to find out a user’s organization, you have to read the “field_organizations” property and make a request to the “field_collection_item” endpoint with that ID. In a typical consumer-grade API, you would expect that information to be embedded right in the user response.
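Sketched out in PHP, that dance looks something like the following; the exact keys inside the reference structure may vary, so treat this as illustrative:

<?php

$base = 'https://www.drupal.org/api-d7';

// First request: the user entity itself.
$user = json_decode(file_get_contents($base . '/user/314031.json'), TRUE);

// field_organizations only holds references; each one needs its own request.
foreach ($user['field_organizations'] as $reference) {
  $item = json_decode(file_get_contents($base . '/field_collection_item/' . $reference['id'] . '.json'), TRUE);
  // ... read the organization details from $item.
}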

Using the API in your code

The API endpoints are straightforward and you can request data with a simple curl request or just the browser. However, if you are writing an application, you might get frustrated dealing with the filters and nested queries. This is where libraries come in.

The d.o API documentation lists two such libraries at the end of the page. The first one, by Kris Vanderwater (EclipseGc), seems simple enough to use. The second one was written by me while I was building DruStats. I needed something more sophisticated, so I decided to write my own API wrapper. Since then, I have also used this library for Contrib Tracker, a project which tracks contributions to drupal.org by multiple users. The library is reasonably documented on its GitHub page, but improvements are always welcome. You can also look at examples in DruStats and Contrib Tracker. I am currently in the process of moving Contrib Tracker to a new home, so I am not linking to the current URL right now. I am also planning a post on Contrib Tracker soon.

Using the CLI tool

Matt Glaman has written a CLI tool to interact with drupal.org issues. If you only want to automate your Drupal contribution workflow, then this CLI tool might be what you need. This tool allows you to simplify working with Drupal.org patches, create interdiffs, and even watch CI jobs. As always, the documentation on Github can guide you with the installation and usage of this CLI tool.

Apr 06 2021
hw
Apr 06

I recently opened up a spreadsheet where people can put in their ideas of what I can write about in this DrupalFest series. Someone suggested the topic of what advice I would give my younger self. The idea intrigued me and I thought I would make an attempt at writing down advice for new Drupal developers. I am not comfortable presuming that anyone would want to take advice from me, so I am going to say what I would want my younger self to know. I am going to temper my thoughts to be suitable to the state of the industry today; there is no point talking about how the world looks from the point of view of 2007. So, let’s get started.

You don’t have to write all the code

One of my first non-trivial programming tasks was writing an x86 Assembly library for graphics manipulation and computation, callable from QBasic. I started off programming with GW-BASIC and QBasic and, in time, realized they were quite slow for doing real work. That’s when I learnt Assembly language and wrote the library. This started me off on a path where I preferred to write all the code on top of a programming language myself and not rely on frameworks and libraries. I stayed on this path for about a decade.

I eventually learned that to be productive and actually grow, I should stop thinking of code in terms of purity and use what is out there. Other people are talented too and they have gone through the same problems you are going through. Learn from their mistakes and their work and build upon that. I realized much too late (in hindsight) that I was letting perfect be the enemy of good and staying stuck.

Case in point: I found out about Drupal in 2007 from someone and I thought, why would I use it when I could write (and had written) multiple CMSes myself? The person who introduced me to Drupal (Drupal 5 at the time) talked about how you could build websites in a matter of hours and use Views to build a query with a UI (I hated that idea). He even said that Drupal 6 would be even better and was just about to be released. Even after this, I only gave in and built my first Drupal site at the end of the Drupal 6 period. That is still the only Drupal 6 site I have built.

There’s another story. I wanted to start a blog and was waiting to perfect the blogging CMS I was building. It had everything I wanted, but I kept finding more things to add, and that became an excuse not to start blogging. I realized this one sleepless night at 2 AM when it hit me; that’s when I wrote my first (well, second) blog post. Here’s the actual first.

But do write the code to learn

Yes, I waited a long time to start using other people’s work because I wanted to do it myself. But doing it myself taught me a lot and I encourage you to go through that as well; just not at the expense of getting work done. When you find time, pick one hobby project that you want to do, just one, and do it from scratch.

Learn something not related to what you want to learn

This is something I did without realizing it, so my younger self hardly needs this advice. But I will document it here anyway. Learn a lot. Don’t let go of an opportunity to learn. I cannot stress enough how much this has impacted me. It was only a feeling until I read more about it in David Epstein’s book “Range: Why Generalists Triumph in a Specialized World”. Essentially, learning things that seem unrelated to what you’re doing is a great way to develop a wide perspective, which helps you innovate and excel. If you’re interested in the mechanics behind this, do read the book.

Within the programming area, I started off with QBasic, Assembly, PHP, .NET, C#, C/C++, and Java (in school and college), along with staples such as HTML, CSS, and JavaScript (much simpler at the time). Since I started working professionally, I have learned ASP.NET, Windows programming internals (from a .NET perspective), CodeIgniter, Kohana, WordPress, Drupal, JavaScript, Laravel, and many others I can’t recollect. More recently, I have started learning Rust, Golang, Flutter, Dart, and technologies such as Docker and Kubernetes. Outside of programming, I often worked with Photoshop, Premiere, After Effects, and CorelDRAW, and generally on printing and video editing projects. Further outside of programming, I used to assemble computers for people and myself and built networks. Going into reasonable depth with all of this helped me understand computers and technology the way I do today.

Also learn something completely outside what you do

It is great to expand your knowledge in the areas around your work, but it is also very helpful to pick up completely unrelated activities (what we call hobbies). I used to be part of a group that put up props and other types of decorations for community events. As a part of that group, I would cut paper, slice thermocol, apply glitter and other materials, and put them up on walls and hang them from rafters and towers. I also used to volunteer as a photographer and as a vocalist (in something similar to a choir group). It may not seem obvious how this helps your career, but the broad range of perspectives you develop here does. One aspect of this is simply the expanded network you build, which helps you see outside the otherwise narrow world we sometimes confine ourselves to.

Make learning a habit

I read this great book called Atomic Habits by James Clear which opened my eyes to the relevance and effectiveness of habits. Earlier, I often found myself fighting against myself to do something: read more, exercise more, or just behave differently. I used to put up a brave fight and punish myself when I failed (which I often did). You see, I thought I was being a hero by burdening myself like that, and I was a skeptic when I started reading the book. Not anymore. I now think more about the impact rather than the act, and take calculated steps to learn and build my life with the impact in mind, not just my current actions.

Go broad and go deep

I already shared why you should go broad with various technologies, but it is just as important to go deep on the one or two technologies you want to focus on. If you’re reading this, one of those is probably Drupal, and I would argue you should extend that to PHP. Go deep into how PHP works. What are the new features in PHP and how are people using them in other frameworks? How does PHP run on a web server and what are some of the alternative models of running PHP applications? How does a PHP application differ from something written in Golang, for example, or in Python?

And learn about programming fundamentals too. What problem does object-oriented programming solve anyway? Can you apply functional programming principles in PHP? In Drupal? How does a processor execute a program? What does Assembly look like? How does an operating system schedule programs for execution, and how can it run multiple processes at the same time? How do virtualization and containerization work, and how do they affect your code?

I know it seems like a lot, but you don’t have to go deep in one dive. Take years if you want; you will see benefits within days of starting to learn.

Drupal is awesome

Now, this seems obvious in hindsight but it was not clear to me when I started programming. I already said that I was loath to try out frameworks and libraries written by other people (and that includes CMSes). It was not until I volunteered for a project where I didn’t want to spend much time that I just picked up Drupal. That is what set me on the path to my amazing experiences in the Drupal community.

So, I want to say this straight. Drupal is awesome, both in terms of the code that is written and the people who have written it or helped write it. Note that I didn’t say perfect. It wasn’t perfect (even when I pretended it was) and it is probably even less perfect now. But it is awesome when looked at from the point of view of the impact it has made on my life and the lives around me. I have written often about this, so I won’t go deep into it here.

The ecosystem is awesome

As I said before, don’t limit yourself to Drupal. Even Drupal doesn’t limit itself to Drupal anymore: Drupal 8 famously got off the island by adopting practices from the wider PHP community. Learn from other frameworks and languages to determine the best fit. There was a time I used to say that we could build anything in Drupal. That’s still true, but it is neither effective nor efficient. Decide whether the code you are writing should be a custom module, a contributed module, a PHP package, or a completely separate service outside Drupal. There are a lot of awesome libraries and systems out there, and you should find out how you can use them with Drupal rather than rebuilding them in Drupal.

I feel like I can keep going but I am going to end it here. This was much longer than I thought I would write, but it’s the best I can do at this time. If you read this far, thank you, you’re a star!

Apr 05 2021
hw
Apr 05

This is the fourth post in my DrupalFest series and I am excited to keep it going. I want to write about different tools I am aware of for running quality checks on Drupal code. This will be somewhat similar to my last post where I presented various options but focused on the method I use nowadays.

First of all, what am I talking about? I believe that the code we write is read a lot more times than it is written. It is read by people other than yourself (and the you of two weeks later is another person), and they (you) have to understand what is written. Therefore, code must be optimized for readability. Performant code is a must, of course, but not at the expense of readability. When your best-performing code breaks and you can’t understand it enough to fix it, that performance is useless.

Types of checks

One of the low-hanging fruits here is following a consistent code style. Drupal documents its coding style in detail, and Drupal core and contributed modules (most of them, anyway) follow it. True, it’s not like PSR-2 or PSR-12; it was developed long before there were PSRs, but if you are working with Drupal, it is a good idea to follow this coding style consistently. Now, you could manually review each and every line of your code to make sure you are following the code style and halt your pull requests, but that is not a good use of your time. Hence, tools.

Apart from the coding style, there are ways to prove the “correctness” of your code. Broadly speaking, there are two aspects of correctness: the code is coherent, and the business logic is correct. The coherence aspect can be checked using various static analysis tools; PHPStan and Psalm are a few of the popular ones. For verifying the business logic, we write automated tests, and PHPUnit is the de facto standard for that.
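To make the coherence aspect concrete, here is a trivial, made-up example of the kind of problem a static analyzer flags without ever running the code:

<?php

// Both the possible null dereference and the string|null return are the
// kinds of mismatches PHPStan or Psalm report from the signature alone.
function loadTitle(?object $node): string {
  return $node->title;
}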

Tools

Before we get into the Drupal side of things, I’ll list the various tools that we use. I won’t attempt to document them here (in the interest of time) but you shouldn’t have trouble finding their documentation.

  • PHP Code Sniffer – Mainly for code style checks but can be a bit more sophisticated using rules.
  • Drupal coder rules – Code sniffs for checking Drupal coding style.
  • DrupalPractice coder rules – Code sniffs for checking Drupal conventions.
  • PAReview.sh – Automated tool that calls PHPCS with Drupal-related coder sniffs apart from other checks. This is commonly used to check if your module is contribution ready.
  • PHPStan – Static analyzer for PHP code that can perform various levels of type checks and other correctness checks.
  • Psalm – Another static analyzer with the same basic level of checks as PHPStan, plus some of its own interesting checks.
  • PHPLint – A simple PHP linter which supports parallel checks and better error reporting than running php -l directly.
  • Other linters such as ESLint, TwigCS, etc – Linters for their specific languages.
  • PHPCPD – PHP Copy Paste Detector. Like the name says, it checks for repeated lines of code.
  • PHPMD – PHP Mess Detector. Analyzes source code against various quantitative measures of code quality.
  • PHPUnit – Test runner for unit, integration, and functional tests.
  • Behat – Test runner for Behavior tests.

Almost all of these tools can be installed via composer or PHAR files. Composer is an easy way to get these tools but they end up affecting your project dependencies. PHAR files have to be installed on your machine (and everyone in the team should do that too). In both of these methods, you still have to remember to run the tools. That is where the next set of tools come in.

Automatically running checks

One of the ways to make sure everyone on the team adheres to the checks is by running them in CI. This way, the team immediately knows if a commit fails the checks, without someone having to point it out in a pull request. You could choose to have all of these tools in your composer.json and install them while running your CI, but there is a better way for most cases. DrupalQA is a Docker image which packages almost all of the above tools, and you can just run the commands you want within the Docker container. Almost all modern CI tools provide some way of running tests inside a container, and you should find documentation to do that for your CI tool. Here is an example of a GitLab CI job definition which uses the PHP 7.4 version of this image.

# Run the quality checks inside the DrupalQA container (PHP 7.4 variant).
drupal_codequality:
  image: hussainweb/drupalqa:php7.4
  stage: test
  script:
    # Verify that composer.json is valid.
    - composer validate
    # Lint the custom modules.
    - phplint --no-cache -v web/modules/custom/
    # Check Drupal coding standards using the project's PHPCS ruleset.
    - phpcs --standard=phpcs.xml.dist --extensions=php,module,inc,install,test,profile,theme --ignore=/node_modules/ web/modules/custom
    # Detect code smells using the project's PHPMD ruleset.
    - phpmd web/modules/custom/ text phpmd.xml

You could even run the Docker image locally but that’s more trouble than it’s worth. There is a better way in any case.

Have your team run checks automatically

The CI is fine for running these tests, but what if you want to save the time between commit, push, and CI runs? Wouldn’t it be better for developers to run these checks on their machines before they commit their code? GrumPHP is a tool which allows you to do just that. Vijay CS has written a wrapper on GrumPHP that provides default configuration for checks related to Drupal. While this is great for general use cases, I wanted to make a highly opinionated one for my team at Axelerant. While it is opinionated, it is still available for use, and you would install it using composer this way:

composer require axelerant/drupal-quality-checker

After this, anyone using your project automatically gets git hooks installed (once they run composer install) which run a few Drupal-specific checks every time they commit. You can modify the default configuration and add or remove any tools you wish; just follow the GrumPHP documentation. Of course, more documentation is also available on the GitHub page for axelerant/drupal-quality-checker.

GrumPHP recently made a lighter package available called GrumPHP Shim. This provides the same set of features but installs a PHAR file instead of a regular composer plugin. This has the benefit of keeping your project dependencies free of GrumPHP’s dependencies reducing the chances of conflict. Axelerant’s version of drupal-quality-checker uses the shim for this reason.

I think I covered all the tools I am aware of and could recollect in the span of a couple of hours today. I’m sure there are tools missing here, but I am going to call this DrupalFest post done for now. If I am missing something obvious (or even niche), please point it out in the comments so that I can revise or write about it in a separate post.

Apr 04 2021
hw
Apr 04

PHP 7.4 introduced the concept of preloading classes (files) on server start-up into the PHP opcache. This gives us performance benefits for sites that tend to load a lot of files with every request; something that Drupal is known to do. A properly configured web server would have opcache (opcode cache) enabled anyway, but preloading brings in a modest performance boost on top of that.

PHP opcache is designed to cache the opcodes of a PHP file so that the file does not have to be reinterpreted with every request. The bytecode (opcode) is cached in shared memory in RAM which means that the cache lives only as long as the PHP process is running. When the opcache is enabled, PHP caches whichever files are loaded during execution and only recompiles the file if it has changed (this setting can be disabled for an additional performance boost).

Preloading works in a similar way, except that you write a script to load the files you want to cache. The path to this script is set in PHP’s opcache.preload setting, and PHP executes it every time the server starts. Since this script runs with the server, you have to make sure that any errors are handled properly; otherwise, the server would throw an error and quit. Now, you may want this script to load all the files in your application, but the opcache memory size is limited. This means that you should only preload files that are required by most requests. In many PHP applications, these files may even be hand-written to get the best ratio of memory usage and throughput.
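For illustration, a hand-written preload script can be as simple as the sketch below. The paths are examples, and in practice (as we will see next) the script is better generated than written by hand:

<?php

// preload.php: compile a curated list of files into the opcache at startup.
// Pointed to from php.ini, for example:
//   opcache.preload=/var/www/preload.php
//   opcache.preload_user=www-data

$files = [
  '/var/www/web/core/lib/Drupal.php',
  '/var/www/vendor/autoload.php',
  // ... other hot files, chosen to balance memory use against hit rate.
];

foreach ($files as $file) {
  // Guard carefully: an uncaught error here stops the server from starting.
  if (is_file($file)) {
    opcache_compile_file($file);
  }
}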

Preloading with Drupal

Now that we understand how preloading works, let’s see how it can be used with Drupal. Writing the preload script by hand is difficult as Drupal is a dynamic system. Even if you write a preload script for all the Drupal core files, it may be your contrib modules that are a better fit for this. This is the reason I have written a Drupal module called preloader to make this easy for you.

The module uses a PHP package called darkghosthunter/preloader which does most of the heavy lifting. The package checks the current opcache usage and generates the preload script. The Drupal module brings in Drupal-specific customization (removing files that shouldn’t be preloaded) and a user interface to generate the file. You still have to manually add the preload script to your PHP settings for the changes to take effect. Still, the difficult task of generating the script file listing all the important files is made easy for you.

The workflow is as follows:

  1. Install the module using composer as you normally would.
  2. Configure the module to ignore any additional directories or files if you want.
  3. Restart the webserver and then load some of the pages that are frequently hit on your site.
  4. Go back to the module configuration page and generate the script.
  5. Set the opcache.preload setting in your php.ini file to the generated script path.
  6. Restart your webserver and test.

Gotchas

Since the preload script is executed at the server start and the cache is permanent as long as the process is running, if you change any of the preloaded files, you have to restart the server. For this reason, it is not a good idea to use this functionality during development. The module may remain enabled but the opcache.preload setting should not be set on the developer machines.

Performance impact

In my early tests, I saw an average of 10% improvement with preloading enabled across different percentiles. This test is quite old and I should repeat this on a better machine but the result should still be indicative. Hopefully, I will be able to test this module again this month and even work on some of the issues that have been reported. I will share screenshots or even a video of the test when possible.

This is it for today’s DrupalFest post. See you tomorrow, hopefully.

Apr 03 2021
hw
Apr 03

This post will cover quickly setting up a Drupal website for testing, experimentation, or evaluating features on your local system. While I cover a different set of options briefly, I will mainly talk about a tool we have built to let us quickly scaffold Drupal sites with best practices built in. This post is a part of the DrupalFest series which celebrates 20 years of Drupal. Let’s get started.

Before I get to the main part of the article, let us look at the options you have to quickly set up a Drupal website for evaluation. While all of these ways are effective, each method comes with typical trade-offs. We’ll start with the simplest way to experiment with a Drupal site and keep going up in complexity, but also in the flexibility you get.

Someone else’s machine

The easiest way to set up a Drupal site is in the Cloud, aka, someone else’s machine. Broadly, there are two ways I can think of to do this. The first method involves one of the excellent Drupal hosting providers and signing up for a free account. Drupal.org lists some of the hosting supporters who offer a free account to try out Drupal. Go to this link to get started: https://www.drupal.org/try-drupal.

While the above method lets you try out not just Drupal but also a hosting platform where you might actually run your website, you may not want to create an account to just try out Drupal. In that case, there is SimplyTest.me. This site provides a time-bound sandbox environment where you can quickly try out Drupal, one of the Drupal distributions, with or without modules and themes, and even apply patches. The sandboxes are available for 24 hours and give you complete control of the Drupal site through the UI. You don’t get command-line or SSH access this way, but for quickly testing out a module or a distribution, this is the easiest solution out there. You don’t need an account. Just pick the modules or the distribution and you have a sandbox ready in moments.

Your machine

On your machine, you have a little bit more control over the site but you do need the software required to run Drupal. If you’re only interested in quickly evaluating Drupal on your machine and still have access to the files and command line, the Drupal evaluator guide gives you an option to run this with just PHP. More details and instructions are available at the link: https://www.drupal.org/docs/official_docs/en/_evaluator_guide.html.
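For reference, the quick-start command from that guide boils down to something like this single line, run from the root of a Drupal codebase:

php core/scripts/drupal quick-start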

The problem with the above method is that it won’t run Drupal in an environment reasonably similar to where you would host. This is good for evaluating Drupal and quickly editing a few files but the experience working with PHP built-in server and SQLite would not help development. To take it a step further, you would need Docker as a pre-requisite.

If you have Docker and you already have a Drupal codebase downloaded, then tools such as Lando and DDEV would help you get started in no time at all. If you want to take it a step further, keep reading to the next section.

Finally, you have the classic method of installing Drupal on your machine by manually installing all the software required to run Drupal (that’s PHP, Apache/nginx, MySQL/MariaDB, etc). I don’t find many developers do this anymore and it is more trouble than it is worth. If you can’t run Docker for any reason, consider running it in a virtual machine using the excellent DrupalVM.

Axelerant’s template tool

Now to the main section of the post. At Axelerant, we wanted to automate the entire process of setting up a Drupal site quickly, along with the codebase, Lando configuration, some of the best practices we follow, and even CI configuration. I wrote a tool for that called axl-template, which is available via pip (yes, it’s written in Python and needs at least Python 3.6). Once the tool is installed, you can set up a Drupal site along with all the configuration I mentioned in a matter of minutes. I have measured the time from running the first command to having Drupal running, and it comes to just a few minutes (the longest part being the Drupal installation itself). Here’s a somewhat old video where I use this tool to set up Drupal 9 with Lando.

[embedded content]

I have added new features to this tool since I made the video, the most notable feature being able to specify modules and packages right in the first command. The idea is to have your codebase created from a single command and ready to run. I am planning to create a new video to cover these and many other features added to the tool. For now, the documentation is a good reference. Read more at https://github.com/axelerant/axl-template.

If you’re averse to installing a Python package through pip and don’t want to get it from the source code either, you could run it via Docker using whalebrew. Instructions to do these are in the README file but only one of the commands is supported in this way.

We’re continuing to improve this tool by adding support for more common development tools that are typically used. One example that comes to my mind right now is Acquia’s BLT. I can’t promise when that support will come out but it’s on the cards. In any case, this package is released under MIT license and contributions are very welcome.

This is it for today and I am just completing this post with less than four minutes to spare to midnight. Have a good DrupalFest everyone.

Mar 30 2021
Mar 30


The best open source distribution for Drupal just got better! The latest version of Rain University and Rain CMS now ship with Layout Builder pre-configured to make page building faster and easier. So how does it work? Check out below!

Editing Layouts

Now, when you navigate to any page with Layout Builder enabled, you can edit the layout by clicking on the “Layout” tab under Tasks. Alternatively, you can click on the same tab while editing a page.

Rain editor experience with Layout Builder

Rearranging Blocks

With Layout Builder, you have an instant preview of any blocks added to the page. That being said, it’s usually easier to move blocks around with preview turned off. Drupal provides a checkbox that makes it simple to toggle the preview on or off.

Rearranging blocks in Rain CMS

Adding Blocks

To add a block to the page, click the “Add block” link in any section. Rain CMS ships with 15 block types out of the box that you can easily drop onto the page. Each component has a preview wireframe and label to help the author understand the look and function of each component.

Adding blocks in Rain CMS

Layout Controls

One of the big benefits of Layout Builder is that you now have more control over the layout of a page. Editors can easily add new sections with various layouts where blocks can be placed. Layouts can be customized per project.

Adding sections in Rain CMS

Rain University CMS

The Mediacurrent team has also updated our RainU CMS to ship with Layout Builder. Same great experience, but tailored specifically for universities.

Rain University homepage layout

Want to Know More?

For developers, you can download and install Rain CMS with Layout Builder using our Drupal project template: https://bitbucket.org/mediacurrent/drupal-project/src/9.x/. Setup and installation remain the same, with detailed instructions provided in the project README file.

We are also happy to demo Rain University or Rain CMS for organizations interested in partnering with Mediacurrent for your next redesign project. To schedule a free demo, please visit our contact page or chat with us right now (see bottom right corner of the page). 

Mar 24 2021
Mar 24

This week in the Drupalverse we are attending MidCamp! We’re in-kind sponsors offering a series of workshops to help you improve your skills with local development as well as some prizes for the raffle. MidWest Drupal Camp traditionally takes place in Chicago, but in March 2020 the organizers made a rapid shift to a virtual event, for which we are very grateful.

The camp focuses on bringing everyone on board, starting with free “how to be a speaker” workshops on Wednesday, plenty of regular and unconference sessions, and local development and contribution workshops. Between the virtual platform and pay-what-you-can tickets, we hope this gives more folks an opportunity to participate, learn, and share their knowledge.

The camp is over, and the sessions are available to watch on YouTube! Thank you Kevin Thull for making the recordings happen.

Importance of local development environment for Drupal contributions

In order to contribute to Drupal as an open source project or to work on any web development project as a contributor to the code, you’ll want to be able to run a copy of the project locally. Historically, folks used the built-in LAMP stack on Mac, or worked with MAMP, or any number of other tools. Lately, the community has been focused on Docker-based tools because of containerization and the ability to provide simple, user-friendly commands.


For MidCamp we’ll be using the latest release of the Quicksprint package, which you may download and install in advance or just download and wait for the workshop to walk through the details. 

MidCamp DDEV schedule

The many ways to contribute to Drupal

For a non-code contribution overview, join AmyJune on Thursday at 11 CT. Then on Saturday, join the first-time contributor workshop to learn more about the Drupal issue queue and how to work with others. You’ll learn how marketers, project managers, organizers, designers, and writers (among many others) can bring their valuable skills to the project.

Read more about the who, how, and why of Drupal contributions as presented by AmyJune at Florida Drupal Camp 2021.

Get the full scope of contribution opportunities at MidCamp on Saturday. Recording here.

Now that you’re set up with DDEV-Local (and hopefully had a chance to try out those Drupal contributions), what else can you accomplish? Since you’re likely already tracking your project with Git, it’s easy enough to push the repository to GitHub or GitLab. From there, you can start collaborating with other folks, and reference that repository from other tools for testing, CI/CD, or push to a hosting provider. 

DDEV offers production hosting on DDEV-Live. You can create a new project directly in the DDEV UI online, or from the command line, by referencing your hosted Git repository. Read more on deploying here.

DDEV-Live includes the ability to create preview sites on the platform, regardless of whether you use it for production hosting. That means you can call a command in a comment on a pull or merge request and instantly spin up a preview site. Read more about DDEV Preview here.

All parts of the DDEV platform can be used independently of each other to piece together your preferred tools and workflow. Use Lando with Tugboat and DDEV-Live, use DDEV-Local with DDEV Preview and Pantheon. We love to hear about your unique strategy, please tag #DDEV/@ddevHQ to share with the global community!


Mar 23 2021
Mar 23
[embedded content]

Don’t forget to subscribe to our YouTube channel to stay up-to-date.

File Management Series

Drupal doesn’t support the ability to replace existing files. You can create and delete files, but you can’t replace a file without using a module. If you try to upload a file with the same name, Drupal will append “_0”, “_1”, and so on to the filename, incrementing the suffix each time.

Luckily two modules can help with replacing files, and they’re called Media Entity File Replace and File Replace.

What is the main difference between Media Entity File Replace and File Replace?

Both modules will replace “files” whilst retaining the original file’s filename by performing an “overwrite” function in the backend. In Drupal, an uploaded “file” can be a “media entity” or a “file entity”.

As the module names suggest, the main difference is that Media Entity File Replace works at the media entity level, whereas File Replace works at the file entity level.

Media Entity File Replace

This module allows editors to replace files at the media entity level by overwriting the existing media files. Because it overwrites the files, the filename and path are exactly retained.

You should use this module to allow content editors to replace media entity files whilst keeping the same filename and path as the original file.

Getting Started

1. Download and enable the module. This module requires no additional libraries and can simply be installed with Composer, which is the recommended way.

To install using Composer:

composer require drupal/media_entity_file_replace

If you get the following error when using Composer:

  [InvalidArgumentException]
  Could not find a version of package drupal/media_entity_file_replace matching your minimum-stability (stable). Require it with an explicit version constraint allowing its desired stability.

Then specify the version:

composer require drupal/media_entity_file_replace:^1.0-beta3

To enable the module using Drush:

drush en media_entity_file_replace

2. Clear the cache. You can easily clear the cache via the Drupal admin UI at admin/config/development/performance or with the Drush command:

drush cr

The next step is adding a media field to an existing content type and uploading an actual media document. If you already have a media field in your content type and an uploaded media file, you can skip to step 3.

A. For any content type (we will use the default Drupal Article content type in this tutorial), add a Media field and check “Document” as the Reference type. See Image 1 below.
B. Go to Content >> Media and click on “Add Media”. At the next page, select “Document” and proceed to upload a file. Note the filename.

Image 1 – Enable “Document” reference type for the newly created Media field in the Article content type

3. Now enable the “Replace file” widget that comes from the Media Entity File Replace module.

Go to Structure >> Media Types >> Document >> Manage Form Display and enable the “Replace file” widget. This widget is disabled by default. See Image 2 below. You will not be able to replace a file until you enable this widget.

It’s a good idea to position the “Replace file” widget right under the file widget as shown in Image 2 so that when a user is editing a file, the “Replace file” widget is right under the file that they want to replace.

Image 2 –  Enable the “Replace file” widget in the Document media type found under “Manage Form Display”.

Without this module and widget enabled, the default Drupal interface will appear as in Image 3.

Image 3 – The default Drupal 8 interface to replace a media file.

4. Now go to Content >> Media and edit the document media that we uploaded earlier. You will see the “Replace file” widget as shown in Image 4 below. Notice that the default Drupal widget is automatically removed (see the difference between Image 3 and Image 4).

Image 4 – The “Replace file” widget coming from the Media Entity File Replace module

5. Go ahead and replace the original uploaded file with a new file that has a different filename. Upon replacing the file, if you left the default option “Overwrite original file” checked, Drupal will upload the new file and mark the old file for deletion. Even though your new file has a new name, Drupal stores it under the filename of the original file. Note that it matches the filename you noted when first uploading the document.

If you click on the new uploaded file even after doing a refresh in the browser, you may still see the old content of the original file. You must do a hard refresh in your browser to see the contents of the newly uploaded file.

More information: How to do a hard refresh in Chrome, Firefox and IE.

File Replace

This module allows editors to replace files at the file entity level by overwriting the existing files. Because it overwrites the files, the filename and path are exactly retained.

You should use this module if you want to allow content editors to replace file entities whilst keeping exactly the same filename and path as the original file. It is useful in cases where existing files in Drupal need to be updated occasionally.

Getting Started

1. Download and enable the module. This module requires no additional libraries and can simply be installed with Composer, which is the recommended way.

Using Composer:

composer require drupal/file_replace

To enable the module using Drush:

drush en file_replace

2. Clear the cache. You can easily clear the cache via the Drupal admin UI at admin/config/development/performance or with the Drush command:

drush cr

3. There is one extra manual step that you must do. The module provides a “Replace” page for each uploaded file, but it does not link to this page from the Drupal UI. You can access the page directly by typing the following URL into the browser:

admin/content/files/replace/{{ fid }}

Where {{ fid }} is the file id of the file you want to replace. For example, if the fid was 6, then you would go to:

admin/content/files/replace/6
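If you are generating this link in code rather than typing it, the same path can be built from a loaded file entity. A minimal sketch, assuming $file is a loaded Drupal\file\Entity\File:

use Drupal\Core\Url;

// Illustrative only: build the replace-page URL for a loaded file entity.
$url = Url::fromUserInput('/admin/content/files/replace/' . $file->id());
$replace_url = $url->toString();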

Typing the URL by hand is not ideal, so we will show you how to link to the page from the default Drupal Files overview page. (The outcome we are trying to achieve is shown in Image 11.)

Go to Content >> Files and click on “Edit view” as shown in Image 5.

Image 5 – On the Files overview page, click on the Edit icon to the top right to edit the View.

Alternatively, you can go to Structure >> Views and edit the Files view as shown in Image 6.

Image 6 – Editing the Files view found on the View listing page.

4. Add a Custom text field to the Files View as shown in Image 7.

Image 7 – Adding a Custom text field to the Files view.

5. Then add the custom text that will serve as the link text. See Image 8.

Image 8 – Adding custom text for the link anchor text

Also, edit the “Rewrite Results” section and enter the following in the “Link path”:

admin/content/files/replace/{{ fid }}

You can also enter custom text in the “Title text” field although this is just for aesthetic purposes. See Image 9 below.

Image 9 – Rewriting the results of the link to include a relative path.

Now click on “Apply (all displays)” and save your View.

6. The last step is to enable the “Replace files” permission for the role that you want. See Image 10.

Image 10 – Enable the permission “Replace files” for your role.

7. Now go to the Files overview page at Content >> Files and you will see a “replace” link as shown in Image 11.

Image 11 – Showing the custom “replace” link created from editing the Files view

Click on the “replace” link next to the file you want to replace and this will take you to another page where you can upload a new file as shown in Image 12.

Image 12 – The “Replace file” page provided by the File Replace module.

Common Pitfall – Caching

Static files served by Drupal are usually cached externally (outside of Drupal) by services such as Varnish, a Content Delivery Network (CDN), and the user’s browser. By default, a user’s browser will not pick up the content of the newly uploaded file until the max-age header expires, which Drupal’s default .htaccess sets to two weeks for static files. In other words, if you replace a file and do nothing, end users will see the new file content after two weeks (unless they do a hard refresh locally).

Clearing Drupal’s internal cache will not solve this caching caveat because Drupal is not involved with serving static files. This is the job of your webserver.

There are a few options worth mentioning, such as purging the file’s URL from your CDN or reverse proxy after each replacement, or appending a cache-busting query string to file URLs.
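The query-string approach can be prototyped with hook_file_url_alter. This is a minimal sketch, not part of either module, and the module name is hypothetical; it assumes files on the public:// scheme:

/**
 * Implements hook_file_url_alter().
 *
 * Rewrites public file URLs to include the file's modification time so
 * browsers re-fetch replaced files instead of waiting out the max-age.
 */
function mymodule_file_url_alter(&$uri) {
  if (strpos($uri, 'public://') === 0) {
    $wrapper = \Drupal::service('stream_wrapper_manager')->getViaUri($uri);
    $realpath = \Drupal::service('file_system')->realpath($uri);
    if ($wrapper && $realpath && file_exists($realpath)) {
      $uri = $wrapper->getExternalUrl() . '?v=' . filemtime($realpath);
    }
  }
}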

Summary

Drupal does not retain filenames for uploaded files that are replaced or overwritten. Additionally, Drupal stores uploaded files as either media entities or file entities, depending on your setup. This tutorial demonstrated how to retain existing filenames when replacing media or file entities using two contributed modules: Media Entity File Replace and File Replace. These modules allow you to overwrite existing media and files respectively, whilst keeping their original filenames.

Mar 19 2021

rocket blasts off against a starry sky

DrupalCon North America 2021 is right around the corner! Check out the ever-growing schedule of sessions, industry summits, and special events on the official event site, and mark your calendar for these Mediacurrent sessions. 

Our Sessions 

The Mediacurrent team is proud to support this community event as a platinum sponsor. We’ll be presenting several sessions at this year’s online conference.

Whether you’re a site builder scaling up with multisite, a marketing leader in search of current guidance on open source security, or a Drupal community member of any kind looking for inspiring real-world case studies, we’ve got you covered. 

Here’s what we have in store for sessions and case studies in Drupal innovation:

Unlock The Power of Multisite 

DrupalCon program speaker card with Jay's headshot

Join Jay Callicott, Mediacurrent’s VP of Technical Operations, for a comprehensive approach to managing your Drupal sites at scale.

Interested in evaluating multisite options for your organization? Jay will cover several ways to scale your Drupal platform from one site to many dozens or even hundreds.

Register here to join the session and learn best practices for governing multiple sites from one codebase, how to configure a multisite installation, and considerations for your hosting solution. 

Open Source Security for CMOs

DrupalCon program speaker card with Mark and Krista's headshots

As open source software continues to become widely adopted, adhering to security standards is becoming more challenging. So what's a CMO to do?

Inspired by our ebook, The CMO’s Guide to Open Source Security, this session will help you navigate the terminology, expectations, and tools to ensure security is a priority for your web properties.

You can register here to join the session led by Mediacurrent’s resident Drupal security experts Mark Shropshire and Krista Trovato.

Case Study: Habitat for Humanity 

Imagine a world where everyone has a decent place to live. That’s the vision fueling Habitat for Humanity to create ambitious digital experiences with Drupal. 

This session will present a case study covering how Drupal is being used to bring mission-driven innovation to reality for this international nonprofit. Both Drupal site builders and non-technical roles are encouraged to attend. 

You can register here to join the session led by Mediacurrent Project Manager Vicky Walker and two members of Habitat for Humanity's web team.  

Drupal for Higher Education 

The year 2020 called for higher ed leaders to accelerate digital marketing strategies. For many, Drupal was a key part of the equation. This rang true among a spectrum of Mediacurrent’s higher education partners, including an Ivy League university that chose a decoupled architecture for its breakthrough knowledge platform.

Dan Polant, Director of Development at Mediacurrent, will share that story in a co-presented session at the Higher Education Summit. The session will explore the University’s driving mission to build toward a brighter financial future on a Drupal and React-based platform.

Join the session on April 20 at the Higher Education Summit.

Connect with Us

There's more to come! Check back for Rain CMS demos, Drupal 9 info sessions, giveaways, and more coming soon to our DrupalCon 2021 event page.

Mar 15 2021
Pierce Lamb

This blog covers how the Personalized Paragraphs module was built; if you’re looking for how to use Personalized Paragraphs, check out the How To Use Personalized Paragraphs blog. If you have any questions, you can find me @plamb on the Drupal Slack chat; there is also a #personalized_paragraphs channel.

In 2020, I was tasked by my organization with finding or developing a personalization solution for our Drupal-based website. By personalization, I mean a tool that will match anonymous users into segments and display a certain piece of content based on that segmentation. Brief searching led me into the arms of Smart Content, a platform for personalization developed by the clever folks over at Elevated Third. Smart Content is a toolset for managing segmentation, decisions, reactions, etc., all within the Drupal framework. As a general platform, it makes no assumptions about how you want to, say, display the content to the user or pass results back to your analytics platform. However, it comes with a number of sub-modules so you don’t need to develop these solutions on your own. Out of the box, Smart Content includes ‘Smart Content Block’, which allows you to utilize Drupal’s Block interface to manage your personalized content. There were a number of reasons this was a good idea, but it also presented some difficulties (at least for us).

After installing Smart Content, the most straightforward way to use personalized blocks was to create a Smart Content Decision Block in the block layout builder. However, to get control over where the block was placed (i.e. instead of in a region across many pages), we needed to disable the block, load it independently in a preprocess and attach it to the relevant page’s theme variables; a bit cumbersome. I recognize that there are other options like Block Field out there, but this appeared to be the most out-of-the-box way to use Smart Content Block. As a block-based solution, we found that we had to make changes to the blocks on prod then drag the changes back to our development branches and environments because exporting block config would cause UUID issues on merge. As our use cases grew, this became more cumbersome. In addition, my organization heavily leans on Paragraphs to power content inside of Nodes (and very sparingly uses blocks). After about 6 months of using Smart Content we decided we should see if we could utilize Paragraphs to power personalization.
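For context, the preprocess workaround looked roughly like the sketch below. The block ID and theme hook are illustrative, not from our actual codebase:

use Drupal\block\Entity\Block;

/**
 * Implements hook_preprocess_page().
 *
 * Loads a disabled decision block and hands its render array to the page
 * template, so we control exactly where it appears.
 */
function mytheme_preprocess_page(&$variables) {
  if ($block = Block::load('homepage_decision_block')) {
    $variables['personalized_banner'] = \Drupal::entityTypeManager()
      ->getViewBuilder('block')
      ->view($block);
  }
}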

The funny thing about Paragraphs is that they don’t ‘float free’ of Nodes in the same way that blocks do; at their core they are referenced by Nodes. Or at least so I thought. When we discussed using paragraphs, I did some brief research and saw that others had successfully attempted porting Smart Content to Paragraphs. Upon testing this module, I found that it relied on an old, fairly different version of Smart Content and also included a lot of extra code relevant to that organization’s use case. Further, it lacked the extremely well-thought-out interface for adding a segment set and reactions that’s contained in Smart Content Block. However, the key insight its authors included was the use of the Paragraphs Library. Paragraphs Library is an optional sub-module of Paragraphs that was quietly added in 2018 and allows users to create Paragraphs that ‘float free’ in just the way we’d need to personalize them. With this in hand, I thought I would try porting the experience of Smart Content Block to Paragraphs.

The Supporting Structure

The porting process began by digging into the smart_content_block sub-module of Smart Content. The entry point is Plugin/Block/DecisionBlock.php, which appeared to be an annotated Block plugin. When constructed, it had a control structure which created a further plugin, ‘multiple_block_decision’, which I found defined in Plugin/smart_content_block/Decision. Further, in one of MultipleBlockDecision’s functions, it creates an instance of the display_blocks plugin, which is defined in Plugin/smart_content_block/Reaction.

I knew that these three files must work together to create the nice user experience that administering smart_content_block currently has. So I set about emulating them, but with Paragraphs instead of blocks.

Paragraphs did not come pre-packaged with an obvious Annotation plugin to achieve what I wanted, so I created one, seeking to mimic the one included with Blocks, and defined it in the Annotation/PersonalizedParagraph.php file. With this in hand I could now create a Plugin/PersonalizedParagraph/DecisionParagraph.php that mimicked smart_content_block’s DecisionBlock:

/**
 * Class Decision Paragraph.
 *
 * @package Drupal\personalized_paragraphs\Plugin\PersonalizedParagraph
 *
 * @PersonalizedParagraph(
 *   id = "personalized_paragraph",
 *   label = @Translation("Personalized Paragraph")
 * )
 */

However, before I defined DecisionParagraph, I knew I needed to extend something similar to BlockBase and implement ContainerFactoryPluginInterface, just like DecisionBlock.php does. I opened Core/Block/BlockBase.php and attempted to mirror it as closely as I could in personalized_paragraphs/Plugin/PersonalizedParagraphsBase.php. Here is the comparison between the two:

abstract class BlockBase extends ContextAwarePluginBase implements BlockPluginInterface, PluginWithFormsInterface, PreviewFallbackInterface {

  use BlockPluginTrait;
  use ContextAwarePluginAssignmentTrait;
  ...
}

abstract class PersonalizedParagraphsBase extends ContextAwarePluginBase implements PersonalizedParagraphsInterface, PluginWithFormsInterface, PreviewFallbackInterface {

  use ContextAwarePluginAssignmentTrait;
  use MessengerTrait;
  use PluginWithFormsTrait;
  ...
}

And other than cosmetic function name changes, the classes are largely the same. They implement PluginWithFormsInterface which is defined as:

Plugin forms are embeddable forms referenced by the plugin annotation. Used by plugin types which have a larger number of plugin-specific forms.

Which certainly sounds like exactly what we need: a way to plug one form into another. You may have noticed one difference, though: I had to create an interface, PersonalizedParagraphsInterface, to mirror BlockPluginInterface. Again, these two files are largely the same; I’ll leave it to the reader to check them out.
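For orientation, here is a stripped-down sketch of what such an interface looks like when mirroring BlockPluginInterface; the exact parent interfaces and methods in the module may differ, so treat this as illustrative:

use Drupal\Component\Plugin\DerivativeInspectionInterface;
use Drupal\Component\Plugin\PluginInspectionInterface;
use Drupal\Core\Cache\CacheableDependencyInterface;
use Drupal\Core\Form\FormStateInterface;

/**
 * Sketch: BlockPluginInterface minus the block-specific methods.
 */
interface PersonalizedParagraphsInterface extends PluginInspectionInterface, DerivativeInspectionInterface, CacheableDependencyInterface {

  /**
   * Builds the configuration form embedded in the paragraph widget.
   */
  public function buildConfigurationForm(array $form, FormStateInterface $form_state);

  /**
   * Handles submission of the embedded configuration form.
   */
  public function submitConfigurationForm(array &$form, FormStateInterface $form_state);

}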

At this point, I now had the beginning of a DecisionParagraph.php and the files that back it: Annotation/PersonalizedParagraph.php, PersonalizedParagraphsBase.php and PersonalizedParagraphsInterface.php. Since DecisionParagraph.php is a plugin, I knew I’d need a Plugin Manager as well. My next step was to create /Plugin/PersonalizedParagraphsManager.php. This file is as default as it gets when it comes to a Plugin Manager:

class PersonalizedParagraphsManager extends DefaultPluginManager {

  /**
   * Constructs a new PersonalizedParagraphsManager object.
   *
   * @param \Traversable $namespaces
   *   An object that implements \Traversable which contains the root paths
   *   keyed by the corresponding namespace to look for plugin implementations.
   * @param \Drupal\Core\Cache\CacheBackendInterface $cache_backend
   *   Cache backend instance to use.
   * @param \Drupal\Core\Extension\ModuleHandlerInterface $module_handler
   *   The module handler to invoke the alter hook with.
   */
  public function __construct(\Traversable $namespaces, CacheBackendInterface $cache_backend, ModuleHandlerInterface $module_handler) {
    parent::__construct(
      'Plugin/PersonalizedParagraph',
      $namespaces,
      $module_handler,
      'Drupal\personalized_paragraphs\Plugin\PersonalizedParagraphsInterface',
      'Drupal\personalized_paragraphs\Annotation\PersonalizedParagraph'
    );

    $this->alterInfo('personalized_paragraphs_personalized_paragraphs_info');
    $this->setCacheBackend($cache_backend, 'personalized_paragraphs_personalized_paragraphs_plugins');
  }

}

You can see in its constructor that it gets created with all the key files we need to create the DecisionParagraph plugin. Note that a plugin manager also requires a module.services.yml entry, which I defined as follows:

services:
  plugin.manager.personalized_paragraphs:
    class: Drupal\personalized_paragraphs\Plugin\PersonalizedParagraphsManager
    parent: default_plugin_manager

I knew at this point I must be really close. However, if you recall the screenshot above, I was still missing mirrors of MultipleBlockDecision and DisplayBlocks. My next step was to create /Plugin/smart_content/Decision/MultipleParagraphDecision.php and /Plugin/smart_content/Reaction/DisplayParagraphs.php. While these files would get edited over the course of building Personalized Paragraphs, the class stubs would be identical. This is largely because Smart Content creates annotated plugin types for many of its core functions, which makes it extremely easy to extend. Comparing MultipleBlockDecision and MultipleParagraphDecision:

/**
 * Provides a 'Multiple Block Decision' Decision plugin.
 *
 * @SmartDecision(
 *   id = "multiple_block_decision",
 *   label = @Translation("Multiple Block Decision"),
 * )
 */
class MultipleBlockDecision extends DecisionBase implements PlaceholderDecisionInterface {
  ...
}

/**
 * Provides a 'Multiple Paragraph Decision' Decision plugin.
 *
 * @SmartDecision(
 *   id = "multiple_paragraph_decision",
 *   label = @Translation("Multiple Paragraph Decision"),
 * )
 */
class MultipleParagraphDecision extends DecisionBase implements PlaceholderDecisionInterface {
  ...
}

And this is isomorphic in the case of DisplayBlocks.php and DisplayParagraphs.php. With MultipleParagraphDecision and DisplayParagraphs in place, I just had to change where they were created: multiple_block_decision became multiple_paragraph_decision in DecisionParagraph, and display_blocks became display_paragraphs in MultipleParagraphDecision. At this point, my /src/ folder structure was very close to the structure of smart_content_block. Okay, so now I have all this plugin code defined, but how will Drupal know when and where to create instances of personalized_paragraph?

When and Where does this run?

The first step was to create a Paragraph Type called ‘Personalized Paragraph’. Simple enough. At the time I created it, I did not think it would need any fields, but as we will see later, it eventually did. The Personalized Paragraph type would be the entry point for a Paragraph inside a node to basically say, “hey, I’m going to provide personalized content.”

Our first ever use case for personalized content was our homepage banner, so to test my code, I created another Paragraph type called Personalization — Homepage Banner (the reason for this naming convention is that you can imagine many personalization use cases all being grouped together by starting with ‘Personalization -’). The key switch I needed to flip in creating this test Paragraph was this:

“Allow adding to library” meant that this specific Paragraph Type could have members created in the Paragraphs Library that ‘float free’ from any node. With that flipped, I just needed to mirror the fields that produce our homepage banner in this Paragraph Type. Now I could load the Paragraphs Library at /admin/content/paragraphs and create every personalized paragraph I needed to support personalizing the homepage banner. This step is discussed in more detail in the How To Use Personalized Paragraphs blog.

Now, in order to test the ‘when and where’ question above, I loaded our ‘homepage’ content type and added a new field, ‘Personalized Banner’ that referenced a ‘Paragraph’:

And in the Paragraph Type to reference I selected Personalized Paragraph:

The Personalized Banner field was now telling our homepage node that it would contain personalized content. With this structure in place, I could now programmatically detect that a personalized_paragraph was being edited in the edit form of any homepage node and displayed when a homepage node was viewed. Further, I’d be able to use the Paragraphs I’d added to the library to display when different Smart Content segments were matched.

The Form Creation Journey

I wanted to get the node edit form working first, so in personalized_paragraphs.module, I needed to detect that a Paragraph of type personalized_paragraph was in a form. I created a:

function personalized_paragraphs_field_widget_entity_reference_paragraphs_form_alter(&$element, FormStateInterface &$form_state, $context){...}

Which is a form_alter hook that I knew would run for every Paragraph in a form, so I immediately needed to narrow it to personalized_paragraph Paragraphs:

$type = $element['#paragraph_type'];
if ($type == 'personalized_paragraph') {
  ...
}

So I was hooked into any form that contains a Personalized Paragraph. This captures the ‘when and where’ that I needed to load the plugin code defined above. So the next step was to load the plugin inside our control structure:

if ($plugin = personalized_paragraphs_get_handler('personalized_paragraph')) {
  $build_form = $plugin->buildConfigurationForm([], $form_state);
  $element['subform']['smart_content'] = $build_form;
}

And the code for _get_handler:

function personalized_paragraphs_get_handler($plugin_name) {
  $plugin_manager = \Drupal::service('plugin.manager.personalized_paragraphs');
  $definitions = $plugin_manager->getDefinitions();

  foreach ($definitions as $plugin_id => $definition) {
    if ($plugin_id == $plugin_name) {
      return $plugin_manager->createInstance($plugin_id);
    }
  }
  return FALSE;
}

So what’s going on here? Well, we know we’re acting on the form that builds a Paragraphs edit interface. Once we know that, we can go ahead and load the Annotation plugin we defined in the beginning (personalized_paragraph) using the custom plugin manager we defined (plugin.manager.personalized_paragraphs). This will give us an instance of DecisionParagraph. With that instance, we can call DecisionParagraph’s buildConfigurationForm method passing it an empty array. When it returns, that empty array will be a filled render array which mirrors the smart_content_block user experience exactly, but within a Personalized Paragraph. So all we need to do is attach it in its own key (smart_content) to the element’s ‘subform’ and it will display in the right area.

So what is happening inside buildConfigurationForm? I won’t go too in depth here, as most of this simply mimics smart_content_block. Suffice it to say that when the DecisionParagraph is constructed, an instance of MultipleParagraphDecision is also constructed, and ->buildConfigurationForm ends up being called in both classes. You can view the code in each to get a sense of how the form render array is built. Now, with this code in place, we end up with an experience exactly like smart_content_block, but inside a Paragraph inside a Node; this is what the personalized paragraph in my homepage type looked like:

This is ultimately what anyone who has used smart_content_block would want out of a Paragraphs-based version. Since we had been using smart_content_block, we had a number of Segment Sets already to test from. Here is the result of selecting our Homepage Customer Block Segment Set:

I would like to digress for a moment to discuss one of the most difficult bugs I encountered in the process. Getting the ‘Select Segment Set’ ajax to work was an absolute journey. On first implementation, the returned content was an empty element whose class name led me to ManagedFile.php, a class that provides an AJAX/progress-aware widget for uploading and saving a file. This was odd, of course, because this element was not an upload/file widget; however, this particular Node edit form did have elements like that on the page. After stepping through execution in both Symfony and core’s FormBuilder, here is what I discovered (line 1109 of FormBuilder):

// If a form contains a single textfield, and the ENTER key is pressed
// within it, Internet Explorer submits the form with no POST data
// identifying any submit button. Other browsers submit POST data as
// though the user clicked the first button. Therefore, to be as
// consistent as we can be across browsers, if no 'triggering_element' has
// been identified yet, default it to the first button.
$buttons = $form_state->getButtons();
if (!$form_state->isProgrammed() && !$form_state->getTriggeringElement() && !empty($buttons)) {
  $form_state->setTriggeringElement($buttons[0]);
}

In short, I was pressing ‘Select Segment Set’, the triggering element wasn’t being found as the form was rebuilt in FormBuilder, and the code was just setting it to the first button found on the page (hence ManagedFile.php). I have no objection to the comment or the reason for this code block, but it makes it extremely difficult to figure out why your AJAX button isn’t working. If, for example, it triggered a log statement inside the if that said something like “the triggering element could not be matched to an element on the page during form build”, it would have saved me multiple days of pain.

FormBuilder attempts to match the triggering element by comparing the name attribute of the pressed button to the name attributes of buttons on the page as it rebuilds the form. The issue was occurring because smart_content_block creates the name from a UUID it generates when MultipleBlockDecision is created. In Personalized Paragraphs, this creation occurs inside a form alter hook, which is called again while the form is rebuilt. As such, a new UUID is generated, and FormBuilder cannot match the two elements.

The solution was to create a name that is unique within the edit form (so it can be matched), but does not change when the form is rebuilt. I added this above ->buildConfigurationForm:

$parent_field = $context['items']->getName();
$plugin->setConfigurationValue('parent', $parent_field);
$build_form = $plugin->buildConfigurationForm([], $form_state);
$element['subform']['smart_content'] = $build_form;

The machine name of the field that contains the personalized paragraph is passed along via configuration values in DecisionParagraph to MultipleParagraphDecision, where it is extracted and used to create the name attribute of the button. This solved the issue.
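To make that concrete, here is a hypothetical sketch of the receiving side; the element keys and callback name are illustrative, not copied from the module:

// Build the AJAX button with a name that is unique per parent field but
// stable across form rebuilds, so FormBuilder can match the triggering
// element. 'parent' was set from the field machine name in the form alter.
$parent_field = $this->getConfiguration()['parent'];
$form['select_segment_set'] = [
  '#type' => 'submit',
  '#value' => $this->t('Select Segment Set'),
  '#name' => $parent_field . '_select_segment_set',
  '#ajax' => [
    'callback' => [$this, 'selectSegmentSetAjax'],
    'wrapper' => $parent_field . '-segment-set-wrapper',
  ],
];

Okay, now back to the returned Reactions.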

The class that builds the Reactions after a Segment Set is selected is DisplayParagraphs; an instance is created for each Reaction. The code that executes this is found in MultipleParagraphDecision, inside the stubDecision() method, and in the buildSelectedSegmentSet method if the Reactions already exist. The Reactions are the first place we depart from the smart_content_block experience.

Seasoned users of smart_content_block will notice that the ‘Add Block’ button is missing. One of the most difficult problems I encountered while porting smart_content_block was getting the ajax buttons in the form experience to work correctly. Because of this, I opted to just hide them here (by commenting out the code that built them in DisplayParagraphs.php) and instead validate and submit whatever is in the select dropdown at submission time. I liked the simplicity of this anyway, but it means that a given reaction can never contain more than one paragraph. This is an area ripe for contribution inside personalized_paragraphs.

In order to populate the select dropdowns in the Reactions, I first needed to go create some test Paragraphs Library items that would exist in them. I loaded /admin/content/paragraphs, selected ‘Add Library Item’ and then Add -> ‘Personalization — Homepage Banner’ (the Paragraph I created earlier to mimic the content I’m personalizing). I created a few instances of this Paragraph. Now I could go back to DisplayParagraphs.php and figure out how to retrieve these paragraphs.

Looking at the buildConfigurationForm method, it was clear that an array of $options was built up and passed to the form render array, so I needed to simply create some new options. Since we’re dealing with ContentEntities now, this was pretty easy:

$pg_lib_conn = $this->entityTypeManager->getStorage('paragraphs_library_item');
$paragraphs = $pg_lib_conn->loadMultiple();
$options = [];
$options[''] = "- Select a Paragraph -";
foreach ($paragraphs as $paragraph) {
  $maybe_parent = $paragraph->get('paragraphs')->referencedEntities();
  if (!empty($maybe_parent)) {
    $parent_name = $maybe_parent[0]->bundle();
    $options[$parent_name][$paragraph->id()] = $paragraph->label();
  }
  else {
    $options[$paragraph->id()] = $paragraph->label();
  }
}

The code loads all of the existing paragraphs_library_items and splits them by Paragraph Type for easy selection in the dropdown which is how it works in smart_content_block. $options is later passed to a render array representing the select dropdown.

With this in place, we’re able to add a personalized_paragraph to a node, select a segment set, load reactions for that segment set and select the personalized paragraphs we want to display. Beautiful. What happens when we press Save?

The Form Submission Journey

Due to the way I was loading the Segment/Reaction form into the node edit form, none of the existing submit handlers were called by default. Thankfully the submit function attached to DecisionParagraph, paragraphSubmit, was designed in a way that it calls all the nested submit functions, i.e. MultipleParagraphDecision::submitConfigurationForm, which loops while calling DisplayParagraphs::submitConfigurationForm. So all I needed to do was attach paragraphSubmit as a custom handler like so:

function personalized_paragraphs_form_node_form_alter(&$form, FormStateInterface $form_state, $form_id) {
  $node = $form_state->getFormObject()->getEntity();
  $personalized_fields = _has_personalized_paragraph($node);
  if (!empty($personalized_fields)) {
    if ($plugin = personalized_paragraphs_get_handler('personalized_paragraph')) {
      _add_form_submits($form, $plugin);
    }
  }
}

For reference, _has_personalized_paragraph looks like this:

function _has_personalized_paragraph($node) {
  $fields = [];
  foreach ($node->getFields() as $field_id => $field) {
    $settings = $field->getSettings();
    $has_settings = array_key_exists('handler_settings', $settings);
    if ($has_settings) {
      $has_bundle = array_key_exists('target_bundles', $settings['handler_settings']);
      if ($has_bundle) {
        foreach ($settings['handler_settings']['target_bundles'] as $id1 => $id2) {
          if ($id1 == 'personalized_paragraph' || $id2 == 'personalized_paragraph') {
            array_push($fields, $field_id);
          }
        }
      }
    }
  }
  return $fields;
}

I’ll note here that it certainly ‘feels’ like there should be a more Drupal-y way to do this. I’ll also note that at the time of this writing, PPs have not been tested on Paragraphs that contain them more than one level deep; my sense is that this function would fail in that case (another area ripe for contribution to the module).

Okay, so now we know that when someone presses ‘save’ in the node edit form, our custom handler will run.

paragraphSubmit departs pretty heavily from DecisionBlock::blockSubmit. First, since a Node could have an arbitrary number of personalized paragraphs, we must loop over $form_state’s userInput and detect all fields that have personalized paragraphs. Once we’ve narrowed to just the personalized fields, we loop over those and feed their subforms to similar code that existed in DecisionBlock::blockSubmit.

paragraphSubmit narrows to the form array for a given personalized paragraph and then passes that array to DecisionStorageBase::getWidgetState (a smart_content class), which uses NestedArray::getValue(). Users of this function know you pass an array of parent keys and a form to ::getValue(), and it gives back null or a value. When I initially wrote this code, I hardcoded ‘0’ as one of the parents, thinking this would never change. However, one big difference between smart_content_block and personalized_paragraphs is that by virtue of being a paragraph, a user can press ‘Remove’, ‘Confirm Removal’ and ‘Add Personalized Paragraph’. In the form array that represents the personalized paragraph, pressing these buttons will increment that number by 1. So in paragraphSubmit, it will now have a 1 key instead of a 0 key. To handle this, I wrote an array_filter to find the only numerical key in the form array:

$widget_state = $form[$field_name]['widget'];
$filter_widget = array_filter(
  $widget_state,
  function ($key) {
    return is_numeric($key);
  },
  ARRAY_FILTER_USE_KEY
);
$digit = array_key_first($filter_widget);

$parents = [$field_name, 'widget', $digit, 'subform', 'smart_content'];

As noted in the code comments, this will fail if someone creates a field that holds multiple Personalized Paragraphs (array_key_first will return only the first one). This is another area ripe for contribution in Personalized Paragraphs.
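A plausible fix, sketched under the assumption that the surrounding submit logic can simply run once per delta, is to loop over every numeric key instead of taking only the first:

// Hypothetical sketch: process every paragraph delta in the field rather
// than only the first one returned by array_key_first().
foreach (array_keys($filter_widget) as $digit) {
  $parents = [$field_name, 'widget', $digit, 'subform', 'smart_content'];
  // ... run the existing widget-state and decision submission logic here.
}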

DecisionStorageBase::getWidgetState gets a decision storage representation from the form state and returns it. I added code here to ensure that the decision is always of type ContentEntity and not ConfigEntity (smart content defines both). Next, the code uses the $parents array and passed in $form variable to get the actual $element we’re currently submitting. It then runs this code:

if ($element) {
  // Get the decision from storage.
  $decision = $this->getDecisionStorage()->getDecision();
  if ($decision->getSegmentSetStorage()) {
    // Submit the form with the decision.
    SegmentSetConfigEntityForm::pluginFormSubmit($decision, $element, $form_state, ['decision']);
    // Set the decision to storage.
    $this->getDecisionStorage()->setDecision($decision);
  }
}

It’s easy to miss, but this line:

SegmentSetConfigEntityForm::pluginFormSubmit($decision, $element, $form_state, ['decision']);

is what submits the current $element to the submit handler in MultipleParagraphDecision, whose submit handler will ultimately call DisplayParagraphs’ submit handler ($decision in this case is the instance of MultipleParagraphDecision). So the chain of events is like this:

  • Node_form_alter -> add paragraphSubmit as a custom handler.
  • On submission, paragraphSubmit calls MultipleParagraphDecision::submitConfigurationForm (via ::pluginFormSubmit).
  • This function has a looping structure which calls DisplayParagraphs::submitConfigurationForm for each Reaction (via ::pluginFormSubmit).

Before completing our walk through of paragraphSubmit, let’s follow the execution and dive into these submit handlers.

MultipleParagraphDecision::submitConfigurationForm is largely identical to MultipleBlockDecision::submitConfigurationForm. It gets the SegmentSetStorage for the current submission and loops for each Segment, creating a DisplayParagraphs instance for that segment uuid. It achieves this by calling:

$reaction = $this->getReaction($segment->getUuid());
SegmentSetConfigEntityForm::pluginFormSubmit($reaction, $form, $form_state, [
  'decision_settings',
  'segments',
  $uuid,
  'settings',
  'reaction_settings',
  'plugin_form',
]);

Where $reaction ends up being an instance of DisplayParagraphs for the current segment uuid. ::pluginFormSubmit is called like above which calls DisplayParagraphs::submitConfigurationForm.

This function starts by calling DisplayParagraphs::getParagraphs which is used all over DisplayParagraphs and modeled after DisplayBlocks::getBlocks. Because the block implementation can use PluginCollections, it’s easy for getBlocks to grab whatever block information is stored on the current reaction. I could not find a way to emulate this with paragraphs, so I opted to get paragraph information directly from the form input. If you recall my solution to the ajax button matching problem above (passing the unique machine ID of the parent field backwards via config values), getParagraphs implementation will look familiar.

First, for any call to ->getParagraphs that is not during validation or submission, the caller passes an empty array, which tells getParagraphs to try to get the Reaction information from the current configuration values (i.e. while it is building dropdowns or sending an ajax response). Second, when called during validation or submission, the caller passes the result of $form_state->getUserInput(). After the non-empty passed array is detected, this code executes:

$field_name = $this->getConfiguration()['parent_field'];
$widget_state = $user_input[$field_name];
$filter_widget = array_filter(
  $widget_state,
  function ($key) {
    return is_numeric($key);
  },
  ARRAY_FILTER_USE_KEY
);
$digit = array_key_first($filter_widget);
$parents = [$digit, 'subform', 'smart_content', 'decision', 'decision_settings', 'segments'];
$reaction_settings = NestedArray::getValue($user_input[$field_name], $parents);
$reaction_arr = $reaction_settings[$this->getSegmentDependencyId()];
$paragraphs[$field_name][$this->getSegmentDependencyId()] = $reaction_arr;

getParagraphs extracts the machine ID of the current parent_field out of its configuration values and uses it to parse the UserInput array. The value is extracted similarly to paragraphSubmit (filter for a numeric key and call ::getValue()) and then an array of reaction information keyed by parent field name and current segment set UUID is created and passed back to the caller.

submitConfigurationForm then extracts the paragraph ID out of this array and creates an array that will store this information in the configuration values of this instance of DisplayParagraphs (highly similar to DisplayBlocks). At this point control switches back to MultipleParagraphDecision, and the $reaction variable now contains the updated configuration values. The reaction information is then set via DecisionBase::setReaction(), the ReactionPluginCollections config is updated, and the instance variable MultipleParagraphDecision->reactions is updated. Control then goes back to paragraphSubmit.

Before we step back there, I wanted to note that DisplayParagraphs::getParagraphs is another area ripe for contribution. I skipped over the first portion of this function; it is called in multiple areas of DisplayParagraphs to either get submitted form values (which we discussed) or to retrieve the existing values that are already in the configuration. As such, the function is built around a main control structure that branches on whether the user input is empty. This could definitely be done in a cleaner, more readable way.

Okay, back to paragraphSubmit. At this point we have completed everything that was called inside ::pluginFormSubmit which stepped through all of our nested submission code. The $decision variable has been updated with all of that information and the decision is now set like this:

// Submit the form with the decision.
SegmentSetConfigEntityForm::pluginFormSubmit($decision, $element, $form_state, ['decision']);
// Set the decision to storage.
$this->getDecisionStorage()->setDecision($decision);

Now that we have built the submitted decision, we need to save it and inform the paragraph it’s contained in that this decision is attached to it:

if ($this->getDecisionStorage()) {
  $node = $form_state->getFormObject()->getEntity();
  $personalized_para = $node->get($field_name)->referencedEntities();
  if ($personalized_para == null) {
    // Paragraphs never created and saved a personalized_paragraph.
    \Drupal::logger('personalized_paragraphs')->notice("The node: " . $node->id() . " has a personalized paragraph (PP) and was saved, but no PP was created");
  }
  else {
    $personalized_para = $personalized_para[0];
  }
  if (!$node->isDefaultRevision()) {
    // A drafted node was saved.
    $this->getDecisionStorage()->setNewRevision();
  }
  $saved_decision = $this->getDecisionStorage()->save();
  $personalized_para->set('field_decision_content_token', $saved_decision->getDecision()->getToken());
  $personalized_para->save();
  if ($saved_decision instanceof RevisionableParentEntityUsageInterface) {
    $has_usage = $saved_decision->getUsage();
    if (!empty($has_usage)) {
      $saved_decision->deleteUsage();
    }
    $saved_decision->addUsage($personalized_para);
  }
}

We first get the $node out of the $form_state; recall that we are inside a structure that is looping over all personalized fields, so we use $field_name to get the referenced personalized paragraph out of the field. In an earlier version, the paragraphSubmit handler ran before any other handler; because of this, on a new node, the paragraph had not been saved yet and ->referencedEntities() returned null. With it executing last this should never happen, but I left a check and a log statement just in case there is something I have not thought of. Next we check for defaultRevision so we can inform the decision content that it is part of a draft instead of a published node. Finally we save the decision, pass the returned token to the hidden field on the personalized paragraph that stores it, and then add to the decision_content_usage table, which tracks usage of decisions and their parents.

At this point we have handled the Create and Update states of a personalized paragraph inside a Node edit form. What about the read state? Now that we’ve attached the decision token to the decision_content_token field of our personalized paragraph, we can go back to our form alter hook and add:

$parents = ['subform', 'field_decision_content_token', 'widget', 0, 'value', '#default_value'];
$decision_token = NestedArray::getValue($element, $parents);
if ($decision_token) {
  $plugin->loadDecisionByToken($decision_token);
}

loadDecisionByToken is a custom function I added to DecisionParagraph.php that looks like this:

public function loadDecisionByToken($token) {
  $new_decision = $this->getDecisionStorage()->loadDecisionFromToken($token);
  $new_decision->setDecision($this->decisionStorage->getEntity()->getDecision());
  $this->decisionStorage = $new_decision;
}

In essence, this takes the attached decision_token, loads the decision it represents out of the database, and sets the decisionStorage inside DecisionParagraph to that decision. By virtue of doing this, when ->buildConfigurationForm is later called, it gives us back the form representing the segment set and reactions from the saved decision. Create, Read, Update… what about Delete?

When it comes to Paragraphs, delete is a fickle mistress. Because you can ‘Remove’, ‘Confirm Removal’, ‘Add’ a Paragraph and then save the Node, Paragraphs must create a new paragraph, which orphans the old one. The bright minds behind Entity Reference Revisions have created a QueueWorker that finds these orphaned paragraphs and cleans them up, kind of like a garbage collector. At the time of this writing, Personalized Paragraphs does not implement something similar, and this is yet another area ripe for contribution. For example, if one saved a Node with a filled decision in a personalized paragraph, then edited the node, removed/confirmed removal/added a new personalized paragraph, filled out the decision and saved, both decisions would still be in the decision_content tables. Now if one deletes that Node, the current personalized paragraph’s decision will be deleted, but the old one will not, essentially orphaning that old decision. Here is how delete currently works:

function personalized_paragraphs_entity_predelete(EntityInterface $entity) {
  if ($entity instanceof Node) {
    if ($fields = _has_personalized_paragraph($entity)) {
      foreach ($fields as $field) {
        $has_para = $entity->get($field)->referencedEntities();
        if (!empty($has_para)) {
          $has_token = !$has_para[0]->get('field_decision_content_token')->isEmpty();
          if ($has_token) {
            $token = $has_para[0]->get('field_decision_content_token')->getValue()[0]['value'];
            _delete_decision_content($token);
          }
        }
      }
    }
  }
}

In a hook_entity_predelete, we detect the deletion of a node with personalized paragraphs, iterate those paragraphs, and delete the decision represented by the token currently attached to each paragraph. So we’ll get the current decision token, but not any old ones. Given that the only way to change the segment set of an existing decision is to ‘Remove’, ‘Confirm Removal’, ‘Add’, this will likely happen often. The consequence is that the decision tables will grow larger than they need to be, but hopefully we, or an enterprising user, will create a fix for this in the near future.
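A cleanup could be modeled on the Entity Reference Revisions orphan purger. The sketch below assumes the decision storage is a regular entity storage exposing loadMultiple(), and reuses the getUsage() method seen earlier; the 'decision_content' entity type ID is a guess, so check the Smart Content source before relying on it:

/**
 * Implements hook_cron().
 *
 * Hypothetical orphan sweep: delete decisions no paragraph references.
 */
function personalized_paragraphs_cron() {
  $storage = \Drupal::entityTypeManager()->getStorage('decision_content');
  foreach ($storage->loadMultiple() as $decision) {
    if (empty($decision->getUsage())) {
      $decision->delete();
    }
  }
}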

Okay so we’ve handled adding the smart_content_block experience to any Node edit form with a personalized_paragraph. What about viewing a personalized_paragraph?

For viewing, we have the old tried-and-true preprocess hook to the rescue. We deploy a field preprocess hook so our code runs for every rendered Paragraph reference field, and we quickly narrow to only those paragraphs that reference a personalized_paragraph:

$parents = ['items', 0, 'content', '#paragraph'];
$para = NestedArray::getValue($variables, $parents);
if ($para->bundle() == 'personalized_paragraph') {
  ...
}

Next, we attempt to get the decision token out of the decision_content_token field so we can pass it to DecisionParagraph::build():

if ($para->bundle() == 'personalized_paragraph') {
  if ($plugin = personalized_paragraphs_get_handler('personalized_paragraph')) {
    $has_token = !$para->get('field_decision_content_token')->isEmpty();
    if ($has_token) {
      $token = $para->get('field_decision_content_token')->getValue()[0]['value'];
      $build = $plugin->build($token);
      ...
    }
  }
}

Where the build function looks like:

public function build($token) {
  $this->loadDecisionByToken($token);
  $decision = $this->getDecisionStorage()->getDecision();

  $build = [
    '#attributes' => ['data-smart-content-placeholder' => $decision->getPlaceholderId()],
    '#markup' => ' ',
  ];

  $build = $decision->attach($build);
  return $build;
}

Which is slightly modified from DecisionBlock::build(). We load the decision content that was attached to the personalized_paragraph, then call the DecisionBase::attach() function on that decision. This passes control to a number of functions that create the magic inside smart_content. When attach() returns, we are given an array that smart_content.js will process to decide on and retrieve a winning Reaction. To complete the function:

$has_token = !$para->get('field_decision_content_token')->isEmpty();
if ($has_token) {
  $token = $para->get('field_decision_content_token')->getValue()[0]['value'];
  $build = $plugin->build($token);
  $has_attached = array_key_exists('#attached', $build);
  if ($has_attached && !empty($build['#attached']['drupalSettings']['smartContent'])) {
    $variables['items'][0]['content']['#attributes'] = $build['#attributes'];
    $variables['items'][0]['content']['#attached'] = $build['#attached'];

    $para_data = [
      'token' => $token,
    ];
    $has_name = !$para->get('field_machine_name')->isEmpty();
    $name = $has_name ? $para->get('field_machine_name')->getValue()[0]['value'] : '';
    $variables['items'][0]['content']['#attached']['drupalSettings']['decision_paragraphs'][$name] = $para_data;
  }
}

We get the $build array back from ->build and verify that it has the appropriate attachments to run Smart Content. If it doesn’t, we log a statement noting that something in the build function has failed. If it does, we attach the correct pieces of the build array to our variables array. I want to focus on this code block to complete the discussion:

$para_data = [
  'token' => $token,
];
$has_name = !$para->get('field_machine_name')->isEmpty();
$name = $has_name ? $para->get('field_machine_name')->getValue()[0]['value'] : '';
$variables['items'][0]['content']['#attached']['drupalSettings']['decision_paragraphs'][$name] = $para_data;

This code block represents how my organization manages the front end of personalized paragraphs, and I’ll admit it’s an assumption on how you, the user, might want to manage it. If you’ve been following along, you’ll have noticed that pesky ‘Machine Name’ field I attached to personalized paragraphs. Here is where it comes into play. We extract the passed name, which should be unique to the page itself; that name and the decision_content_token are attached to drupalSettings so they are available to JavaScript files. With the name and token available in JavaScript, one can now:

a.) Detect that the decision paragraph loaded (is the decision_paragraphs key in drupalSettings? Does it contain this unique machine name?) and if not, ensure a default experience loads,

b.) Run javascript functions that display the winning experience or the default experience.

Since our method for managing the front end is beyond the scope of how personalized paragraphs was built, I’ll discuss it more in the How To Use Personalized Paragraphs blog.

There’s one more function to discuss that gets called as the front-end experience is being displayed, and that is DisplayParagraphs::getResponse. When smart_content.js selects a winner, it runs some ajax which calls ReactionController, which loads the winning Reaction and calls its ->getResponse method. I had to slightly modify this method from DisplayBlocks to deal with Paragraphs:

public function getResponse(PlaceholderDecisionInterface $decision) {
  $response = new CacheableAjaxResponse();
  $content = [];
  // Load all the paragraphs that are a part of this reaction.
  $paragraphs = $this->getParagraphs([]);
  if (!empty($paragraphs)) {
    // Build the render array for each paragraph.
    foreach ($paragraphs as $para_arr) {
      $pg_lib_conn = $this->entityTypeManager->getStorage('paragraphs_library_item');
      $para_lib_item = $pg_lib_conn->load($para_arr['id']);
      $has_para = !$para_lib_item->get('paragraphs')->isEmpty();
      if ($has_para) {
        $para_id = $para_lib_item->get('paragraphs')->getValue();
        $target_id = $para_id[0]['target_id'];
        $target_revision_id = $para_id[0]['target_revision_id'];
        $para = Paragraph::load($target_id);
        $render_arr = $this->entityTypeManager->getViewBuilder('paragraph')->view($para);
        $access = $para->access('view', $this->currentUser, TRUE);
        $response->addCacheableDependency($access);
        if ($access) {
          $content[] = [
            'content' => $render_arr,
          ];
          $response->addCacheableDependency($render_arr);
        }
      }
    }
  }
  // Build and return the AJAX response.
  $selector = '[data-smart-content-placeholder="' . $decision->getPlaceholderId() . '"]';
  $response->addCommand(new ReplaceCommand($selector, $content));
  return $response;
}

Instead of getting the content to send back from the BlockCollection configuration, I had to grab the ID stored in the config values, which loads a Paragraphs Library item. That item references a Paragraph, so I grab its ID and load the Paragraph. The render array is created from the loaded Paragraph and sent back to the ajax to be displayed.

Phew, somehow we’ve gotten to what feels like the end. I’m sure there is something I’m forgetting that I’ll need to add later. But if you’ve made it this far, you are a champion. I hope what the Smart Content folks have created, and my little extension of it, work for your use case, and that this blog has made you aware of how things work much more quickly than reading and debugging the code would have.

If you have any questions, you can find me @plamb on the Drupal Slack chat; there is also a #personalized_paragraphs channel and a How To Use Personalized Paragraphs blog.

Mar 11 2021

Chris Zietlow from Mindgrub gave his new talk on AWS: How an Online Retailer Came to Conquer the Internet. He explores the genesis of Amazon Web Services, how it became widely adopted, and gives a bird’s-eye view of some of the more common problems its services can solve.


If you would like to join us, please check out our upcoming events on Meetup for meeting times, locations, and remote connection information.

We frequently use these presentations to practice new talks, try out heavily revised versions, and test new ideas with a friendly audience. So if some of the content of these videos seems a bit rough, please understand that we are all learning all the time and are open to constructive feedback. If you want to see a polished version, check out our group members’ talks at camps and cons.

If you are interested in giving a practice talk, leave me a comment here, contact me through Drupal.org, or find me on Drupal Slack. We’re excited to hear new voices and ideas. We want to support the community, and that means you.

Mar 06 2021
Sean B

Responsive images have always been a pain to configure properly. In Drupal you can create your breakpoints in your theme or module and use the Responsive Image module to set up different responsive image styles, defining which image style to use for a specific breakpoint. This takes quite some work and planning to set everything up, and maintaining all the image styles if changes need to be made is always a pain. Most of the sites I have built lately also have a fluid design. Since the images are defined for fixed breakpoints, many of the images loaded for the user are still too big.

After struggling with this, we thought about how we could improve it. In HTML5 we can define a “srcset” attribute, which loads the correct image based on the browser’s viewport. The default “src” contains a really small version of the image for better performance. Also notice the HTML5 “loading” attribute, which enables lazy loading of images for even more optimization.

<img src="small.jpg" srcset="medium.jpg 350w, large.jpg 950w, huge.jpg 1450w" loading="lazy" alt="My pretty image" />

Since we were using media to add images to content, we experimented with media view modes defined by aspect ratio, combined with a bunch of different image styles for the images in that specific aspect ratio. The media template could provide all the image styles with different widths for that image style, and use the “srcset” attribute to let the browser pick the best image. So we now had image styles for a 4:3 ratio and a 16:9 ratio like:

  • responsive_4_3_50w
  • responsive_16_9_50w
  • responsive_4_3_150w
  • responsive_16_9_150w
  • responsive_4_3_1450w
  • responsive_16_9_1450w

For images maintaining their original ratio we just use the width, like 50w, 150w, etc. The media template for our “16_9” view mode (media--image--16-9.html.twig) now looked like this, using the “image_style” filter of the Twig Tweak module to load the actual image URLs for the image styles from the file:

{#
/**
 * @file
 * Default theme implementation to display an image.
 */
#}
{% set file = media.field_media_image.entity %}
{% set src = file.uri.value|image_style('responsive_16_9_50w') %}
{% set srcset = [
  file.uri.value|image_style('responsive_16_9_150w') ~ ' 150w',
  file.uri.value|image_style('responsive_16_9_350w') ~ ' 350w',
  file.uri.value|image_style('responsive_16_9_550w') ~ ' 550w',
  file.uri.value|image_style('responsive_16_9_950w') ~ ' 950w',
  file.uri.value|image_style('responsive_16_9_1250w') ~ ' 1250w',
  file.uri.value|image_style('responsive_16_9_1450w') ~ ' 1450w',
] %}
<img src="{{ src }}" srcset="{{ srcset|join(',') }}" loading="lazy" alt="{{ media.field_media_image.alt }}" />

The first problem we noticed was that the “srcset” attribute uses the viewport width, not the width of the image container. This means that when the viewport is 1400px and the image is shown in a column with a width of 200px, the image style with a width of 1400px is chosen by the browser. This was not giving us the result we were looking for. The only way to figure out the width of the container is via JavaScript, so we wrote a little script using ResizeObserver to figure out the available width for each image and load the correct image style. ResizeObserver does not work in IE11, but this was not a requirement for our project; besides, Drupal will drop IE11 support in Drupal 10! To prevent the browser from initially loading the large images from the “srcset” attribute, we changed the “srcset” attribute to “data-srcset” and let the JavaScript handle the rest.

// Fetch all images containing a "data-srcset" attribute.
const images = context.querySelectorAll('img[data-srcset]');

// Create a ResizeObserver to update the image "src" attribute when its
// parent container resizes.
const observer = new ResizeObserver(entries => {
  for (let entry of entries) {
    const images = entry.target.querySelectorAll('img[data-srcset]');
    images.forEach(image => {
      const availableWidth = Math.floor(image.parentNode.clientWidth);
      const attrWidth = image.getAttribute('width');
      const sources = image.getAttribute('data-srcset').split(',');

      // If the selected image is already bigger than the available width,
      // we do not update the image.
      if (attrWidth && attrWidth > availableWidth) {
        return;
      }

      // Find the best matching source based on actual image space.
      let source, responsiveImgPath, responsiveImgWidth;
      for (source of sources) {
        let array = source.split(' ');
        responsiveImgPath = array[0];
        responsiveImgWidth = array[1].slice(0, -1);
        if (availableWidth < responsiveImgWidth) {
          break;
        }
      }

      // Update the "src" with the new image and also set the "width"
      // attribute to easily check if we need a new image after resize.
      image.setAttribute('src', responsiveImgPath);
      image.setAttribute('width', responsiveImgWidth);
    });
  }
});

// Attach the ResizeObserver to the image containers.
images.forEach(image => {
  observer.observe(image.parentNode);
});

The second problem with this method was creating all the image styles we needed. This was fixed with a form to automatically create all the image styles for our aspect ratios. So we built the Easy Responsive Images module. The module needs a minimum and maximum width, in combination with a preferred number of pixels between each image style. An optional list of aspect ratios can also be defined. When the configuration is saved, the styles are automatically generated.
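Under the hood, generating a set of styles for one ratio boils down to creating ImageStyle config entities with a scale-and-crop effect. This is a rough sketch of the idea, not the module’s actual code; the widths follow the naming scheme above:

use Drupal\image\Entity\ImageStyle;

// Illustrative sketch: one scale-and-crop style per width for the 16:9
// ratio, matching the responsive_16_9_<width>w naming scheme.
foreach ([150, 350, 550, 950, 1250, 1450] as $width) {
  $style = ImageStyle::create([
    'name' => "responsive_16_9_{$width}w",
    'label' => "Responsive 16:9 ({$width}w)",
  ]);
  $style->addImageEffect([
    'id' => 'image_scale_and_crop',
    'weight' => 0,
    'data' => [
      'width' => $width,
      'height' => (int) round($width * 9 / 16),
    ],
  ]);
  $style->save();
}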

Now that we have the best possible images loaded based on the container, we can take one more step to improve the performance of our images. Using the Image Optimize module, we can create optimization pipelines that automatically apply to images displayed via image styles. We chose JpegOptim and PngQuant, supported via the Image Optimize Binaries module (the PreviousNext blog contains some more data on the module and results). If you cannot install those binaries on your server, there is also an ImageAPI Optimize GD module.

Then there is also the ImageAPI Optimize WebP module.

WebP is a modern image format that provides superior lossless and lossy compression for images on the web. Using WebP, webmasters and web developers can create smaller, richer images that make the web faster.

In our tests we found that for most images, WebP is about 30% to 50% smaller than JPG images. For PNG images the savings are even bigger. To easily load the WebP version of an image when a browser supports it, we created an “image_url” Twig filter in the Easy Responsive Images module, with added bonus support for external images via the Imagecache External module.

The final file for our media view mode using the JavaScript and the new Twig filter looks like this:

{#
/**
 * @file
 * Default theme implementation to display an image.
 */
#}
{{ attach_library('easy_responsive_images/resizer') }}
{% set file = media.field_media_image.entity %}
{% set src = file.uri.value|image_url('responsive_16_9_50w') %}
{% set srcset = [
  file.uri.value|image_url('responsive_16_9_150w') ~ ' 150w',
  file.uri.value|image_url('responsive_16_9_350w') ~ ' 350w',
  file.uri.value|image_url('responsive_16_9_550w') ~ ' 550w',
  file.uri.value|image_url('responsive_16_9_950w') ~ ' 950w',
  file.uri.value|image_url('responsive_16_9_1250w') ~ ' 1250w',
  file.uri.value|image_url('responsive_16_9_1450w') ~ ' 1450w',
] %}
<img src="{{ src }}" data-srcset="{{ srcset|join(', ') }}" alt="{{ media.field_media_image.alt }}" />

The example uses the same aspect ratio for all defined widths, but technically that is not a requirement. You can still serve a different aspect ratio for smaller or larger screens if the design calls for it, although that makes the setup a bit more complex and requires more view modes for your media.

That’s about it. Some next steps could be adding field formatters for the modules and figuring out support for retina images (even though those would increase the image sizes).

Hope this helps anyone looking to improve and optimise the way they implement responsive images in Drupal.

Mar 04 2021
Mar 04

Florida Drupal Camp this year took a slightly different shape as a virtual event (with fond memories of being there in person just a year ago) and the organizers and volunteers pulled it all off beautifully. The addition of a virtual world on gather.town really brought the event to life and enabled bumping into folks on the “hallway track” and greeting visitors in our booth. We were again very grateful to be able to sponsor this space for the community to come together to share, learn and celebrate.

Here are a few highlights from the event:

What is DevOps and are we really doing it?

During the DevOps summit I was invited to participate in a panel featuring Tess Flynn and Albert Volkman, with Mike Anello moderating. We chatted about whether process is the heart of DevOps, what exactly DevOps means, the role of automation, and of course some war stories. Watch the DevOps panel on Drupal.tv. My summary of DevOps was that it’s all about observing patterns, documenting repeatable steps, and automating them. It’s a blend of traditional development and operations functions with a culture of collaboration in a blameless, learning- and improvement-focused environment.

And it’s a lot easier said than done! But that’s what we aim to offer with DDEV: development and deployment tools that just work — and tap powerful container technology to help everyone achieve high value outcomes.

Demystifying Drupal Deployment Workflows

Our director of customer success, David Stoline, gave a run-through of the basic steps to take a Drupal project from repository to local development to deployment on production hosting.

  1. Get started by using Composer to create a site and pull all dependencies locally.
  2. Use Git to collaborate and work with a team; plus with your code centralized, you can connect your repository to hosting providers and CI/CD tools for automated testing and deployments.
  3. Avoid the “YOLO” workflow. “You only live once” is not a recommended approach to deploying a web project. Check out a Git tag for a site release to connect to the production environment. For example, to deploy a Drupal site to DDEV-Live from your tagged release `0.1.3` and run Composer install, use:
$ ddev-live create site drupal mysite \
--git-repo https://github.com/user/repo \
--git-rev 0.1.3 \
--docroot web \
--run-composer-install

Deployments are not as fragile as they used to be. You used to have to triple-check that your ducks were in a row, then keep a sharp eye on every one of them in case of failure, with only a 50/50 chance it would all come together successfully the first time. Thanks to DevOps, automation, and some nifty tooling, deployments are significantly less stressful these days.

A shared, live development environment with GitPod and DDEV

This is a super cool example of what open source can achieve: Ofer Shaal of Palantir.net has built a Gitpod environment with DDEV-Local. Gitpod is a tool that facilitates the creation of development environments online, in the cloud, that you can share and work in live with anyone anywhere. As Shaal said, “It takes advantage of everything DDEV offers to make life easy, and puts it in Gitpod so you can have the environment in your browser.” 

Watch the demo in the repo to see how it works once you’ve set things up. The environment is created automatically from any repository URL with gitpod.io inserted in front of it. Gitpod spins up and gives you an IDE and a terminal in the browser for that repository. Then you can run Composer commands, Drush commands, and all the usual DDEV commands, including Xdebug, and it’s all fast.
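For example, a repository at a hypothetical URL would open in Gitpod like this:

https://gitpod.io/#https://github.com/user/repo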

The goal is to allow Drupal contributors to open a Gitpod workspace using this and be able to collaborate and work without needing to install tools and software on their own machines. This might seem similar to projects like Simplytest.me, but here we don’t just preview a site, we have an entire suite of working tools. There’s a lot of potential to make things more accessible to more people. It also works with other local development environments, as it’s all based on Docker.

P.S. We added a Gitpod link to the DDEV-Local project.

Drupal Open Source Contribution Time

While the Drupal project as a global community has been organizing contributions remotely for decades (so weird to say that now that Drupal is 20), and has even run online office hours for new contributors (it’s how yours truly got started), moving in-person contribution workshops online has required some experimentation. We were happy to see folks stick around for this portion of the program on Saturday afternoon, where DDEV-Local maintainer Randy Fay helped mentor new contributors.

How can I contribute to open source?

Kudos and thank you to AmyJune, who has been a powerhouse of organizing and coordinating contribution time at so many events lately. Her presentation on getting started with contributions included:

  • Why contribute to open source? If you depend on open source, open source depends on you. Agencies should give back to avoid duplicating effort, and you can save others from having to redo work on a problem that has already been solved.
  • What types of contributions can I work on? There are so many ways to contribute without touching code, such as documentation, translation, design, project management, event organizing, and more. The wide variety of folks in the community should be represented in contributions so the project actually reflects the needs of everyone who uses it.
  • What do I need to get started? An understanding of the Drupal issue queue is important, and so is knowing how to communicate in issue comments and in Slack. Depending on the task, you might use integrated tools on Drupal.org right in the web browser, or you might need a local development environment. DDEV-Local is the recommended local development tool for Drupal, and is quick to start on Windows, macOS and Linux after installing Docker.

Want to learn more about how to use DDEV for your local development needs, regardless of where you deploy? Join DrupalEasy’s two-hour DDEV workshop on March 16 with Florida Camp organizer Mike Anello. You’ll also get access to loads of resources to keep building your skills independently.

So long and thanks for the alligator

Thank you again to the organizers and volunteers who made this year’s virtual Florida Drupal Camp possible! The session recordings are now available thanks to Kevin Thull and the Drupal Recording Initiative, which you can sponsor here.
