May 12 2021
[embedded content]

Don’t forget to subscribe to our YouTube channel to stay up-to-date.

When someone shares a link to your website on Facebook, Facebook lets you control how that content appears to others by parsing your site’s Open Graph (OG) markup.

With a few configuration changes on your Drupal site using the Metatag module and its Metatag: Open Graph submodule, you can define exactly what content is shown on Facebook, whether the link is shared from the desktop web, the mobile web, or a mobile app.

It is easier to explain by showing the end result of what we are trying to accomplish. For example, the homepage of www.webwash.net has the following OG markup inside the <head> element:
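
(The exact values on www.webwash.net will differ; the snippet below is an illustrative example of what OG markup in the <head> looks like.)

<meta property="og:url" content="https://www.webwash.net/" />
<meta property="og:type" content="website" />
<meta property="og:title" content="Example page title" />
<meta property="og:description" content="Example description shown in the Facebook preview." />
<meta property="og:image" content="https://www.webwash.net/example-image.jpg" />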

If someone posted www.webwash.net to Facebook, then Facebook would parse the OG markup like this:

And the end result would look like this:

You can clearly see the corresponding OG tags for the content.

If you want to learn more about OG tags, Facebook’s developer documentation provides a detailed list and explanation of each one.

In this tutorial, we are going to show you how to configure a content type to dynamically populate the Facebook OG tags using the Metatag module and the Metatag: Open Graph submodule.

Getting Started

The Metatag module depends on the Token module. However, if you download and enable the module using Composer and Drush, the dependency is handled automatically, as we will show you now.

Use Composer to download the module:

composer require drupal/metatag

Once the Metatag module is downloaded using composer, the Token module, which is a dependency, will be downloaded automatically.

Then enable the “Metatag: Open Graph” submodule:

drush en metatag_open_graph -y

The above Drush command automatically enables the Metatag: Open Graph submodule along with the Metatag and Token modules.

Finally, it is always a good idea to clear the cache after enabling Drupal modules:

drush cr

By default, Open Graph can be added to any content type. We will now configure Open Graph for the Article Content type.

1. Go to Configuration > Search and metadata > Metatag and click on “Add default meta tags”.

2. On the next page, select “Article” (or whatever content type you want to configure)  from the Type dropdown.

3. Then click on Save. This is required for the correct tokens to appear in the “Browse available tokens” window.

4. Edit the “Content: Article” configuration from the Metatag page.

5. On the same page, click to expand the Open Graph fieldset. You will see quite an exhaustive list of OG tags that you can populate. First, we are going to demonstrate how to populate the Image OG tag.

For this to work, your Article content type must have at least an image field. We will show you how to use a token to grab that image data.

Click on “Browse available tokens”.

From the Token window click on Nodes to drill down and find the content type fields.

NOTE: If you can’t see “Nodes”, this means you need to save the “default meta tag” option first then edit it again.

Fill in the following fields:

  • Content type: article
  • Page URL: [node:url]
  • Title: [node:title]
  • Description: [node:summary]
  • Image URL: [node:field_image] (adjust the field name accordingly)

Find Image Field Token

To use an image stored in an image field, click on “Browse available tokens”.

Then drill down by going to Nodes -> Image. This assumes you’re using the Image (field_image) field on the Article content type.

The token should be [node:field_image].

Find Image Field Token on Media Asset

If you’re using a media field instead of an image field for handling assets, then use the following token, [node:field_media:entity:thumbnail] (change the field_media name accordingly).
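
Once these tokens are in place and the configuration is saved, the rendered <head> of an article should contain OG tags along these lines (a hedged illustration; the actual domain, paths, and values depend on your site and content):

<meta property="og:url" content="https://example.com/node/123" />
<meta property="og:title" content="My Test Article" />
<meta property="og:description" content="The article's summary text." />
<meta property="og:image" content="https://example.com/sites/default/files/my-image.jpg" />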

6. After you fill out all of your OG tag fields, click on Save and clear the Drupal cache. If you do not clear the cache, the OG fields may not populate for existing content.

7. Once you have filled out the other OG fields with their respective token values, you should validate the resulting OG markup using the Facebook Sharing Debugger tool. We will now show you how to validate your OG markup.

NOTE: Your website has to be publicly accessible for the debug tool to work.

8. First, we need to create a test Article node (make sure to upload an image to the image field). Our test article looks like this:

9. Paste the URL for this node into the Facebook Sharing Debugger tool. The output should look like this:

As you can see, Facebook successfully parsed our OG markup and displayed the information correctly.

Summary

Facebook lets you control how your website’s content appears whenever someone shares a link to it, whether from the desktop web, the mobile web, or a mobile app. It does this by parsing the Open Graph (OG) markup provided by your website. We have shown how to use the Metatag module and the Metatag: Open Graph submodule to configure Drupal 8 to correctly generate the OG markup Facebook needs, and how to validate that markup to ensure Facebook parses your website content correctly.

FAQ

Q: I changed the default meta tag configuration, but the tags are not updating. Why?
A: Try clearing the site cache. Go to Configuration > Performance and click on “Clear all caches”.

About Editorial Team

Web development experts producing the best tutorials on the web.

May 11 2021

While BADCamp is taking a hiatus in 2021, the San Francisco Drupal community is still committed to providing the high-quality content we all know and love twice a month through SFDUG. We typically meet the second and fourth Thursdays of the month at alternating times to accommodate a wider audience who might be visiting us from other time zones. 

Our upcoming lineup:

May 13 - Building a High Availability Environment for Drupal with Azure

With Diego Tejera, CTO & Founder at Rootstack
Microsoft Azure has grown in the last few years and has proven to be a solid platform for deploying and scaling web applications. In this talk we are going to show how to easily build an HA environment using the different Azure services and set up automatic deployments using CI tools.

May 26 - Speaker Diversity Workshop

With AmyJune Hineline, Community Ambassador at Kanopi Studios

May 27 - ddev-gitpod a full dev environment in your browser

With Ofer Shaal, Drupal core maintainer
We are at the beginning of a new era, developer experience improvements allow us to focus on what's important. Come see how we change Drupal development and contributions as we know it.

June 10 - Upgrading from D7 to Backdrop CMS

With Jen Lampton co-founder of Backdrop
With Drupal 7 end-of-life nearing, it's time to decide what to do with your old Drupal 7 websites. Upgrading them to Backdrop CMS is a cost-effective way to bring them up to date, without needing to rebuild. Backdrop CMS is the Drupal fork: It's a faster and more feature-rich version of Drupal 7, but with all the bells and whistles you'd expect from a modern CMS. It's both easier to use, and more affordable to maintain!

June 24 - Concurrent Queue Workers for Drupal

With Simon Mora, Tech Lead at Rootstack
Several modules and implementations rely on the Drupal Queue API to process items. These items are usually processed by cron in a single process with an extremely limited throughput. In this talk, we're going to show how to reliably increase the queue performance using multiple workers with a pure-PHP solution and libraries like ReactPHP.

Workshops

In addition to the traditional presentations, this month we are also hosting a Speaker Diversity workshop on Wednesday, May 26, 2-5 pm PT. The goal of this workshop is to help speakers from underrepresented groups prepare to speak at conferences. This will, in turn, help local meetups, Drupal Camps, and even DrupalCons develop a more diverse speaker roster. The workshop empowers folks to bust through their impostor syndrome and develop a topic, title, pitch, bio, and outline.

Speak at SFDUG!

With that being said, we are always on the lookout for speakers. If you are interested in presenting at SFDUG, please reach out to AmyJune and Mark at sfdrupalusers @ gmail.com

May 11 2021

Here's an example of where "less is more" sometimes. I reckon we can make the video paragraph type in LocalGov Drupal simpler by just removing the template.

When looking recently at the template for the video paragraph type in LocalGov Drupal, I thought, "hang on, is this necessary?". As it turns out, we have a template for it, but I'm not sure it's needed.

One reason it must be there, however, is that we have set a template suggestion for it in the localgov_subsites_paragraphs.module file, so if the template is removed we will get a Twig error about an undefined template.
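
For context, this is roughly how a paragraph template suggestion is usually registered in a .module file. This is a hypothetical sketch, not the actual code from localgov_subsites_paragraphs.module, and the bundle and suggestion names are assumptions:

/**
 * Implements hook_theme_suggestions_paragraph_alter().
 */
function localgov_subsites_paragraphs_theme_suggestions_paragraph_alter(array &$suggestions, array $variables) {
  // The paragraph entity being rendered.
  $paragraph = $variables['elements']['#paragraph'];
  // Assumed bundle name: add a dedicated template suggestion for video paragraphs.
  if ($paragraph->bundle() === 'video') {
    $suggestions[] = 'paragraph__video';
  }
}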

Let's go through this template implementation step-by-step to show how sometimes less is more.

I've started streaming some of my work, especially if it's contributions to open source, on twitch if you'd like to follow along/subscribe. As it turns out, Twitch deletes your videos after 14 days, so I have also been uploading them to YouTube. Feel free to subscribe.

May 11 2021

Workflow automation is key in saving time, and money, in all aspects of a business. Accounting is one department that benefits from automation and our Quickbooks integration for Drupal can help remove manual data entry and free staff up to focus on other business objectives.

Automating accounting workflows with Quickbooks integration for Drupal

We all know as business owners that time is money. The more time we can save, the more time we have to work on other tasks for our business.

This is common sense, and is something every business owner should be looking to improve on every single day.

  • How can you save time?
  • What can you automate?
  • What type of processes can you create to become more efficient?
  • What systems and software can you use to become more organized?

These are things that you need to be looking at not only to save time for you and your staff, but so you can get things done right the first time. Having solid processes, automation and efficiency is how you create a smooth-running business. That's why we decided to help out.

How Acro Media is contributing to workflow automation

While it’s hard for us to know what specific needs, systems and processes your business would need in order to save time, we have been looking for a solution to help the majority of our clients.

One category we know every business could use some help on, and by help I mean automate, is accounting and bookkeeping. This can be a huge time suck.

Manually inputting product orders, customers, refunds, credit memos, and invoicing takes time. If you use Quickbooks and don’t have any type of automation or integration set up with your Drupal site, these tasks can eat up a lot of manpower every month. Manpower that could be used to hunt new accounts, research better products or services, and in general find ways to make your business better, instead of doing menial tasks.

Not to mention, if you are doing all of these tasks manually, there is always a chance for human error, and we know all humans make mistakes. (Frantically searches for typos, again.)

This is why Acro Media has made an effort to develop a module that can integrate and connect your Drupal site straight to your Quickbooks account. This module eliminates any chance of human error and will save you a HUGE amount of time every month as all your data from your Drupal site will be automatically pushed into your Quickbooks account.

If your business is a little unique or you require functionality that our module doesn't currently have, we can build it for you. Gotta love open source!

To get a better visual idea of what our new Quickbooks module can do, have a look at the infographic below.

[Infographic: Acro Media Quickbooks integration for Drupal]

How much time can you expect to save?

While there is no way for us to determine how much time you can save each month, we have put together a rough formula to see how this module can help you and your team save countless hours every single month.

We calculated the median time spent manually creating the following bookkeeping essentials:

  • Creation of an invoice or sales receipt: 2.5 min/order
  • Manual credit memo or credit note: 1 min/order
  • Payment processing or annotation: 1 min/order

Time saved = [number of orders per month] × 4.5 min

So, let’s turn that formula into a good old grade-school word/math problem:

You have a healthy ecommerce business, and your online store alone is generating 2,500 sales a month.

2,500 x 4.5 min is equal to 11,250 minutes a month. Or, 187.5 hours a month.

That means you are freeing up 1 full-time staff member and 1 part-time staff member every month by having your ecommerce site integrated and talking directly to Quickbooks. Not to mention the reduction in human error, the better data you will get out of your sales reports and the time your staff can now dedicate to anything other than manual data entry!

What’s next?

If you're looking for ways to save time in your business and focus on what matters most, reach out to our team and let’s see if this module is a right fit for your business.

Talk soon, and remember... time is of the essence.

Still have questions? Check out our business automation page >

Editor’s Note: This article was originally published on August 24, 2016, and has been updated for freshness, accuracy and comprehensiveness.

May 11 2021
Easy Pathauto Module Install and Configuration

 

https://www.drupal.org/project/pathauto

The Ctools and Token modules are required:
https://www.drupal.org/project/ctools
https://www.drupal.org/project/token


About the Pathauto Module

The Pathauto module generates URLs for your content without requiring you to enter the path alias manually. In other words, if the title of your new blog post is “My Big Cat” then Pathauto will set the URL to

yourDrupal8site.dev/my-big-cat

instead of

yourDrupal8site.dev/node/23.

Putting the right words in the URL is great for SEO, so this module is essential to your project. If you don’t use the Pathauto module, you must remember to create every single content URL on your website manually.
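
To make the connection concrete, here is a rough sketch of a pattern you might define on Pathauto's Patterns tab (the pattern itself is an illustrative assumption, not a required setting):

Pattern: [node:title]
Applies to: Content » Article
Result: a post titled "My Big Cat" gets the alias yourDrupal8site.dev/my-big-cat. A pattern such as blog/[node:title] would instead produce yourDrupal8site.dev/blog/my-big-cat.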

Install and Enable the Pathauto Module

WARNING:
If you have an existing site with the Pathauto module installed and enabled, before making any setting changes, you’ll want to check with your developer and content creators to make sure that any existing paths do not get changed, which can create problems with your SEO.

  1. Install the Pathauto module and the required Chaos Tools and Token modules on your server. (See this section for more instructions on installing modules.)
     
  2. Go to the Extend page: Click Manage > Extend (Coffee: “extend”) or visit https:///admin/modules.

    Drupal Pathauto Installation
     

  3. Select the checkbox next to Pathauto and click the Install button at the bottom of the page.

    NOTE: You may get a message asking for your permission to install the Chaos Tools and Token modules. If you do, click the Continue button.

    Drupal Chaos Tools Module Installation

Permissions

If necessary, give yourself permissions to use the Pathauto module.

  1. Click Manage > People > Permissions (Coffee: “permissions”) or visit https:///admin/people/permissions.

    pathauto permissions screen
     

  2. Select the appropriate checkboxes for:
    • “Administer pathauto”
    • “Notify of Path Changes”
       
  3. Click the Save permissions button at the bottom of the page.

Configure the Pathauto module

The Pathauto module adds four tabs to the URL aliases admin page. They are Patterns, Settings, Bulk generate, and Delete aliases. We only discuss Patterns and Settings in this book:

pathauto installs these tabs
  1. Go to the Pathauto admin page (Coffee: “URL aliases”), visit https:///admin/config/search/path/patterns, or:
     
    1. Click Manage > Extend.
    2. Search for Pathauto from the module list.
    3. Expand the Pathauto module section by clicking on the down arrow in the description:

      pathauto configuration link
       

    4. Click on the Configure link within the expanded description area.
       
  2. Click on the Settings tab.
     
  3. Scroll down to the Update action section and select “Create a new alias. Delete the old alias.” This will ensure that when titles get changed, the URL will change accordingly.
     
  4. Select the Reduce strings to letters and numbers checkbox. While not necessary, if you tend to use punctuation or special characters in your blog and page titles, it’s best to make sure they get changed to something more basic and easily readable.
     
  5. If you changed anything, make sure to click the Save configuration button at the bottom of the page.
     
  6. WAIT! You aren’t done -- Scroll back down to the Update action section and click on the Redirect module settings link or visit https:///admin/config/search/redirect/settings.
     redirect module settings link within pathauto
  7. Make sure your settings match the following (see the Redirect module settings screenshot):
    • Select the Automatically create redirects when URL aliases are changed checkbox.
    • Select the Retain query string through redirect checkbox.
    • Set Default redirect status to “301 Moved Permanently”.
       
  8. After any changes, be sure to click the Save configuration button at the bottom of the page.

NOTE: We’ll investigate the rest of the Pathauto module settings in the next section.

Did you like this walkthrough? Please tell your friends about it!


May 11 2021

If you have a business to run, being active on social media isn't a choice anymore. You need social media to put yourself out there, because that’s where your audience is. There’s an increasing need to connect all the social media platforms with your website. Integrating social media with your website not only improves the user experience, it also makes it easier for users to share your content within their networks. Drupal, as usual, offers plenty of options to make your site more social, and improved collaboration between your Drupal site’s users means better engagement of visitors with your site.

Social Media Integration Modules

In this article, we have curated a list of the top 9 social media integration modules in Drupal 9 (also compatible with Drupal 8) ordered by their popularity.

AddToAny Share Buttons Module

With this module, you can harness AddToAny's universal sharing buttons. These vector SVG buttons look great on any background; they're lightweight and scale to fit even high-PPI screens. The module integrates easily with your Drupal website and is optimized to load asynchronously. With a minified script cached and served instantly from a CDN, this module is a great choice for adding social sharing buttons to a Drupal website.

Drupal Addtoany Share Buttons

Image credits: Drupal.org

ShareThis Module

The ShareThis module is based on the Drupal 5 Share project and integrates the ShareThis widget into the node types on which you want to display it. It is extremely flexible and can be customized using the API. You can place the block anywhere on your Drupal website, and it can also be configured to display the number of shares the page currently has.

ShareThis Preview

Image credits: Drupal.org

Social Media Share Module

This Drupal module lets users share the current webpage to various social media platforms like Facebook (share and Messenger), Twitter, LinkedIn, Pinterest, an email client, and even WhatsApp. It is also flexible enough to add more platforms of your choice. The Social Media Share module also lets you modify or disable services from the config page, and it can be added as a field on an entity, leveraging the Field API.

Shariff Social Media Buttons Module

The Shariff social media buttons module for Drupal integrates the Shariff social media buttons library to offer a safe way to add social sharing buttons to a Drupal website. We call it safe because, unlike other social sharing widgets, this module does not leak users' personal data, inject iframes, or call external JavaScript. Once installed, it implements the JavaScript library and lets you display the buttons as a block or a field.

Shariff Module Buttons

Image credits: Drupal.org

Ridiculously Responsive Social Sharing Buttons

As the name very clearly suggests, this Drupal module lets you add social sharing buttons to your website that are ridiculously responsive! They are SVG-based sharing icons, very lightweight, and compatible with most browsers. You do not need to add any third-party scripts to use this module, and it can be added as a block or at the end of certain node types. The buttons come as share buttons but can also be turned into follow buttons, giving users an easier way to follow you on social media. The module is highly customizable: you can configure which buttons to display, the button sizes, the number of rows, a prefixed text (follow/share) that resizes automatically, and more.

Social Sharing Module

Image credits: Drupal.org

Social Auth Google Module

This popular module lets users register and log in to a Drupal website via their Google account. It adds a login path (user/login/google) that redirects users to log in with Google. The module is part of the Drupal Social Initiative, whose primary goal is harmonizing social networking functionality in Drupal. It is based on the Social API, which blends authentication with external services - in this case, Google.

Social Auth Google

Image Credits: Drupal.org

Social Feed Module

If you want to display a live feed from your social media pages on your Drupal website, this module is just the one for you. The Social Feed module for Drupal 8 and 9 lets you pull data from your pages on Facebook, Twitter, and Instagram into your website. Configuration is simple: you enter the page name, App ID, secret key, and user token, which you can find easily through your social media page. You can then display the feeds using the Drupal block system.

Like & Dislike Module

Want to show your users how many readers liked and disliked your article? The Like and Dislike module for Drupal enables a like and dislike widget wherever you want to place it on your page. It uses the Voting API to store, retrieve, and tabulate votes. It works for bundles and entity types and comes with a settings page where you can configure which bundles the widget should work for.

Like and Dislike

Image credits: Drupal.org

Twitter Embed Module

The Twitter embed module for Drupal allows you to embed a Twitter timeline or button to your website. You can expose Twitter widgets by embedding them as a block or a field. When you embed it as a block, you can choose to display your Twitter profile, list, collection, or likes from your Twitter page.

Twitter Embed Grid Timeline

Image credits: Drupal.org

May 10 2021

This year, the experience at DrupalCon Europe will be the most comprehensive ever. What does that mean?

  • One major Drupal community event for all - DrupalCon Europe 2021 will be one big event that brings together the European Community (and beyond) by hosting regional camps as part of the main event. A single DrupalCon ticket will give you access to all DrupalCamps.
  • Camp experiences customized - Different camps at DrupalCon Europe will craft their own experiences, host their own sessions, interviews or case studies.

We all want to make the most out of Drupal’s anniversary at DrupalCon, in spite of global lockdown. In fact, there are numerous opportunities and benefits of doing a DrupalCon in such a format:

  • 1 ticket, multiple experiences
  • Experience your local event and DrupalCon 
  • Meet more new people going to camps
  • Catch up with people that go to DrupalCon
  • Less screen time: 1 event, shorter days, different format, more fun

We've been in discussion with various DrupalCamps to make sure the camp experience will be top notch. The following camps and/or local associations are already involved and are currently discussing their participation at DrupalCon Europe:

In order for this ambitious joint project to succeed, and for DrupalCon to unite experts from around the world on ever-exciting digital experiences, we need all of your help!

Whether you are a member of a local association or an event organizer, a member of a partner company, or a contributor, you can help in many ways:

We hope to see you virtually soon!

DrupalCon Europe 2021 Advisory Committee with Kuoni Congress

May 10 2021

Lynette has been part of the Drupal community since Drupalcon Brussels in 2006. She comes from a technical support background, from front-line to developer liaison, giving her a strong understanding of the user experience. She took the next step by writing the majority of Drupal's Building Blocks, focused on some of the most popular Drupal modules at the time. From there, she moved on to working as a professional technical writer, spending seven years at Acquia, working with nearly every product offering. As a writer, her mantra is "Make your documentation so good your users never need to call you."

Lynette lives in San Jose, California where she is a knitter, occasionally a brewer, a newly-minted 3D printing enthusiast, and has too many other hobbies. She also homeschools her two children, and has three house cats, two porch cats, and two rabbits.

May 10 2021

Digital experiences have become an integral part of everyone’s life today. Both the providers of those experiences, the businesses, and their receivers, the users, benefit a great deal from a sound and seamless digital experience.

For the business to provide an impressive digital experience and for the user to enjoy it, a DXP, or Digital Experience Platform, is all but necessary. By managing multiple channels, devices and every user touchpoint, the software can deliver personalised user experiences. Its forte is its ability to converge multiple technologies to provide the best possible experience to the user, making their journey a memorable one. And since it uses the latest technology, the people building the experience are also pretty gleeful, since they get to explore whatever is new.

So, if I had to define a DXP, I’d say it is a platform that is equipped to handle and deliver all the digital experiences of an organisation and perpetually work towards enhancing them. And if I had to cite a formal definition of a DXP, I would quote Gartner:

A DXP is an integrated set of core technologies that support the composition, management, delivery and optimization of contextualized digital experiences.

In this blog, we’ll be talking about these Digital Experience Platforms and all that they come equipped with, however, we’ll be focusing on a particular category, which is open source DXPs. Before I get into that it would be more appropriate to compare open source DXPs to their counterparts, which would be proprietary. Let’s get started.

Open Source DXP in Comparison with Proprietary DXP

[Image: a table showing the different aspects of a digital experience. Source: Infosys]

Based on the above image, you can clearly see that a DXP encompasses every aspect of the digital experience that an organisation may want to be taken care of. 

From user touchpoints to targeted campaigns; 
From content services to social services; 
From commerce to searches; 

DXPs are pretty versatile in their offerings. 

Now that we have a basic understanding of what a DXP is and what it does, let’s look at two of its major classifications, being open source and proprietary software. 

In essence, the difference between open and closed source software lies in ownership. Open source by definition means a platform that is open to everyone: anyone can use it, anyone can contribute to it, and there are no restrictions in the form of licensing fees or other charges. Proprietary software, or a proprietary DXP in our case, is closed in plain terms: it is used by people who pay for it, and the majority of the development comes from the proprietor itself.

Talking in terms of DXP, there are three main differences between an open source and a proprietary DXP apart from the licensing fees and charges. Let’s look at them. 

Parameter      | Open Source DXP                                                             | Closed DXP
Licensing fees | Not required; it is free to use                                             | Mandatory
Choice         | Abundance of choices; you can pick and choose the features you want to use | Limited; you have to use the DXP in the areas it was already optimised for
Integrations   | Integrates seamlessly with other software                                   | Integrations are minimal
Growth         | Faster growth rate due to heightened flexibility                            | Fast, but not as fast as open source


Open VS Closed Choice

The first major difference between these two categories is the level of choice they offer. With an open source DXP, the choices are plenty: you are never locked inside the software, and you do not even have to use all of it; you can simply select the aspects that need improvement and leave the rest. There is also the option of customisation: you can build on top of the DXP you are using without anyone questioning you. You could use it for one of the following, or all of them.

Web content management; 
Web personalisation; 
Data and analytics; 
Marketing automation.

With a closed DXP, by contrast, the choice is pretty limited. You choose a closed vendor and you are locked inside their software; there isn’t much leeway to explore outside that locked space. You are all in.

Open VS Closed Integrations 

The next difference concerns integrations. An open source DXP can also be described as a combination of multiple products provided by multiple vendors. Using a DXP like Drupal, which is open source, you can integrate your digital experience with a platform like Salesforce with ease.

For a closed DXP, the story is a little different: integrations are pretty limited because it is meant to be a one-stop destination. You will find the majority of the services a DXP can provide, but they’ll come from that one vendor only.

A closed DXP is like one single brand, wherein you’ll find the products and services of that brand only, while the open source is like a supermarket, wherein you can find the products of almost all the major brands.

Open VS Closed Growth 

Finally, open and closed DXPs also grow at different rates. This is because of the kind of flexibility and contributions each has.

The open source DXP is immensely flexible because of its open foundation. It can always adapt to changing market requirements, so its growth is seamless and quick. Add to this the fact that there is an entire community contributing and making the necessary improvements to the software.

In comparison, closed DXPs also evolve continually, and improvements are certainly made to the software. However, because it is not open, its flexibility isn’t as high as its counterpart’s. Perhaps that is why open source DXPs are making strides into closed spaces as an alternative.

Why should you choose Open Source DXP?

So, you know that an open DXP is quite different from a closed one, and you can only pick one of them for your organisation’s digital presence and consumer experience. Based on the previous section, I am sure you have come closer to a decision. To make it even easier, let me tell you about the benefits of an open source DXP.

Open Source is malleable to the future 

The only thing we can say for certain about the future is the fact that it is uncertain and that there is high plausibility of change. Future cannot be predicted, so we should not even try. However, what we can do is make ourselves malleable enough to adapt to whatever the future might hold. 

And that is what open source DXP entails with its API-first infrastructure that has the ability to integrate with the future. Of course, by future I mean the tools and frameworks that would reign in the future. Open DXP’s technology stack is indeed ready for the future because it is always ready to adapt. 

An open source DXP’s malleability also helps in improving the consumer experience, because it makes it easy for businesses to create solutions in accordance with changing consumer needs. Such a solution provides the most optimal results when it is created quickly, and here the open architecture comes in extremely handy.

Finally, with open source DXP you have the power to eliminate and replace certain parts of the platform that either do not align with your strategy or are hindering your growth. This makes it highly efficient in the worst of times. To know more, read about the impact of open source in Covid-19, how open source remains recession-free and why large enterprises are leaning towards open source.

Open source lets you innovate to your heart’s desire 

Collaboration; 
Partnership; 
Shared goals; 
United ambitions; 

All of these mean mutual success, all of these mean more power for better innovations and all of that is what open source stands for. Open source is where innovation thrives. Let’s understand the why. 

Have you heard of JAMstack? It stands for JavaScript, APIs and Markup. Back in 2018, it made quite a noise in the realm of marketing-site theming, and ever since it has become a go-to design stack for developers. This stack of modern technologies gives developers room to innovate and spurs them on constantly; the result is enhanced developer velocity. It creates just the right environment, with its new tools and quickness, for more innovation.

If I were to talk in plain terms, I'd say that because open source does not confine the developers with one vendor and only its resources, there is a lot of freedom to innovate. It creates an open culture that allows the people in it to build as much as they want individually and with partnerships through the open connections of open source.

Open source elevates consumer experiences 

Today consumers do not have just one single touchpoint with the business. They can connect to it through mobile applications, IoT, voice assistants and even chatbots. And the consumer expects context awareness in all of these. So, what is the solution?

With an open DXP, you can deliver content that is created by content authors without special emphasis on the channel it’ll be used on. This sounds bad, but it, in fact, is a good thing as it allows content creators and front-end developers to create pieces that are aligned with the target audience’s needs.

Once a piece of content is created, it can be repurposed for the platform and its context can also be altered and further delivered across various channels. This eases the complexity of working with an open framework and makes content quite composable. Moreover, the open APIs make the centralisation of content consistent throughout the channels and devices. 

The result is a holistic consumer experience that has the consumer in control of his interaction with you.

Open source empowers consumer data 

Every time consumers interact with you, they leave some information behind. It’s like meeting someone in real life: with every encounter you get to know the person a little more. You can’t ask the person to introduce themselves every time you meet; that would be rude on so many levels. Yet this is what happens when your digital presence has consumer data scattered across departments in data silos that are often inaccessible.

An open source DXP comes equipped with an open Customer Data Platform, which would help you capitalise on the consumer information you already have with you. Consumer data from e-commerce, from customer support systems and CRMs gets accumulated and organised at one place, so that you can actually utilise the information you have about your consumers. These consumer insights are immensely helpful in building forward and an open DXP helps in that. 

Open source has room for microservices 

Imagine many small services, each communicating through lightweight mechanisms such as an HTTP resource API, running together within an organisation as one single application. This is called a microservices architecture, and it is something an open source DXP provides.

You might ask: what is the point?

The point lies in the three benefits you will get from this architecture.

  1. You’ll get a better handle on your technology solutions; 
  2. You’ll be able to enhance your productivity because of that; 
  3. And you’ll also be able to achieve better scalability as the architecture will fit perfectly to your long term organisational goals. 

Open source caters to commerce needs 

For the gazillionth time, because an open source DXP is open, it can better cater to your commerce needs. Let’s understand this with a comparison to its closed counterpart. 

A closed DXP would have an already established, integrated and rigid commerce platform that you cannot mess with, while an open source DXP could integrate itself with any commerce platform that you want based on your needs. 

So, when the need to add a commerce capability to your digital platform arises, which it will, you will be glad to have chosen an open source DXP. Open source is never tightly coupled, so you don’t have to compromise and adjust to the rigidity of a particular platform; instead, you can stand your ground and adopt the commerce platform that aligns with your needs.

Open source offers better security 

Security breaches and data leaks are far too common for anyone’s liking. And when you consider a software that is open to everyone, the threat of these security attacks should be all the more obvious, right? Wrong. 

Open source DXPs tend to be more secure than closed DXPs, and the reason is precisely their openness. With an open DXP, you are always aware of vulnerabilities and can take action to fix them, while a closed DXP vendor, being proprietary, may choose not to disclose those vulnerabilities simply because it would lose money. You decide which is better.

Open source saves money 

When you choose an open source DXP, you choose to save money. This is because of two reasons. 

First, it acts as a central management tool that helps you manage multiple sites while giving you the flexibility, security, control and efficiency you need. Once you have that, there is minimal duplication of resources, whether in IT or marketing, and that makes it a worthy investment.

Secondly, an open source solution is a combination of microservices and SaaS, which essentially translates to you paying for only the services you choose and not a penny more.

How does Drupal fare as an Open DXP?

We know what a digital experience platform is, and we know what an open source digital experience platform is; now we’ll talk about a particular DXP that is open source, and that is Drupal. Having worked with Drupal myself, I can say for sure that it is up there on the list of most impressive DXPs, and being a fan, I have to talk about it.

An open source platform that has been around for two decades and is still going strong, Drupal is meant to create the most amazing digital experiences, and the support and contributions of its vast community make that a possibility every day.

If I had to measure Drupal against the benefits of an open source DXP discussed in the previous section, I’d say it fares pretty well. Let’s see why.

API-first build  

The best part about Drupal as an open DXP is its API-first architecture. This makes the creation of multi-channel experiences quite fulfilling. With an API-first approach, Drupal can decouple itself and provide room for new technologies like Vue, React and Angular. You will have the option of selecting the best-in-class products to integrate and build your digital infrastructure, adding the freedom of innovation for your developers. 

Extensible 

Drupal can very easily accommodate your needs and goals, and that is what makes it extensible. Whether yours is a small organisation or a global one, Drupal will help you architect your digital presence the way you want, build it, and expand it as you keep growing.

Drupal’s modular architecture makes it all the more extensible. With upgrades being equivalent to installing a new module, building and improving a Drupal site’s digital experience is never going to be a mountainous task. 

Then there is the fact that Drupal, as an open source, allows customisation. You can very easily build your own DXP on top of Drupal. There are umpteen open source DXP vendors, who have actually done that. 

Community 

Drupal has a community of over a million people and growing. With that many Drupalists in the world, you can be assured there is an answer to any Drupal conundrum you may be stuck on. Upon raising an issue, you can expect Drupal veterans and contributors to respond and help you out. The contributions from Drupalists not only make Drupal a successful DXP, but also ensure that it is reliable.

Here is a video that will help you understand the role of Drupal as an open source DXP. 

[embedded content]


Nobody can deny that Drupal is a powerful DXP; however, claiming that it has no flaws would be unfair too.

  • Certain advanced features, like creating customer profiles or building a neural network, are difficult tasks;
  • A/B testing and personalisation are also things Drupal isn’t renowned for;
  • A CDP, an important practical capability for a DXP, is missing out of the box.

I wouldn’t say that these features make Drupal unworthy or hard to work with, not in the least. However, they are something that need to be mentioned. 

Conclusion 

Having a digital presence has become pivotal today. However, having a digital presence that does not leave its mark on your consumer is as good as not having one. A DXP is what will help you leave that mark and draw your target audience towards you. And if that DXP is open source, the mark will be all the deeper, you’ll be all the more delighted because of it, and I am pretty sure you’d like that.

May 06 2021

It was around this time last year when the Drupal organization, in the midst of Covid-19 upheaval and uncertainty, decided to defer the Drupal 7 end-of-life date from Nov. 2021 to Nov. 2022. Twelve months fly by fast, and here we are, with many Drupal 7 sites that are still far from a Drupal 9 migration plan. 
Considering that there are more than 1 million Drupal sites worldwide and that 81 percent are still on Drupal 7, it’s not a stretch to say that we’re down to the wire. As the Nov. 28, 2022 end-of-life date nears, Drupal development agencies will become increasingly booked and backlogged. Those who delay migration might find themselves scrambling to secure the right expertise to guide them through the process. Cutting corners or settling for subpar Drupal developers for a resource that’s as mission-critical as your website is highly inadvisable. 
 

The Drupal 9 Difference

The solution is to act now and begin to realize the following advantages of a far superior CMS sooner rather than later. 

  • The ability to craft layouts with the built-in visual layout builder, reuse blocks, and customize all parts of the page.
  • Use of the integrated configuration management system with development and staging environment support.
  • Management of reusable media in the out-of-the-box media library.
  • The advantage of full multilingual support in all content and configuration.
  • Better keyboard navigation and the assurance of accessibility.
  • Use of the structured content-based system with which you are already familiar. 
  • The ability to make changes even from your mobile devices thanks to a mobile-first UI.
  • Better performance and scalability with built-in BigPipe support for even faster initial page loads.
  • Built-in JSON:API support for progressively and fully decoupled applications.

Big Lift, Big Benefits

The Drupal 7 to Drupal 9 migration process is not to be underestimated. The Drupal organization is referring to D7 to D9 as the last big migration, with a promise of future upgrades that will be evolutionary, not revolutionary. No more wholesale platform upheavals. Instead: a continuous innovation cycle that delivers enhanced features twice a year. 
That promise has proven to be the case with the Drupal 8 to 9 migration. One slight snag there is that Drupal 8 sites need to migrate to Drupal 9 a year earlier than Drupal 7 sites: the Drupal 8 end-of-life date was not changed from the original November 2, 2021, because of its dependency on Symfony 3, which will no longer be supported after November 2021. 
Drupal 7, of course, is a different story. Drupal 7 sites can continue to hang around throughout 2021 and most of 2022, and the Nov. 28, 2022, Drupal 7 end-of-life date does not mean that D7 sites will suddenly disappear. Here’s what it does mean: 

  • After Nov. 28, 2022, Drupal 7 will lose Drupal community support. That means no new bug fixes. 
  • Absence of support-related security releases will expose vulnerabilities to cyber attacks and the possibility of D7 sites being flagged as insecure during third-party scans. 
  • No further development means an end to any further improvements. 
  • Bottom line: a lot of uncertainty.

Drupal 7 Disintegrating

Streamline the Scope

A Drupal 9 migration offers a perfect opportunity to ensure that the architecture, UX, and design of your website is in sync with your current brand and objectives. Doing it right takes time, the alignment of stakeholders, and calls for some very thoughtful planning. The key word: planning. 
That means taking stock of your current site, and evaluating it on a wide range of factors. The following steps can serve to significantly streamline the process.  
 

Audit Existing Content

A content inventory that flags outdated, redundant, or off-brand content is a critical first step in the site migration process. The less content that needs to be migrated to the new site, the simpler the process will be, so it’s helpful to clear out the clutter early, while identifying and prioritizing which information you want to keep (and migrate). 

Look at Site Analytics

It’s often difficult for stakeholders to agree on which content and features are must-haves, and which ones can be left behind. Analytics help. Data that reveals which pages get the most traffic will bring much-needed insight and objectivity into the decision-making process. It will also identify which pages and articles are not likely to be missed because they are receiving relatively few visits. 
 

Audit Modules

Evaluate your site’s contributed modules to determine whether they’ve been updated for Drupal 9 or pulled into Drupal 9 core. If they haven’t been, investigate whether there is a Drupal 9 alternative that could be used to maintain the same functionality. You'll also want to evaluate any custom modules that have been written specifically for your site by your development team, paying special attention to the ones that integrate with systems outside of Drupal. Search Drupal.org for a contributed module that could provide the same functionality. If you don’t find replacements, your development team will need to rewrite the custom module(s) for Drupal 9. 

Assess Your Theme

If your current site is using a contributed theme, look into whether there is a Drupal 9 version of that theme. While it is unlikely that a larger site would use a contributed theme without any modifications, it’s quite possible that your site’s custom theme is a sub-theme of a contributed theme (a base theme like Bootstrap, Zen, or ZURB). If you can keep the same base theme in its Drupal 9 form, you might need fewer changes in order to upgrade.

Identify Complexities

The next step is to determine whether there are particularly complex features or functions of the site -- such as multi-language capabilities or single sign-on (SSO). While there are solutions to these complexities, the migration process is significantly more straightforward when they are identified and accounted for early.


 

Consider an Automatic Migration Tool

Automatic Drupal migration tools, such as the Drupal UI Migration Tool or the Drush Migration Tool, could be worth trying, especially if your site doesn’t have much custom code. The likelihood of one of these methods’ success for any particular migration depends on your site’s architecture. If it is predominantly made of content types and fields configured through the Drupal UI and using Drupal core functionality, there’s a greater chance an automatic migration will do more of what you need.
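
For reference, here is a hedged sketch of what the Drush-based route can look like, assuming the core Migrate and Migrate Drupal modules plus the contributed Migrate Upgrade (and optionally Migrate Tools) modules are installed; exact command and option names can vary by version:

# Enable the migration modules (migrate_upgrade and migrate_tools are contributed).
drush en migrate migrate_drupal migrate_upgrade -y

# Generate the upgrade migrations from the legacy Drupal 7 database and files.
drush migrate:upgrade --legacy-db-url=mysql://user:password@localhost/drupal7 --legacy-root=https://www.example.com --configure-only

# With Migrate Tools installed, review and then run the generated migrations.
drush migrate:status
drush migrate:import --all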

Optimize the Opportunity

At Promet Source, we understand that your organization’s website tells your story and that no two are alike. We also understand that the series of steps to prepare for migration, which we have outlined here, are often outside of the scope of available time and resources. 
Our Architecture Strategy Workshops are designed to zero in on the big picture, taking into account all factors that affect the migration to a successful Drupal 9 website. For many clients, the opportunity to achieve stakeholder consensus over the course of a one- or two-day workshop, as well as a plan for next steps, represents a breakthrough opportunity that could otherwise take months and months of meetings and analysis. 

Keep in mind! Delaying migration means delaying the benefits of a CMS that offers increasingly tighter security, improved performance, greater speed, and a built-in emphasis on accessibility, along with multilingual capabilities and a mobile-first UX. The impact of a Drupal 9 migration is guaranteed to go deep and wide within your organization.
If you are in the process of auditing your site for migration, interested in learning more about an architecture workshop, or looking for a value-added partner to perform the entire migration process for you, contact us today.
 

May 06 2021

Individual Backer
You appreciate and value the Webform module while building out your website or use it daily at your job. Please consider becoming an Individual Backer for $5 or $10 a month.
 

Organization Sustaining Backer
Your organization wants to help sustain the Webform module and ensure it is maintained, stable, and secure. Please become an Organization Sustaining Backer for $50 a month.


Organization Growth Backer
Your organization desires to see the Webform module grow with new features, better documentation, and videos. Please become an Organization Growth Backer for $100 a month.


Supporting Sponsor
Your company feels that the Webform module is a vital part of your website or business. Please become a Supporting Sponsor for $250 a month.


Impact Sponsor
Your company would like to stand out as a leader and supporter of the Drupal community. Become a sponsor for $1000 a month. The Webform module's Open Collective's first sponsor will help decide how we promote your impactful support within the Drupal community. Please become an Impact Sponsor for $1000 a month.


Make a Donation / Every Little Bit Helps Sponsor
You can’t commit to anything just yet but you’d like to make a donation. Make a one-time big or small donation to the Webform module's Open Collective. Your support can help fix a minor bug, resolve a critical issue, or tag the next stable release of the Webform module. Please become an Every Little Bit Helps Sponsor. 

May 06 2021
Quick Tabs Module to add tabbed content in Drupal

The Quick Tabs module is handy when you need to add tabbed content and give visitors the ability to switch between the content displayed on your web page.

This blog post is entirely devoted to the Quick Tabs module: how to add tabbed content and what benefits you will get from creating tabbed content on your Drupal website. The Drupal support team does not guarantee that you will become a professional after reading this post, but you will definitely get the basics.

May 06 2021

Quick Preview: This will be an informative guide on what web forms are, how to add a web form to a website, and why you need them.

It is simply impossible to imagine a site without web forms. They are everywhere. Customers fill them out when:

  • making a purchase
  • leaving a review about a product or company
  • going through various kinds of surveys
  • registering for quizzes.

We could even say that web forms are the main source of information on potential customers. Below, the Drupal website development support agency has prepared information about web forms, how to embed a form to a web page, and many other useful things.

What is a web form?

A webform (HTML form) is a form consisting of fields to be filled in. Most often, it asks for details such as a name, a login, and a password. Users enter their details to get some benefit in return.

For example, they can get access to some kind of material or purchase a product. Site owners also get benefits.

They gain valuable customer data, which they then use. For example, they can use user data to create relevant offers based on their preferences, region, etc.

There is an excellent variety of web forms. They differ in size, number of fields, design, etc. Before adding a web form to your site, you need to think through all these points so that everything looks harmonious.

What is a web form?

Common types of web forms

  1. Contact form
  2. Order form
  3. Registration form
  4. Complaint form
  5. Newsletter signup form
  6. Survey form

What is the purpose of having a form on a web page?

A webform is a way of communication between you, your site, and your visitors. Embedding a form in a web page means that you are serious about systematizing and collecting data on your site. Once you add forms to your site, it will be easier for you to collect, analyze, and understand potential customers' behavior.

It is also convenient to use such forms to collect customer feedback about your business. This way you can find out your business weaknesses and correct any deficiencies.

Main benefits of adding web forms on a website

  • You establish proactive communication between you and your website visitors.
  • You increase conversion rates.
  • You can measure and track the data gathered from webforms.
  • You create an early opportunity to capture more leads.

What are some examples of web forms?

Here, we've selected the most remarkable and inspiring web forms.. There are only 5 of them, but each of them is worth your time and attention. We hope you enjoy them.

1. BBC

The web form on the BBC site is clean and simple. It does not distract visitors and helps them fill in all the necessary data quickly. Another big plus is that registration requires only minimal information.

web form on the BBC site

2. Apple

Apple's web form has a minimum of design and a maximum of simplicity. The small number of questions helps ensure that users do not leave the page before completing the form. Everything is restrained and consistent with the company's brand style.

Apple's web form

3. Wishdesk

On the Wishdesk website, the web form has a soft blue background, creating a warm atmosphere and a desire to fill it in. It also has a small number of required fields. The orange CTA button stands out against the background and encourages contact with the support team.

Wishdesk website

4. Metropolitan Opera

This Metropolitan Opera form was created in order to help a potential consumer buy tickets or find necessary information about upcoming concerts. It also has a minimalist design, with only the bare essentials. This is why it deserves a place on our list.

[Screenshot: the Metropolitan Opera form]

5. Kylie Cosmetics

As soon as you arrive at Kylie Cosmetics, you find a web form that allows you to either log in or create an account. At the bottom, attention-grabbing CTAs are highlighted in black. There is nothing superfluous here. Everything is harmonious.

[Screenshot: Kylie Cosmetics]

How do you add a web form to a Drupal website?

Today we will look at how to add a web form to a Drupal website using the Webform module. This option is often used because it is convenient. Keep reading for a detailed, screenshot-by-screenshot description of embedding a form in a Drupal web page.

What is the Webform module in Drupal?

The Drupal Webform module is a powerful module for creating web forms and collecting data on Drupal sites. Its main advantages are flexibility, open-source code, and rich functionality.

Webform is a very useful module for those who want to gather statistics, conduct surveys, etc. Currently, 467,604 sites are using it. The module's maintainers regularly provide updates and bug fixes, and it is available for all popular versions of Drupal.

[Screenshot: the Drupal Webform module]

This module can:

  • create different forms and surveys
  • send customizable emails to administrators and/or submitters in response
  • export results to Excel or other spreadsheet applications (see the command example below).

There is also basic statistical reporting and an API for extending its features.
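
For example, if you use Drush, the Webform module ships with its own Drush commands for working with submissions from the command line. Here is a minimal sketch of exporting results, assuming a webform with the machine name "contact" exists on your site (the exact command options vary between module and Drush versions):

drush webform:export contact

This typically produces a delimited file that can be opened in Excel or another spreadsheet application.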

For each web form, every submission is also stored in the site’s administration panel and can be accessed directly.

[Screenshot: submissions in the administration panel]

Extensions

Right after installing the Webform module, we have access to extensions (sub-modules) that can provide additional functions for forms, add a user interface for creating and editing forms, supply example forms, and much more.

[Screenshot: Webform module extensions]

As you can see, there are extensions for almost all needs. 

Usage

  • To manage webforms, go to Structure > Webforms.
  • There you can see the web forms that have already been added and add new ones.
  • To create a new webform, click the “Add webform” button.

[Screenshot: creating a new webform]

After creating a web form, we will see a page where we can add elements to it.

[Screenshot: adding elements to a webform]

There are several elements already included by default, but we can add more simply by installing the appropriate extensions.

[Screenshot: appropriate extensions]

Every element can be customized before adding it to the webform.

[Screenshot: customizing a webform element]

We can also see how the web form will be shown to users by clicking “View.”

[Screenshot: web form view]

Finally, we can edit various settings of our webform on the “Settings” tab.

[Screenshot: the “Settings” tab]

Compatibility

The Webform module is compatible with Drupal 7, 8, and 9.

Popularity

According to usage statistics, almost half a million sites are currently using the Webform module — numbers that large can’t lie.

Installation 

The best way to install the Webform module is via Composer.

Open the terminal, change to your site’s root directory, and enter the following command:

composer require drupal/webform

After installing, enable Webform and its extensions via the Extend page on your site.
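
If you prefer the command line, Drush can do the same thing. Here is a minimal sketch, assuming Drush is available on the site (webform_ui is the sub-module that provides the form-building user interface):

drush en webform webform_ui -y
drush cr

Clearing the cache afterwards is not strictly required, but it is a good habit after enabling modules.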

Conclusion

Today, we looked into how to add a web form to a website using the Drupal Webform module and what benefits it brings to your site.

A successful website is one that fully meets customer expectations, and one way to do this is to create convenient web forms in Drupal. Do you have any other questions or need help? Our Drupal support team is here!

May 06 2021
May 06

1. Make sure you download the Sentinel proctoring software in advance and ensure it can be started successfully on your machine. You’ll also have the opportunity to confirm your camera and audio are working correctly with the software. We’ve had challenges with camera compatibility in older MacBooks before.

2. Hardware-wise, you’ll need a suitable PC or Mac with a camera so you can be watched - a laptop is ideal because you’ll have a camera built-in and can move around to find the best place to take an exam.

3. Think ahead to where you will be taking the exam - you’ll need to select a location that’s quiet and free from distractions for a few hours. More importantly, make sure it’s well lit with no shadows from sunlight coming through a window. The Sentinel software employs facial recognition which can fail if the lighting is poor.

4. The Sentinel facial recognition check compares you against biometric data recorded before the test, which is stored for future exams. If sufficient time has passed since your previous exam, or your appearance has changed (such as a new beard or hairstyle), the facial recognition may fail. For Paul, this required contacting Kryterion support to reset the biometric data. We'll leave it to you to decide whether Paul grew a beard or simply aged badly.

5. Before you begin you’ll need to remove any headphones, earphones/AirPods, in-ear electronics etc. that could be feeding answers into your head. Likewise, hats, beanies, sweatbands or anything that can obscure the visibility of your ears must be removed before exam time.

May 06 2021
May 06

Community Update - May 2021

Continuing our series highlighting the work of initiative groups across the Drupal community, this month we are catching up with six more groups:

  1. Documentation and Help, by Jennifer Hodgdon
  2. Contribution Mentoring, by Elli Ludwigson
  3. Drupal Swag Shop Working Group, by Will Huggins
  4. Discover Drupal Initiative, by Angie Sabin
  5. Local Drupal Associations, by Leslie Glynn
  6. Accessibility, by Rain Breaw Michaels

The takeaway message this month seems to be that there are some great things happening in the Drupal community but they could be even more awesome, and help to grow our community, if more people were able to contribute, even an hour a week.

Whilst some people have more available time than others, if you do have the privilege of time, please read carefully and think about where your time could be best spent.

If you spot a place where your skills fit, don’t hesitate to contact either the group’s spokesperson, or Community Liaison, Rachel Lawson.

Documentation and Help, by Jennifer Hodgdon

What have been your priorities in the last three months?

  1. Migrating the remaining content from the old Getting Involved Guide.
  2. Adding new content (task, role, and area pages) to the Contributor Guide.
  3. Making it more obvious how people can contribute to Documentation.
  4. Moving the Help Topics in Core project towards completion.

And what has been your greatest success in the last three months?

We recently added a block to the sidebar of Documentation pages that clearly states what you can do if you find a problem in a page. I've been seeing many more people following the suggestions in the block by either editing pages to fix problems, or setting the page status to Needs Work (along with a comment stating the problem they found) since the block was added. I'm also very proud of the fact that all of the content in the old, disorganized Getting Involved Guide has now been migrated to new locations (with redirects in place for people with old links and bookmarks).

What has been your greatest challenge in the last three months?

  1. Finding people to review help topics patches for Drupal core.
  2. Connecting with leaders of other initiatives and working groups so that we can add content to the Contributor Guide for these areas.

Do you have a "call to action" you want to make to the Drupal Community?

We have only 3 more open issues for adding topics to the Help Topics project, which all have patches that are waiting to be reviewed. If you're interested in reviewing, the open issues are listed near the top of this issue: https://www.drupal.org/project/drupal/issues/3041924 -- and each individual issue has review instructions in its issue summary.

Contribution Mentoring, by Elli Ludwigson

What have been your priorities in the last three months?

Getting ready for DrupalCon North America 2021, updating first time contributor workshop videos, and setting up the open social contribution platform.

And what has been your greatest success in the last three months?

During DrupalCon North America, we spent a lot of time in OpenSocial which gave us a chance to learn, make changes on the fly, and adapt to the needs of new contributors and mentors. The team did an amazing job making this happen and learning how to leverage the existing functionality in the platform to create and coordinate contribution time and space. The OpenSocial contribution space is reusable for future events so we can continue to iterate on this success.

There are lots of great updates in the contributor guide, take a stroll through some of the pages: http://drupal.org/community/contributor-guide

The mentoring leadership has added two provisional coordinators, a big welcome and thank you to AmyJune Hineline and Chris Darke. AmyJune has done an extraordinary amount of work at virtual events in the last year, giving contributors a place to get started with Drupal. Chris Darke gave his time and expertise to update and segment the First Time Contributor Workshop videos, which are available any time.

Rachel Lawson, Kristen Pol, and Gábor Hojtsy also did a lot of work to make sure the contribution space ran smoothly during DrupalCon. Thank you to all the folks who mentored and contributed!

What has been your greatest challenge in the last three months?

Learning more of the ins and outs of the OpenSocial platform! We set up some initial spaces for each initiative and directed everyone to them but had some kinks in the pipeline from Hopin. These were quickly resolved by collaborating with the Drupal Association.

Managing participation on the platform during DrupalCon week required significant time and effort from the existing mentor team, every day of the week. In particular, Matthew Radcliffe went above and beyond in terms of his time and energy. Having four days of contribution was amazing! But, we need more hands on deck in order to make this happen in a sustainable way in the future.

Do you have a "call to action" you want to make to the Drupal Community?

The mentoring group particularly needs more hands on board! Share your ideas and your help coordinating and mentoring at events. There are meetings in the #mentoring Slack channel every month.

Be on the lookout later this year for a dedicated mentor orientation event before DrupalCon Europe!

Drupal Swag Shop Working Group, by Will Huggins

What have been your priorities in the last three months?

DrupalCon is the peak time for the Swag Shop, when demand goes through the roof, so the build-up to DrupalCon NA 2021 was no exception. It starts with agreeing on the designs, which this year included sponsor logos on the back of the clothing items as well as the logo on the front. Everything after that is about raising awareness and promoting the swag.

And what has been your greatest success in the last three months?

So far this year, the Swag Shop has driven over $700 directly to the Drupal Association, a figure that will rise even higher after all the DrupalCon swag sales filter through. The range of products on offer is growing and, perhaps most importantly, we have proved we can create a sustainable, community-managed swag shop that both promotes Drupal and drives revenue for the DA.

What has been your greatest challenge in the last three months?

Marketing and awareness is still our greatest challenge, which is funny because when we started, we thought that would be the easy bit! We also want to engage more DrupalCamps to see if we can work with them to provide the swag for their activity like we did for NEDCamp 2020 last year.

Do you have a "call to action" you want to make to the Drupal Community?

The main areas we need help with are:

  1. Marketing: help us promote Swag Shop - at least 10% of all sales goes directly to DA
  2. If you are organising a Drupalcamp or community event - get in touch, we would love to work with you to provide the swag for your event
  3. Got a design idea for our swag? Create an issue and, once it gets approved by the Drupal Association (DA), we will get it on sale right away!

Discover Drupal Initiative, by Angie Sabin

What have been your priorities in the last three months?

We’ve been working to get all materials organized, launch pages on Drupal.org, align and finalize schedules for trainings, and spread the word to potential students and sponsors.

And what has been your greatest success in the last three months?
We are optimistic about the excitement building for the program. The community members that have learned about Discover Drupal are enthusiastic about it.

What has been your greatest challenge in the last three months?

Pulling the schedules together and finding alignment on how the program should function was a challenge. Every trainer has a slightly different approach so we had to find a system that would work well with their existing content rather than creating new content. For now, we determined that each trainer should take one student pathway (site-builder, front-end developer, or back-end developer) so that a group of students within a specific pathway would have the same experience.

Do you have a "call to action" you want to make to the Drupal Community?

We need mentors for our upcoming cohort that starts in July! The application for becoming a mentor is open now through May 31. Mentoring is an opportunity to inspire a new generation of Drupal contributors and bring new perspectives to the Drupal project. You can apply here: https://www.drupal.org/association/discover-drupal/become-a-mentor
We also need financial support. You can become an individual sponsor or ask your organization to support Discover Drupal as a sponsor. https://www.drupal.org/association/discover-drupal/support-the-program
Finally, if you have a laptop you want to donate you can reach out to us: [email protected]

Local Drupal Associations, by Leslie Glynn

What have been your priorities in the last three months?

To become more familiar with the Local Drupal Associations across the globe and to start to lay out objectives for how we can increase communication and sharing of content across the Local Associations.

And what has been your greatest success in the last three months?

DrupalCon and DrupalFest have provided opportunities for Local Drupal Associations to share information about their groups to the global Drupal Community. DrupalFest presented a great opportunity to host events in local time zones and in languages used locally.

I attended the Drupal Africa Meet and Greet on April 22nd with folks from across Africa and around the globe. Other DrupalFest events outside North America included: Drupal 20 years Mexico, Drupal DACH Online Meetup, DrupApero, Drupal Buenos Aires, Drupal Austria Remote Drinkup, Drupal CS Meetup, Drupal Israel April Meetup, DrupalPeru Meetup, Drupal Chile, and Meet Drupal France. Based on the number of global events, DrupalFest was embraced by Local Drupal Associations and was a great success.

What has been your greatest challenge in the last three months?

The greatest challenge continues to be determining a strategy for increasing both communication and sharing of content across all of the Local Drupal Associations, both new groups and those that have been around for many years.

Do you have a "call to action" you want to make to the Drupal Community?

It would be great if each of the Local Drupal Associations could:

  1. Attend an Event Organizer Working Group meeting - we meet on the 2nd Tuesday of every month. Times alternate between 12 pm UTC and 12 am UTC; the May 2021 meeting is at 12 am UTC.
  2. Add their local events (camps, trainings, meetups) to the new Community Events page on Drupal.org (https://www.drupal.org/community/events)
  3. Join the Drupal Camp Organizers slack channel (drupalcamporganizers.slack.com)

Accessibility, by Rain Breaw Michaels

What have been your priorities in the last three months?

Our priorities have been to support Olivero in passing all critical accessibility gates and to get more people looking at the backlog of accessibility issues in the queue.

And what has been your greatest success in the last three months?

Olivero has made significant advances, and the maintainers' office hours are starting to see more attendance. We've even begun to see more community members contributing accessibility-focused modules, such as @itmaybejj's EditorA11y.

What has been your greatest challenge in the last three months?

We continue to struggle with having enough time and people to make the kind of progress we want (and frankly, need) to make in order for Drupal to be the accessible framework it aims to be. Additionally, given this complexity, staying up to date with Slack conversations has been complicated, and we hope that we can find a better and more accessible communication method moving forward.

Do you have a "call to action" you want to make to the Drupal Community?

Join us! Become an accessibility champion. If you have something you'd like to present at office hours, sign up on the agenda or contact Rain Breaw.

If you have time to pick up a core issue tagged with accessibility, please do, and consider bringing your work to office hours to review or discuss.

We also need help with our Accessibility Contrib Guide, if anyone who is strong with documentation has time to pick this up.

Finally, you do not need to be an expert to help make this happen. We have plenty of experts who can answer questions along the way.

May 05 2021
May 05

Selecting a CMS for a university can be a challenging decision. There are so many needs and nuances to consider - costs of implementation and maintenance, a wide range of technical ability among site administrators, developers, and content editors, a variety of end-users looking for different information...and the list goes on and on. While your answer likely isn’t as easy as, “let’s just do what everyone else is doing,” better understanding why other universities made the choice they did can shed light on your decision-making process. 

Drupal is far and away the most used CMS in higher education - 26% of all .edu domain sites are in Drupal, including 71 of the top 100 universities. 

So why are universities like MIT, Georgia Tech, Butler, Stanford, Harvard and the rest of the Ivy League universities choosing Drupal? 

Simply put, Drupal makes good business sense, especially with the added benefits of Drupal 9. At Mediacurrent, we believe your website is your greatest digital asset and can be leveraged to accomplish organization-wide goals. Drupal makes that possible. Here’s how:

Communicate With All Students - Prospective, Current, and Alumni 

If you want to reach your full recruiting and fundraising potential, you need to communicate with your entire audience. There are a variety of Drupal features that ease the stress of common communication challenges. 

Language

Not only are there multiple languages spoken within the U.S., but our country hosts over a million international students. Drupal makes creating a multilingual digital experience simpler. Native language handling is built directly into Drupal 8 and 9 core APIs, giving you over 100 languages to choose from. With that functionality, it is easier than ever to engage with prospective students across the globe in a meaningful way.
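
As a rough illustration of how little setup this takes, the core multilingual capabilities live in a handful of optional core modules that can be switched on with Drush (a minimal sketch; the machine names below are the standard Drupal 8/9 core modules):

drush en language locale content_translation config_translation -y

Additional languages can then be added under Configuration > Regional and language > Languages.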

Accessibility

The CDC estimates that 20% of U.S. adults identify as having a disability. These disabilities often hinder people’s ability to interact with the average website. Drupal is an inclusive community and has committed to ensuring that all features of Drupal conform with W3C guidelines and WCAG 2.0. Pair Drupal’s built-in accessibility tools with a strong higher-education-focused accessibility strategy and your potential audience could grow by 20%. The Siteimprove Drupal module can help you keep a close and proactive eye on your overall web accessibility.
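
If you want to try that integration, the contributed Siteimprove module can be added like any other Drupal module (a minimal sketch; the package name follows the usual drupal.org convention of drupal/<project>):

composer require drupal/siteimprove
drush en siteimprove -y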

Technology

According to the College Explorer Market Research Study, the average college student owns 5.6 devices and spends 137+ hours on them! This may seem like common sense now, but if you want to engage with students, you need to account for a variety of screen sizes. Thankfully, Drupal 8 was designed with a mobile-first mentality and includes out-of-the-box responsive functionality. And that mobile mindset continues with Drupal 9. Features like editorial workflows, Layout Builder, and media management can support content delivery that is optimized for mobile access.

Personalization

Universities face added complexity when it comes to digital strategy due to the broad audiences they appeal to. With so many unique people coming to the same pages, content strategy, conversion path mapping and optimization, and defining strong calls to action can be a struggle. By incorporating personalization into your content strategy, whether that is personalized based on user authentication or by integrating tools like Acquia Personalization or Salesforce Marketing Cloud, you can speak to the masses but make them feel like you’re speaking specifically to them.

Reduce Overhead Costs + Increase Operational Efficiencies with Drupal

Drupal can have a dramatic impact on reducing overhead costs and increasing operational efficiency. Universities have a big need for multiple websites: departments, colleges, libraries, and student organizations all want their own website. Supporting this many sites, along with resourcing training and support, is expensive and encourages unnecessary technology sprawl. Because Drupal is open source (no licensing fees!) and offers a multisite feature, creating sites for these different groups is exponentially easier and more cost-effective, and it ensures brand consistency.

You can also increase efficiency, ensure content consistency and improve the user experience by creating a “source of truth”.

Write content once and publish it anywhere it’s relevant.

Having to update content such as a curriculum or an academic calendar on multiple pages is inefficient and unnecessary. Write once, publish everywhere, save time. 

Improve Brand Equity + Amplify Digital Strategy

As a university, your brand is a powerful asset. You spend significant energy and resources on building loyalty to bolster several organizational goals from recruiting efforts, engaging current students on campus, and fundraising among alumni.

With your website being the hub of your marketing strategy, it is critical for your CMS of choice to play nice with your marketing efforts.

Drupal is very SEO-friendly out of the box. There are also advanced configuration options available to support a more sophisticated SEO strategy. You can amplify your digital strategy by integrating your marketing tools and communication platforms directly with Drupal. And the 26% of other .edu sites using Drupal make integrating your university-specific tools with your website easier.

Reduce Risk

I’d be remiss without mentioning open source security and GDPR compliance. As a university, you hold sensitive information about the students who have attended your school and they are trusting you to keep that secure.

The Drupal community is passionate about security and has an industry-leading global security team to ensure your site is protected.

Additionally, as the landscape of privacy rights changes around the world, it’s in your best interest to stay on top of it and reduce the risk of being penalized for data collection practices. 

Speed up Your Time to Launch 

[Image: RainU logo]

Drupal has a lot to offer universities from the moment of install. We created RainU CMS to bring that out-of-the-box experience to the next level with a tailored approach. RainU is a Drupal-based development platform that helps colleges and universities accelerate the web development process.

Have questions about how Drupal and RainU can benefit your university? Let us know. We’d be happy to chat. 

Editor’s note: This post was originally published on July 18, 2018, and has been updated for accuracy and comprehensiveness.

May 05 2021
May 05

For Part One, click here.

30 Years Of Linux

The Linux kernel is celebrating its thirtieth anniversary this year. In part two of our interview, we conclude our conversation with Linux creator Linus Torvalds. If you haven't already, check out part one to learn all about Linux kernel development and the creation of the Git version control system.

In this second part, Linus offers insight and perspective gained from managing a large open source project for three decades. He also talks about his employment at the Linux Foundation, and describes what he does with his spare time when he's not focused on kernel development.

As to what makes an open source project successful, Linus admits, "I don't really know what the key to success is. Yes, Linux has been very successful, and clearly Git too started on the right foot, but it's always very hard to really attribute that to some deeper cause. Maybe I've just been lucky?" He goes on to offer three practical recommendations he's followed himself: be there for other developers, be open, and be honest.

When he first started the project, Linus wrote every line of code. "I still remember the very early days, when people would send me patches, and I'd not actually apply them as patches, but I'd read them, figure out what people wanted to do, and then do that myself. Because that was how I had started the project, and that was how I felt more comfortable, and that way I knew the code better." He explained that it was important to learn to delegate, "I stopped doing it fairly quickly, because I'm just fundamentally lazy. I got really good at reading patches and understanding what they did, and then I'd just apply them."

Tux Turns 30

Linus has also worked to stay impartial as Linux has grown and become more successful, "I very consciously didn't want to work for a Linux company, for example. I maintained Linux for the first decade without it being my job. That's not because I think commercial interests are wrong, but because I wanted to make sure that people saw me as a neutral party, and never felt like I was 'the competition'."

On the question of whether or not open source is sustainable, Linus replied, "Yes. I'm personally 100% convinced that not only is open source sustainable, but for complex technical issues you really need open source simply because the problem space ends up being too complex to manage inside one single company. Even a big and competent tech company."

Finally, as for how long he'll continue working on Linux, Linus said, "I do enjoy what I do, and as long as I feel I'm actually helping the project, I'll be around." Read on for the full interview.

Translations: [Korean], [Vietnamese] (Contact us if you'd like to translate this interview into another language.)

Managing Open Source Projects

JA: We recently spoke with Drupal creator Dries Buytaert, and he credited you with much inspiration and the occasional mentorship and advice over the past twenty years that he's been maintaining the popular Drupal CMS. Do you frequently communicate with maintainers of other open source projects, either offering mentorship or just sharing notes? How often do other open source maintainers reach out to you looking for advice or help?

LT: I don't know about others, but no, I don't personally tend to interact all that much with other open source projects, simply because I tend to be a pretty "one-track mind" person. I think that's why I still do kernel maintenance three decades later: some people flit from project to project, while others (like me), end up being fairly focused on one thing for longer time periods.

That said, there is often a fair amount of overlap in developers, with lots of developers working on more than one open source project. And different projects obviously affect each other, with all the common infrastructure. So you do have that kind of cross-pollination, and you end up having people meet at the same conferences (back when those happened) etc.

JA: As the maintainer of an open source project, what are some of the key lessons you have learned that would help others manage their projects more successfully?

LT: This is a hard one to answer, because I don't really know what the key to success is. Yes, Linux has been very successful, and clearly Git too started on the right foot, but it's always very hard to really attribute that to some deeper cause. Maybe I've just been lucky?

Because luck and timing, and being in "the right place at the right time" really is important. I think for both Linux and Git, the projects I started ended up being projects that a lot of people needed, even if they didn't necessarily even know they needed them. Was that just luck? Maybe. Or was it that of all that mass of people who needed those projects, I was the one who uniquely stepped up and did the work, and got the ball rolling?

My ego prefers the latter, but honesty forces me to say that you really do want luck too, and you do need to pick the right project. The one that people really need.

But if we ignore those kinds of "big questions", I do have a few practical and down-to-earth things that I personally think are important if you are an open source maintainer.

The big one is that you have to be there. You have to stay around, you have to be there for other developers, and you have to be there ALL THE TIME. You will hit technical problems, and it will be frustrating. You'll work with people who may have very different ideas of how to solve those technical problems. And the technical problems are in some ways the easy part, because they usually do have technical solutions, and you can often fairly objectively say "this is better/faster/simpler/whatever".

The harder part can be that you'll end up interacting with people who you don't like, or with people who don't like you, and there will be personality conflicts. Then you can't fall back on "show me the numbers" - people just don't always get along, and it's not a numbers game. You'll have bad days, and the people you work with will have bad days. And you'll have to work through it all.

That's not to say that you can't take a break. I do that all the time. If I get frustrated, I just leave the computer, and I will go read a book or something. Trying to force some productive work (or discussion) when you're frustrated and angry is not great. And I clearly have not always done great on this, and I've pissed people off and used too strong language. I think I'm doing better on that, but one way I'm doing better on that is by literally walking away more - trying to actively notice "I'm in a bad mood" and just stepping away from the computer.

So you don't have to be there "all the time" in the sense that it has to be constant. Taking the day off is fine. Taking a week off might mean that you need to let people know. Taking a month off? At that point you really have to have a maintenance plan, and in three decades that has happened exactly twice: once when kernel.org got broken into and people spent a lot of time making sure that everything was ok, and the second time when I took a break to try to make sure I had my behavior under control.

What I'm trying to say is that maintaining a big project is a fair amount of work, and it's something you need to keep on doing for a long time. It's not all fun. It's interesting. It's challenging in the best ways. I have not been bored being a kernel maintainer. But it's not all roses either. Not everybody wants to do that kind of thing.

The other big thing is that you have to be open. And I mean that in multiple ways. It's really easy to create some kind of "clique" of people, where you have an inner cabal that discusses things in private, and then you see really only the end result (or the fringe work) in the open, because all the important stuff happened inside a company or within a core group of people, and outsiders have a hard time breaking into that clique, and often have a hard time even seeing what is going on in that core group because it was so private and exclusive.

It's one of the reasons I really like open mailing lists. Not some "by invitation" list. Not something you even have to sign up to participate in. Really open. And pretty much all the development discussions should be there.

But "open" is important in another way too - be open to other people's solutions, and don't have this very clear and inflexible idea of how things should be done. I think one of the reasons Linux succeeded was exactly the fact that I actually did NOT have a big plan, and did not have high expectations of where things would go, and so when people started sending me patches, or sending me requests for features, to me that was all great, and I had no preconceived notion of what Linux should be. End result: all those individuals (and later big companies) that wanted to participate in Linux kernel development had a fairly easy time to do so, because I was quite open to Linux doing things that I personally had had no real interest in originally.

And finally, I think "open" is important in the sense of honesty. You don't want to play politics behind peoples back. Be open about your motivations, be open about why you do things and what you do. You don't have to like everybody you work with, and they don't have to like you, but if people are open about what they are aiming for and what they do, you don't necessarily have to always be best buddies - the most important thing is that you can trust each other.

Because trust matters. A lot.

JA: Beyond what you’ve already mentioned regarding less coding, and more communication and leading, were there specific skills you needed to learn that you found difficult? For example, delegating, being a better writer, and other non-coding skills — and if so, how did you learn to do this? Was it hands on, from books, or from other people? Is this something taught in school?

LT: So I'll just start off by saying that almost all of the process for me has been very much incremental and a learning experience. Three decades is a long time, and very few changes have been very sudden, and most of how we do things have grown in a very "organic" way.

In other words, it's very much not a result of planning ahead and reading management text-books etc. It has very much mostly happened on its own, and any structure we have now is not from some written-down org-chart, but from people simply "finding their places".

One skill that clearly some people find difficult is "letting go of control". I still remember the very early days, when people would send me patches, and I'd not actually apply them as patches, but I'd read them, figure out what people wanted to do, and then do that myself. Because that was how I had started the project, and that was how I felt more comfortable, and that way I knew the code better.

It turns out that for me, this was not a big deal in the end. I stopped doing it fairly quickly, because I'm just fundamentally lazy. I got really good at reading patches and understanding what they did, and then I'd just apply them. So my control freak days were fairly quickly over. I think I've been pretty good at finding people to trust, and then doing just that - trusting them and not micro-managing them overly much.

So delegating hasn't been a huge problem, but I know it has for other projects. Again, part of it is that whole thing where our maintainership model doesn't require some kind of absolute trust up-front, which really does make everything much easier.

Communication skills very much are important. I actually come from a family of journalists (both my parents were journalists, my uncle was one, my paternal grandfather was a poet and a journalist), so I grew up in a household where reading and writing was pretty much taken for granted from a very young age. And while English is my third language, it was a pretty strong language for me already by the time I started Linux, and communication wasn't a huge problem. But I realize that it very much can be a big issue, both for personal (perhaps personality) reasons and for language barrier reasons.

But in general, mostly I did learn by doing. Again, remember - none of Linux happened overnight. The project it was thirty years ago is very different from the project it is today.

JA: While open source has been hugely successful, many of the biggest users, for example corporations, do nothing or little to support or contribute back to the very open source projects they rely on. Even developers of surprisingly large and successful projects (if measured by number of users) can be lucky to earn enough to buy coffee for the week. Do you think this is something that can be solved? Is the open source model sustainable?

LT: I really don't have an answer to this, and for some reason the kernel has always avoided the problem. Yes, there are companies that are pure "users" of Linux, but they still end up wanting support, so they then rely on contractors or Linux distributions, and those obviously then end up as one of the big sources of kernel developer jobs.

And a fair number of big tech companies that use the kernel end up actively participating in the development process. Sometimes they end up doing a lot of internal work and not being great at feeding things back upstream (I won't name names, and some of them really are trying to do better), but it's actually very encouraging how many big companies are very openly involved with upstream kernel development, and are major parts of the community.

So for some reason, the kernel development community has been pretty successful about integrating with all the commercial interests. Of course, some of that has been very much conscious: Linux has very much always been open to commercial users, and I very consciously avoided the whole anti-corporate mindset that you can most definitely find in some of the "Free Software" groups. I think the GPLv2 is a great license, but at the same time I've been very much against some of the more extreme forms of "Free Software", and I - and Linux - was very much part of the whole rebranding to use "Open Source".

Because frankly, some of the almost religious overtones of rms and the FSF were just nutty, and a certain portion of the community was actively driving commercial use away.

And I say that as somebody who has always been wary of being too tainted by commercial interests. I very consciously didn't want to work for a Linux company, for example. I maintained Linux for the first decade without it being my job. That's not because I think commercial interests are wrong, but because I wanted to make sure that people saw me as a neutral party, and never felt like I was "the competition".

But I do think that some projects may have shot themselves in the foot by being a bit too anti-commercial, and made it really hard for companies to participate.

And no, it's not always easy working with companies. We have several kernel maintainers that have been very active in trying to help teach companies how to work with open source: it's one of the things the Linux Foundation does (not just on the technical side: there's teaching about the legal issues etc), and apart from being one of the main kernel maintainers, Greg KH is very active on that front. So it does take some effort.

But is it sustainable? Yes. I'm personally 100% convinced that not only is open source sustainable, but for complex technical issues you really need open source simply because the problem space ends up being too complex to manage inside one single company. Even a big and competent tech company.

But it does require a certain openness on both sides. Not all companies will be good partners, and some developers don't necessarily want to work with big companies.

JA: A common theme we've found in talking to long-term open source maintainers is burn out, in part due to the constant pressures of maintaining projects so publicly, and constant demands from users as if you owe them something. Have you experienced this? How do you deal with this, and avoid burnout? Have you ever considered walking away from kernel development?

LT: Well, I kind of alluded to this issue earlier in your "key lessons" question.

Because yes, it's a pressure. And yes, I've been fed up too at times.

At the same time, at least for me personally, my bouts of "Ok, that's enough" have generally been very much "That's enough for TODAY". I get stressed out, I get annoyed. I've obviously exploded at people at times, and it's not pretty when it happens (and I really have been actively trying to make sure it doesn't happen again). And you obviously don't see the cases where I just walk away pissed off about something or somebody.

But.

I go off, read a book, maybe drive around a bit if it's nice outside, take a break. And I get over it. And I'm back the next day, because in the end, I really enjoy what I do. I'd be bored to tears without kernel development.

So even when I take a vacation (I try to go scuba diving a couple of times a year, although the pandemic obviously means that that hasn't happened the last year), I take a laptop with me so that I can keep up. I let people know that I'm not as available as usual, but particularly when I can time it to the end of the development window, it's usually not a big deal. I'm very seldom entirely off the grid, although that has happened a couple of times too (again - scuba diving sometimes means "exotic location without internet" even these days), so I've been entirely incommunicado for a week due to that a couple of times.

And I do love being on a liveaboard, doing five dives a day for a week, and literally not even able to read email. I've managed that three times in the last five years, I think. It's lovely.

But then I get back, and I'm really happy to be back too.

JA: 30 years is a long time, and while I understand it's impossible to predict the future, I'd still like to ask: Where do you see Linux in another 30 years? And what do you envision as your role at that time?

LT: So this is a question that I can't answer, and it's not because I'm trying to weasel out of it, but simply because it's not how I work, and not how I think about the project.

I don't have a "30-year plan". I don't even have a 5-year plan. In fact, I don't plan ahead more than a release or two (which is obviously just a few months).

As an engineer, I have this strongly held opinion that "details matter". Details are almost the only thing that matters. If you get the details right, the rest will follow.

That may sound a bit odd, considering that I did already talk about "good taste", and I'm certainly very much on the record as saying that the unix philosophy ("everything is a file" being one of the core pillars) is the right one.

And in Git, I very much wanted to have some overall "design" too, and there's very much a couple of overarching big concepts in Git too ("everything is an immutable object" is perhaps the Git equivalent of the Unix one).

But those kinds of "high level design" things are great mainly to give you a kind of cohesive end result, and give the community a kind of "design compass". They aren't really the most important thing in the end. Reality is complex and often ugly, and the high-level big design cannot stand in the way of details, and all the special cases that you actually need in reality.

So I just like to say that I'm a "plodding engineer". I look at what's going on right now, I look at the problems we have now, and I don't really plan for the future outside of just knowing that "I have to maintain the end result", so I do want to make sure that the work we do today won't be a huge problem tomorrow.

That kind of answers the last part of your question: I do see myself as being around. Not for another 30 years, but I do enjoy what I do, and as long as I feel I'm actually helping the project, I'll be around.

JA: Do you have any advice for open-source developers that are looking to raise money to support their open source development efforts?

LT: This is the first question that I really don't have any answer to at all.

I started out thinking of Linux purely as a hobby for the longest time, and never thought it would actually be my job. My first industry job (outside of academia, where the first few years happened) was non-Linux-related, and I made my contract explicitly say that my Linux work was not company work (Transmeta did use Linux, but that wasn't really my job, even if I ended up also working on some Linux issues that Transmeta had internally - mainly early SMP problems).

In fact, to me Linux was so much non-work that I was planning to take an unpaid year off from Transmeta to get the Linux 2.6 release out (ok, to be honest, now I'm unsure what the exact version was, it's a long time ago. I think it was when there was some stress during the 2.5.x days, and I felt I needed to concentrate full-time to get to 2.6). That's when OSDL came in ("Open Source Development Labs" - later to become Linux Foundation), so that I could actually get paid to do Linux without working for a commercial Linux company.

So for me, the first decade of Linux I never felt like raising money was an issue - I did it on the side. And ten years into it, when I felt I had to work on it more full-time, it had become big enough that it "just happened".

But I realize this is really really unusual, and I simply do not have an answer to your question in general.

Unless the answer is then exactly "plan on it being a hobby for a decade, and if it grows so big that it cannot be a hobby any more, you've likely already solved the funding problem".

NOTE! This is the point where I would like to just say how lucky I was to grow up in Finland. With an education system that is completely free, and one of the best in the world, I simply came from a background where it was entirely sensible to treat Linux as a hobby, and just know that I can make money as a commercial software developer. I came out of 8 years of world-class university studies with something like $7k of student debt - not exactly worth worrying about in the world of high tech.

I very much realize that a lot of people in the US don't really understand the kind of freedom that gives you in life. You really can choose to just do what you love to do, because you can afford to.

The Linux Foundation

JA: How involved were you in the creation of the Linux Foundation? What is your role? Has the Foundation impacted the kernel beyond allowing you to get paid without working for a commercial Linux company?

[Image: Linux Foundation logo]

LT: I have had nothing at all to do with creating OSDL (and then the Linux Foundation). I'm literally just an employee, although a high-profile one with the title of "fellow".

OSDL started out as a non-profit industry consortium for companies to do things together - particularly collaborate on enterprise capabilities - and with an original emphasis on having a machine farm that was available to developers (eg the kind of hardware that developers wouldn't have otherwise had access to). This all started before I was employed by them, and entirely independently of that.

It then became the Linux Foundation when OSDL merged with another non-profit industry consortium: the Free Standards Group. The hardware lab side part fell to the wayside, and the "industry collaboration" part became the main thing. Again, while I was by then employed by them, this was not something that I was personally part of: I have very consciously stayed very much focused on just the technical kernel development side.

LF does a lot of other things than just support a few key developers like me and Greg KH. In fact, it does so much that you really are better off looking up the LF web site (or the Wikipedia one). LF ends up doing a lot of infrastructure of various kinds: some technical (like kernel.org), but a lot of it other "support" - organizing conferences, having lots of working groups for industry partners, things like that.

So LF is basically support infrastructure and a lot of different projects for various things around Linux. And I'm just an employee with a fairly unusual employment contract that basically says that everything I do has to be open source, and that LF can't tell me what to do with Linux. I'm happy, and it turns out that the member companies seem to be happy too, because they all know that I'm entirely outside all of the company politics.

Other Interests

JA: What brought you to the US? Do you miss, and have you considered returning to Finland, or elsewhere?

LT: So I moved to the US in '97, and part of that was that I was fairly young, and I got an offer from a startup that did very interesting things in an area that I was very familiar with (ie the somewhat odd 80386 architecture - exploring it was why Linux got started in the first place).

And Finland at the time was very much about high tech, but it was dominated by cellphone technology (Nokia is Finnish, and at the time was the biggest cellphone company in the world, and the biggest company in Finland by quite a big margin).

I wasn't interested in phones (this was before they grew up and became small computers - people actually used those things to talk to each other, if you can believe it). And the US seemed interesting, and I moved here with my wife and our (at the time) 10-week old daughter.

Moving to another country when you just had your first child, and you have no other family around to support you may not be the smartest thing to do. But hey, we were young, we took a "let's try it" approach to things, and it all worked out. I still remember how we moved in February, and it was cold (about -20°C, so about 0 F) in Helsinki when we left, and we walked off the plane and it was sunny and a nice balmy 70°F when we arrived at SFO.

It's been interesting. The US is home these days, and yes, I miss some parts of Finland. The US education system is a disaster. You have to move to the right area to get a good grade school or highschool, and you have to pay insane amounts of money for a good college. It's a disgrace. So is the healthcare system. And the political climate in the US has gone from "slightly strange" to downright scary. In Finland? Things mostly JustWork(tm).

But hey, there are advantages too, and it's not just the weather (yes, we then moved up to Portland, OR, and the weather here isn't as nice as it is in the Bay Area, but trust me - the weather is still a lot better than Finland). And we've been here so long that our kids don't speak Finnish (both me and my wife are from the Swedish-speaking minority group in Finland, so we speak Swedish at home), and we have friends and social ties here in the US. And you can largely ignore the failings of US society as long as you have a good job.

Did we consider moving back? Several times. First when the kids started school. Then when the kids started highschool. Then college. See a pattern? And then when it looked possible that Trump might get re-elected.

JA: Much of the world was carefully watching that election, and worried about what it would mean. And even yet, knowing that some 70 million Americans supported his re-election, there’s foreboding for the future. How do you handle conversations with people who supported Trump’s re-election?

LT: The US political system in general worries me, and the American exceptionalism and nationalism is sad and scary. Particularly when it is often by people who literally have no idea what they are talking about and have never lived outside the country.

The US is a lovely country in many ways, and it's also a very varied country with lots of different cultures and people (and nature), and I like that. In fact that would probably be the hardest part for me if I were to move back to Finland - Finland is a very nice, sane, and safe country, but it's also a very small one and very homogenous.

But the uneducated "Rah rah, America #1!" thing can be very annoying too. You see these huge trucks with American flags, and you just face-palm occasionally.

And sometimes it's even educated people. Before Trump was elected, I was talking to this perfectly nice medical doctor, who was absolutely convinced that the US health care system was the best in the world. He based this on having never lived anywhere else, and couldn't possibly admit that other countries actually have better healthcare - even when discussing it with somebody who actually has literally seen that better health care first hand. This is a highly educated person who went through many years of medical school, and still has that "America, f*ck yeah!" mentality.

And yes, he was a Trump supporter.

Don't get me wrong - nationalism exists everywhere, including Europe. Including even Finland. But the US version of it does seem to be pretty toxic.

And honestly, it's one of the reasons I live on the West coast. Oregon is mostly very liberal, at least in any population center (Eastern Oregon is very much different, but hardly anybody actually lives there - large in area, very small in population). So the area I live in, you certainly don't see the confederate flag (or the Trump flag) very often, although you do see that occasional big truck person who drove in from elsewhere.

That said, I do think the US is changing. We've lived here almost 25 years by now, and it feels like it has changed even during just that time. Religiosity is way down, although it's obviously still very much an issue about where you live. And in many ways the US has obviously shed a lot of socially repressive policies (ie the whole legalization of gay marriage, effectively the end of the war on drugs etc). So on the whole I'm fairly optimistic, and I do think that the Trump phenomenon is possibly (hopefully) just the result of those overall positive changes. Classic reactionary conservatism.

JA: What are your interests and hobbies outside of the Linux kernel? What do you do when you're not focused on kernel development?

LT: I've already mentioned the main two a couple of times: I end up reading a lot (nothing serious, it tends to be random fantasy or sci-fi off my kindle), and when I get to travel I try to do scuba diving as much as possible.

And I actually have a fairly normal family life. I've got three daughters, but they are older and have mostly flown away. The youngest is still in college and will come home for summer, the middle one is doing some graduate work and won't be home for summer, and the oldest is working on the other side of the country. We still try to do family vacations (but only the middle one ever got scuba certified - I tried with all of them, but it is what it is), but last year really was not great.

So these days, it's mainly me, my wife, our two dogs, and a cat. I've gotten my first vaccine dose, and am looking forward to trying to go back to a slightly more normal life in a couple of months.

JA: As an avid sci-fi reader always looking for new books, I'd be curious to know if there are any authors or series you've enjoyed above others? Are there any good books worth mentioning that you still think about from time to time?

LT: Honestly, I'm a "read-it-and-forget-it" kind of person - I read mostly very forgettable random stuff, definitely not things you think about afterwards. Things like the Miles Vorkosigan series by Lois McMaster Bujold are high-brow by my standards - I think I spent a year reading the free or 99¢ Kindle sci-fi and fantasy novels by random unknown authors with editing problems. Probably mostly fantasy, largely because it's easier to find (and I find bad sci-fi much more annoying than bad fantasy).

On the "not trash" side of fantasy, I think Brandon Sanderson and maybe Robin Hobb stand out. But for every Sanderson, there's probably fifty forgettable random sword-and-sorcery coming-of-age stories.

There are a couple of things I come back to, and I think I've re-read the Dune saga about once per decade. It's one of (very few) things that have stood the test of time and actually aged well. I remember loving Heinlein as a teenager, and now I just cringe at it.

So no, don't take reading cues from me. To me, reading is something I do to relax, and is never high-brow or very serious.

JA: I found this earlier interview an interesting read, offering a lot of background into your diving. Do you still use and contribute to Subsurface, the divelog program you started?

LT: I still use it, although for obvious reasons I haven't used it in the last year. Subsurface is a bit like 'Git' in that it's not something I wanted to write, but that I wrote because I needed it. And like Git, I found a maintainer for it, and Dirk Hohndel has been maintaining it for a long time now and taken it to be something much more than it was for me (with support for Windows, MacOS, iOS and Android, not just my original Linux support).

And without any diving, I haven't been motivated to work much on it, although I've helped fix a few reported problems over the last year.

My second vaccine dose is a couple of weeks away, and I'll go diving again in a couple of months. So that might make me fix a few more issues.

JA: Thank you. My entire career has grown out of your “hobby”, both directly through usage of Linux, and indirectly through countless other projects that exist because of Linux and Git. I’m grateful to know that through all of this you’re still enjoying what you’re doing. I’m glad that you’re on your way to being fully vaccinated. And I’m sincerely appreciative that you’ve shared so much of your time to thoroughly and insightfully answer all of these questions! Again, thank you!

For Part One, click here.

May 05 2021
May 05
“Your time is limited. So don’t waste it living someone else’s life.”
Steve Jobs

A personification of this quote is Vinit Kumar, Technical Lead at OpenSense Labs. He had started off as an aeronautical engineer, but life took several turns and he somehow ended up being a Drupal developer! Let’s skim through a story that talks about conventions, ups and downs, and most importantly, following your dreams.

Q : Hi Vinit! So let’s start with talking about your education. Why did you choose aeronautical engineering in the first place? 

A : I had been pursuing aeronautical engineering more for its career prospects; my interest had always been in graphic design and, consequently, web development. While still in college, I took up a course in graphic design and also took on some commissioned projects that were offered to me by the teacher himself. While working on these projects, I soon realized that web development and graphics go hand in hand, and started trying my hand at development as well.

Q : I see. Had you ever developed interest in programming beforehand or was it a spur of the moment decision to transform? 
A : It was a rather extended transformation process. Pritam has been my friend since we were in school, and he was the one who was more into web development. During the years I spent being an aeronautical engineer, the aeronautics sector was not looking good - a big airline like Kingfisher was on the verge of closing. It was rather chaotic! Pritam knew about my interest in graphics and web development, and insisted that I pursue that instead - as the web was only going to expand in the upcoming years and moreover, it was something that had piqued my interest since the very beginning. Pritam helped with the basics when I was still a fresher in the domain. Subsequently, I joined a company in Hyderabad as a developer, but left within a month because my skills weren't competent enough to keep up with the profile. After that, I brushed up my knowledge a bit and joined another company where I worked the night shift for a year. By the end of the year, due to consistent hard work and toil, I made a rather smooth switch and felt that my life was finally getting in order.
 

four pictures place side by side comparing Vinit Kumar's journey from being an aeronautical engineer to a Drupal developer


Q : As you said, you left the first company that you had joined within a month as you lacked the skills to keep working. What made you pursue the field despite not finding success anytime sooner?
A : I think that the approach makes all the difference. I wasn’t bummed by the fact that I didn’t have the supposed skillset but rather took all that as a part of my learning curve. I appeared for numerous interviews and all the questions that I couldn’t answer, I would go home and prepare those. Hence, I accepted that I was still learning. The added bonus was that web development was something that I actually wanted to do - so work never felt like work. It felt like a yearlong training procedure.

Q : That is such a great perspective! While we’re still in the topic of transformation, was there a moment of instigation that prompted you to switch from aeronautical engineering to Drupal development? How easy/difficult was it to take the call?
A : Sometimes, an outsider's perspective is required to make the picture clearer. While working in Bangalore in a job dealing in technical publication (publication of different kinds of user manuals for the maintenance of aircraft), I occasionally used to skip going to work as an engineer to stay home and create websites. On one such occasion, one of my friends realised that I had been sitting and creating a website since early morning when he left for work - and was still at it by the time he returned in the evening. I still remember when he looked at me, a little astonished, and said that I should go on and pursue something that I really wanted to. It was a turning point in my life. I called Pritam that very day and left for Hyderabad - where I took up my first job as a developer.

Q : So that is how it started! When I look back at your journey - that has eventually led you to becoming the tech lead, how has the experience with Drupal been so far?
A : It has been working pretty well for me. I still consider myself a student in the domain and focus on learning more than anything else. I have definitely grown a lot in the past 8 years and gathered knowledge in every step - and I want to keep it that way.

Q : Do you ever wonder how different life would have been had you remained an aeronautical engineer? 
A : Quite different. I don’t really have any regrets though, I followed my heart and landed in a decent place. I don’t think that any of the jobs I had taken up in aeronautical engineering really spoke to me as much as Drupal development has done so far. Not once in these years have I been prompted to go back to become an aeronautical engineer.

Q : Glad it worked out for you. Lastly, what would you tell someone who wants to make a career switch like you did?
A : I’ll tell them the same thing that my friend had told me - follow your heart and it will lead you to the right path.

While we may believe that following conventional guidelines is the ‘safer’ way to live life, Vinit also highlights that we can step onto different boats and examine the best one for ourselves while we are still young and agile. There’s a time suitable for experimenting as well - we just need to identify it!

For more such interviews on The Unlikely Drupalists series, read about a/an:

DJ who became a web developer

Electrical engineer who became a frontend developer

May 05 2021
May 05

In addition to the annual DrupalCon NA, April featured a lot of great Drupal articles. Here’s a recap of some of our top picks.

State of Drupal presentation (April 2021)

We’re starting off with Dries’s recap of his traditional State of Drupal presentation from the DrupalCon. As usual, he also includes a video of the keynote, so you can watch the whole thing here if you happened to miss it during the DrupalCon week.

He begins with an update on the state of Drupal 9 and the readiness for Drupal 10. After that, he introduces Project Browser, a new initiative intended to improve the site building experience of users of Drupal by facilitating the discovery and installation of modules.

Another important announcement that Dries makes is the Drupal Association modernizing their collaboration tools with GitLab; if anyone would like to participate in this, reach out to Heather Rocker.

Read Dries Buytaert’s State of Drupal recap

Drupal, Identity, and the Road Ahead

This next post, written by well-known community member Adam Bergstein aka n3rdstein, also relates to Dries’s keynote and the Drupal’s project’s recent 20-year anniversary. Adam’s post takes a look at both the 20-year evolution of Drupal and what the next 20 may bring, also recapping some of the most important points from the Driesnote.

As Adam states, despite technologies rapidly evolving and new ones constantly sprouting up, Drupal is not falling behind and is perhaps more relevant and user-focused than ever before. He believes the developments going on now will enable Drupal to compete not only with other open-source solutions, e.g. WordPress, but also with big proprietary platforms.

Read Adam Bergstein’s takeaways from the Driesnote

Core Web Vitals in Drupal

Next up, we have a blog post not related to the DrupalCon, coming from Rob Browning of Third and Grove. He takes a look at a newly introduced Google ranking factor, Core Web Vitals, and how they’re handled in Drupal. 

Core Web Vitals comprise three different elements. The first is Largest Contentful Paint (LCP), where Drupal is ranked as the best of all the CMS analyzed. 

It didn’t score as well in the second one, First Input Delay, which can be improved by minimizing or deferring JavaScript, removing non-critical 3rd party scripts, and using a browser cache. As for the third and final element, Cumulative Layout Shift, Drupal ranked first once again.

Read more about Core Web Vitals in Drupal

A personal site upgrade from Drupal 7 to Drupal 9: some migration tips

In the fourth post, Christian López Espínola aka penyaskito describes the process of upgrading his personal site from Drupal 7 to 9. One of the main changes was moving away from Drupal 7’s Blog module to use the “article” content type instead. Rather than going with a custom migration, he decided to try out Drupal Migrate UI.

Besides just new functionality, a website upgrade is also the perfect opportunity to remove unnecessary content, or “take the trash out”, as penyaskito puts it. He also recommends auditing your content after each migration to make sure everything has been moved properly, and he finishes with a key point: for multilingual content, set everything up before going ahead with the migration.

Read more tips for a Drupal 7 to Drupal 9 migration

The 4 Best Drupal 9 Features That Make Users’ Lives Easier 

We continue with a post by Taoti Creative’s Patrick Koroma in which he breaks down 4 of the top new features of Drupal 9 that improve the experience of all users. He kicks off with what we think is the most impactful of these - the smooth and effortless upgrade path which will also be adopted for all future Drupal versions. 

The other three features are a cleaner code base, improvements to the recently introduced Layout Builder and a modernized and more user friendly admin theme. What’s more, Drupal 8 is going end of life in November, and so upgrading to version 9 will soon not only be compelling but necessary.

Read more about the top features of Drupal 9

2021 Aaron Winborn Award Winner: AmyJune Hineline

For the 7th year in a row, the North American DrupalCon has also been the occasion for awarding the Aaron Winborn Award. The 2021 winner is Kanopi Studios' AmyJune Hineline, also known as volkswagenchick.

As community ambassador at Kanopi Studios, AmyJune has made enormous contributions to Drupal since joining the community in 2015; we actually dedicate a full question to this in our interview with Kanopi's CEO Anne Stefanyk.

Like every year, the award is definitely well deserved, and it’s great that the Drupal community honors contributors in this way. It was a pleasure revisiting AmyJune’s numerous contributions and reading the comments of everyone who nominated her.

Read more about the 2021 Aaron Winborn Award winner

The Drupal Community's Work With React

Another blog post from April which we enjoyed and wanted to mention is this one by Acro Media's Josh Miller about the Drupal community's work with React, specifically the Decoupled Menus initiative. Since menus are a ubiquitous website element, the Drupal community decided to focus the initiative here. Its main goal is to publish the first Drupal core JavaScript package on npm.

The initiative is already in full swing and has been featured both at DrupalCon NA and at DrupalCon Europe back in the fall. Josh also provides an overview of its development and the 2 current projects: a sort of documentation for Standalone JavaScript Packages, and a survey about decoupled usage.

Read more about the Decoupled Menus initiative

Decoupled Menus Survey Analysis

Speaking of Decoupled Menus - the final post we’re including this month comes from Baddý Breidert of 1xINTERNET who provides an overview of the results of the Decoupled Menus survey. 

The survey itself was pretty straightforward, with questions related to most used frameworks and tools. Unsurprisingly, the top front-end framework was React, npm was the top JavaScript tooling, and JSON:API was the top API thanks to its usage in Drupal.

Some of the takeaways from open-ended questions are particularly interesting, such as the evident lack of an established solution for managing menus with decoupled Drupal, and the huge demand for code snippets in decoupled CMS documentation.

Read more about the results of the Decoupled Menus survey


We hope you enjoyed our selection for April. Make sure to tune in again next month when we’ll be doing a recap of this month’s articles.
 

May 05 2021
May 05

The N1ED module works as a bridge between Drupal 8 and 9 and the N1ED library, a multi-plugin for CKEditor, the default text editor in this system. The library itself is built on Bootstrap and its classes. In this text, we'll take a look at both the library and the module itself.

N1ED library

The N1ED library is available in free and paid versions. The former has limited functionalities, but you can use options such as:

  • full screen text input,
  • widgets which introduce new buttons to the editor, such as Font Awesome icons, easy table insertion and HTML code insertion,
  • easy addition of headings and paragraphs, which allows you to better control the entered text.
Full screen version of the free editor version

Apart from the free version of the library, there are three different paid plans that provide additional functionalities. The most interesting of them is Bootstrap Editor, thanks to which you can easily design what the website will look like in the desktop or mobile version.

N1ED module

Before the installation, you can see and feel for yourself how the N1ED module works and only then decide if it's worth starting to use it on your own website.

Dates

It's a relatively young module. It appeared on Drupal.org in early 2019, but it already has a stable release covered by the Drupal Security Team. The first release of the N1ED library came earlier – on 18 December 2018.

Popularity of the module and the library

According to the official statistics, the module is being used by more than 150 websites. The library itself can be used in any system that uses CKEditor or TinyMCE, for example in Symfony, Laravel or Magento.

Configuration and use

Download the N1ED module from Drupal.org. The module is installed in the typical way:

  • Composer: composer require drupal/n1ed
  • Drush: drush dl n1ed
  • Drupal Console: drupal mod n1ed

After executing the command, you need to enable N1ED on the modules page or by using Drush or Drupal Console. Support for the new plugin in CKEditor is automatically enabled for the Full HTML filter.

Support for N1ED plugin in CKEditor

N1ED can be enabled for any text format. Just set the switch to the desired position in the text format edit options.

In the same place, you can set your own API key, which is required for the plugin to work. Right after installation, the default key provided with the module is used, which gives you the basic free functionality.

Setting your own API key for the N1ED module

 

Summary

Although the free version of the library is very limited in terms of available functionalities, it provides a new look and feel for adding content in Drupal. In addition, it helps you to control the text and elements you enter. We use both the library and the N1ED module as part of our Drupal development services.

May 05 2021
May 05

I proposed this session to DrupalCon, but it was not selected. I think that is good. I have had my fair share of stage time at DrupalCons in the past; new contributors should take the lead. However, I still did the work of creating the presentation, then recorded myself giving the talk.

This is a re-post of the article on the Lullabot blog.

Slides available here.

Drupal projects can be challenging. You need to have a lot of framework-specific knowledge or Drupalisms. Content types, plugins, services, tagged services, hook implementations, service subscribers, and the list goes on. You need to know when to use one and not the other, and that differs from context to context.

It is this flexibility and complexity that allows us to build complex projects with complex needs. Because of this flexibility, it is easy to write code that is hard to maintain.

How do we avoid this? How do we better organize our code to manage this complexity?

Framework logic vs. business logic

To start, we want to keep our framework logic separate from our business logic. What is the difference?

  • Framework logic - this is everything that comes in Drupal Core and Drupal contrib. It remains the same for every project.
  • Business logic - this is what is unique to every project—for example, the process for checking out a book from a library.

The goal is to easily demarcate where the framework logic ends and the business logic begins, and vice-versa. The better we can do this, the more maintainable our code will be. We will be able to reason better about the code and more easily write tests for the code.

Containing complexity with Typed Entity

Complexity is a feature. We need to be able to translate complex business needs to code, and Drupal is very good at allowing us to do that. But that complexity needs to be contained.

Typed Entity is a module that allows you to do this. We want to keep logic close to the entity that logic affects, not scattered around in hooks. You might be altering a form related to a node, handling access checks, or operating on something related to an entity from within a service.

In this example, Book is not precisely a node, but it contains a node of type Book in its $entity property. All the business logic related to Book node types will be contained in this class.

final class Book implements LoanableInterface {
  private const FIELD_BOOK_TITLE = 'field_full_title';
  private $entity;

  public function label(): TranslatableMarkup {
    return $this->entity
      ->{static::FIELD_BOOK_TITLE}
      ->value ?? t('Title not available');
  }

  public function author(): Person {...}
  public function checkAvailability(): bool {...}

}

Then, in your hooks, services, and plugins, you call those methods. The result: cleaner code.

// This uses the 'title' base field.
$title = $book->label();

// An object of type Author.
$author = $book->owner();

// This uses custom fields on the User entity type.
$author_name = $author->fullName();

// Some books have additional abilities and relationships.
if ($book instanceof LoanableInterface) {
  $available = $book->checkAvailability() === LoanableInterface::AVAILABLE;
}

Business logic for books goes in the Book class. Business logic for your service goes in your service class. And on it goes.

If you are directly accessing field data in various places ($entity->field_foo->value), this is a big clue you need an entity wrapper like Typed Entity.
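
To make the contrast concrete, here is the same lookup with and without the wrapper. The field name below is simply the one from the Book example above; treat the snippet as an illustration, not project code.

// Scattered across hooks, controllers and templates (hard to maintain):
$title = $node->field_full_title->value ?? t('Title not available');

// Centralised behind the wrapper introduced earlier:
$title = $book->label();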

Focusing on entity types

Wrapping your entities does not provide organization for all of your custom code. In Drupal, however, entity types are the primary integration point for custom business logic. Intentionally organizing them will get you 80% of the way there.

Entities have a lot of responsibilities.

  • They are rendered as content on the screen
  • They are used for navigation purposes
  • They hold SEO metadata
  • They have decorative hints added to them
  • Their fields are used to group content, like in Views
  • They can be embedded

Similar solutions

This concept of keeping business logic close to the entity is not unique. There is a core patch to allow having custom classes for entity bundles.

When you call Node::load(), the method will currently return an instance of the Node class, no matter what type the node is. The patch will allow you to get a different class based on the node type. Node::load(12) will return you an instance of the Book class, for example. This is also what the Bundle Override module was doing.

There are some drawbacks to this approach.

  • It increases the API surface of entity objects. You will be able to get an instance of the Book class, but that class will still extend from the Node class. Your Book class will have all of the methods of the Node class, plus your custom methods. These methods could clash when Drupal is updated in the future. Unit testing remains challenging because it must carry over all the storage complexity of the Node class.
  • It only partially solves the problem. What about methods that apply to many books? Or different types of books, like SciFiBook or HistoryBook? An AudioBook, for example, would share many of Book's methods but be composed differently.
  • It perpetuates inheritance, even into the application space. Framework logic bleeds into the application and business logic. This breaks the separation of concerns. You don’t want to own the complexity of framework logic, but this inheritance forces you to deal with it. This makes your code less maintainable. We should favor composition over inheritance.

Typed Entity’s approach

You create a plugin and associate it with an Entity Type and Bundle. These are called Typed Repositories. Repositories operate at the entity type level, so they are great for methods like findTaggedWith(). Methods that don't belong to a specific book would go into the book repository. Bulk operations are another good example.
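
As an illustration, a repository-level method could look something like the following minimal sketch. The field name (field_tags), the plain entity query and the wrapping helper usage are assumptions made for the example, not part of Typed Entity's documented API.

/**
 * Finds all book nodes tagged with a given term and wraps them (sketch).
 */
public function findTaggedWith(int $term_id): array {
  $nids = \Drupal::entityTypeManager()
    ->getStorage('node')
    ->getQuery()
    ->accessCheck(TRUE)
    ->condition('type', 'book')
    ->condition('field_tags.target_id', $term_id)
    ->execute();
  $nodes = \Drupal\node\Entity\Node::loadMultiple($nids);
  // Return wrapped Book objects rather than raw nodes, so callers keep
  // working against the business-logic API instead of field names.
  $manager = typed_entity_repository_manager();
  return array_map([$manager, 'wrap'], $nodes);
}

Keeping list-building logic like this in the repository means hooks and controllers never need to know which fields a book uses.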

Typed Entity is meant to help organize your project’s custom code while improving maintainability. It also seeks to optimize the developer experience while they are working on your business logic.

To maximize these goals, some tradeoffs have been made. These tradeoffs are consequences of how Drupal works and a desire to be pragmatic. While theory can help, we want to make sure things work well when the rubber meets the road. We want to make sure it is easy to use.

Typed Entity examples

Your stakeholder comes in and gives you a new requirement: “Books located in Area 51 are considered off-limits.”

You have started using Typed Entity, and this is what your first approach looks like:

/**
 * Implements hook_node_access().
 */
function physical_media_node_access(NodeInterface $node, $op, AccountInterface $account) {
  if ($node->getType() !== 'book') {
    return;
  }

  $book = \Drupal::service(RepositoryManager::class)->wrap($node);
  assert($book instanceof FindableInterface);
  $location = $book->getLocation();
  if ($location->getBuilding() === 'area51') {
    return AccessResult::forbidden('Nothing to see.');
  }

  return AccessResult::neutral();
}

You already have a physical_media module, so you implement an access hook. You are using the global repository manager that comes with Typed Entity to wrap the incoming $node and then call some methods on that Wrapped Entity to determine its location.

This is a good start. But there are some improvements we can make.

We want the entity logic closer to the entity. Right now, we have logic about “book” in a hook inside physical_media.module. We want that logic inside the Book class.

This way, our access hook can check on any Wrapped Entity and not care about any internal logic. It should care about physical media and not books specifically. It certainly shouldn’t care about something as specific as an “area51” string.

  • Does this entity support access checks?
  • If so, check it.
  • If not, carry on.

Here is a more refined approach:

function physical_media_node_access(NodeInterface $node, $op, AccountInterface $account) {
  try {
    $wrapped_node = typed_entity_repository_manager()->wrap($node);
  }  
  catch (RepositoryNotFoundException $exception) {
    return AccessResult::neutral();
  }

  return $wrapped_node instanceof AccessibleInterface
    ? $wrapped_node->access($op, $account, TRUE)
    : AccessResult::neutral();
}

If there is a repository for the $node, wrap the entity. If that $wrapped_entity has an access() method, call it. Now, this hook works for all Wrapped Entities that implement the AccessibleInterface.

This refinement leads to better:

  • Code organization
  • Readability
  • Code authoring/discovery (which objects implement AccessibleInterface)
  • Class testability
  • Static analysis
  • Code reuse

How does Typed Entity work?

So far, we’ve only shown typed_entity_repository_manager()->wrap($node). This is intentional. If you are only working on the layer of an access hook, you don’t need to know how it works. You don’t have to care about the details. This information hiding is part of what helps create maintainable code.

But you want to write better code, and to understand the concept, you want to understand how Typed Entity is built.

So how does it work under the hood?

This is a declaration of a Typed Repository for our Book entities:

/**
 * The repository for books.
 *
 * @TypedRepository(
 *    entity_type_id = "node",
 *    bundle = "book",
 *    wrappers = @ClassWithVariants(
 *      fallback = "Drupal\my_module\WrappedEntities\Book",
 *      variants = {
 *        "Drupal\my_module\WrappedEntities\SciFiBook",
 *      }
 *    ),
 *   description = @Translation("Repository that holds business logic")
 * )
 */
final class BookRepository extends TypedRepositoryBase {...}

The “wrappers” key defines which classes will wrap your Node Type. There are different types of books, so we use ClassWithVariants, which has a fallback that refers to our main Book class. The repository manager will now return the Book class or one of the variants when we pass a book node to the ::wrap() method.

More on variants. We often attach special behavior to entities with specific data, and that can be data that we cannot include statically. It might be data entered by an editor or pulled in from an API. Variants are different types of books that need some shared business logic (contained in Book) but also need business logic unique to them.

We might fill out the variants key like this:

variants = {
  "Drupal\my_module\WrappedEntities\SciFiBook",
  "Drupal\my_module\WrappedEntities\BestsellerBook",
  "Drupal\my_module\WrappedEntities\AudioBook",
}

How does Typed Entity know which variant to use? Via an ::applies() method. Each variant must implement a specific interface that will force the class to implement ::applies(). This method gets a $context which contains the entity object, and you can check on any data or field to see if the class applies to that context. An ::applies() method returns TRUE or FALSE.

For example, you might have a Taxonomy field for Genre, and one of the terms is “Science Fiction.”
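
To make that concrete, here is a minimal sketch of what such a variant check could look like. The way the entity is pulled out of $context, the field name (field_genre) and the term label are assumptions for the example; consult Typed Entity's own interfaces for the exact signature.

final class SciFiBook extends Book {

  /**
   * Returns TRUE when this variant should wrap the given entity (sketch).
   */
  public static function applies($context): bool {
    // Assumption: the context exposes the entity under the 'entity' key.
    $node = $context->offsetGet('entity');
    // Assumption: genres live in an entity reference field named field_genre.
    foreach ($node->get('field_genre')->referencedEntities() as $term) {
      if ($term->label() === 'Science Fiction') {
        return TRUE;
      }
    }
    return FALSE;
  }

}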

Implementing hooks

We can take this organization even further. There are many entity hooks, and Typed Entity can implement these hooks and delegate the logic to interfaces. The logic remains close to the Wrapped Entity that implements the appropriate interface.

The following example uses a hypothetical hook_entity_foo().

/**
 * Implements hook_entity_foo().
 */
function typed_entity_entity_foo($entity, $data) {
  $wrapped = typed_entity_repository_manager()->wrap($entity);
  if (!$wrapped instanceof \Drupal\typed_entity\Fooable) {
  // If the entity is not fooable, then we can't foo it.
    return;
  }
  $wrapped->fooTheBar($data);
}

This type of implementation could be done for any entity hook.

Is this a good idea? Yes and no.

No, because Typed Entity doesn’t want to replace the hook system. Typed Entity wants to help you write better code that is more efficient to maintain. Reimplementing all of the hooks (thousands of them?) as interfaces doesn’t further this goal.

Yes, because you could do this for your own codebase where it makes sense, keeping it simple and contained. And yes, because Typed Entity does make an exception for hooks related to rendering entities.

Rendering entities

The most common thing we do with entities is to render them. When rendering entities, we already have variants called “view modes” that apply in specific contexts.

This is starting to sound familiar. It sounds like a different type of wrapped object could overlay this system and allow us to organize our code further. This would let us put everything related to rendering an entity type (preprocess logic, view alters, etc.) into its own wrapped object, called a renderer. We don’t have to stuff all of our rendering logic into one Wrapped Entity class.

Typed Entity currently supports three of these hooks:

  • hook_entity_view_alter()
  • hook_preprocess()
  • hook_entity_display_build_alter()

Renderers are declared in the repositories. Taking our repository example from above, we add a “renderers” key:

/**
 * The repository for books.
 *
 * @TypedRepository(
 *    entity_type_id = "node",
 *    bundle = "book",
 *    wrappers = @ClassWithVariants(
 *      fallback = "Drupal\my_module\WrappedEntities\Book",
 *      variants = {
 *        "Drupal\my_module\WrappedEntities\SciFiBook",
 *      }
 *    ),
 *    renderers = @ClassWithVariants(
 *      fallback = "Drupal\my_module\Renderers\Base",
 *      variants = {
 *        "Drupal\my_module\Renderers\Teaser",
 *      }
 *    ),
 *   description = @Translation("Repository that holds business logic")
 * )
 */
final class BookRepository extends TypedRepositoryBase {...}

If you understand wrappers, you understand renderers.

The TypedEntityRendererBase has a default ::applies() method that checks the view mode being rendered and selects the proper variant.
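
As a rough illustration, a teaser renderer might look something like the sketch below. The viewAlter() method name and signature are assumptions modelled on the hooks listed above, not a verbatim copy of the module's interface.

final class Teaser extends TypedEntityRendererBase {

  /**
   * Alters the render array when a book is shown in the teaser view mode.
   *
   * Sketch only: Typed Entity delegates hook_entity_view_alter() to the
   * renderer; check the module for the exact method to override.
   */
  public function viewAlter(array &$build, WrappedEntityInterface $wrapped_entity): void {
    // Presentation tweaks live here instead of in a theme preprocess hook.
    $build['#attributes']['class'][] = 'book-teaser';
  }

}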

These renderers are much easier to test than individual hook implementations, as you can mock any of the dependencies.

Summary

Typed Entity can help you make your code more testable, discoverable, maintainable, and readable. Specifically, it can help you:

  • Encapsulate your business logic in wrappers
  • Add variants (if needed) for specialized business logic
  • Check for wrapper interfaces when implementing hooks/services
  • Use renderers instead of logic in rendering-specific hooks
  • Add variants per view mode.

All of this leads to a codebase that is easier to expand and cheaper to maintain.

Photo by James Harrison on Unsplash

May 04 2021
May 04
easy redirect module install and configuration

https://www.drupal.org/project/redirect

Credits & Thanks

Thank you to:

About the Redirect Module

The Redirect module redirects visitors from old URLs to new URLs. When you move a piece of content to another section of your site or inadvertently change the URL, this module can really help.
    
The Redirect module creates 301 redirects which help your SEO by making sure that any URL that ranks in Google will still resolve when a visitor arrives. If you don’t install this module, you will have to regularly look for any URLs that have changed and fix them.

This module highlights the power of Drupal, automating what was once an arduous and ongoing SEO chore. Thanks to the power of Drupal and the Redirect module, fixing links is a much less frequently needed task.

Install and Enable the Redirect Module

  1. Install the Redirect module on your server. (See this section for more instructions on installing modules.)
     
  2. Go to the Extend page: Click Manage > Extend (Coffee: “extend”) or visit https:///admin/modules.
     
  3. Search for the Redirect module in the search field:

    redirect module install for drupal 9
     

  4. Select the checkbox next to “Redirect” and "Redirect 404" and click the Install button at the bottom of the page.

Permissions

If necessary, give yourself permissions to use the Redirect module.

redirect module permissions for drupal 9
  1. Click Manage > People (Coffee: “people”) and click into the Permissions tab. Or go to https:///admin/people/permissions.
     
  2. Select the appropriate checkboxes under the “Redirect” section.
     
  3. Click the Save permissions button at the bottom of the page.

Configure the Redirect module

  1. Visit the Redirect Admin page: Click Manage > Configuration > Search and metadata > URL redirects > Settings (Coffee: “url redirect” then click the Settings tab) or visit https:///admin/config/search/redirect/settings.

    redirect module configuration settings drupal 9
     

  2. The default settings are usually adequate, so make sure your settings match the image above.
     
    1. Select the appropriate checkbox next to Automatically create redirects when URL aliases are changed.
    2. Select the checkbox for Retain query string through redirect.
    3. Select “301 Moved Permanently” from the Default redirect status drop-down.
    4. Select the checkbox Enforce clean and canonical URLs.
    5. Select “10000” from the “404 error database logs to keep” drop-down.
  3. If you changed anything, click the Save configuration button at the bottom of the page.

How to create a manual redirect

The Redirect module also allows you to create manual redirects. If you move content, put the wrong URL on some printed advertising, or you’re migrating content, this is an invaluable function to understand.

Note: Creating a manual redirect isn’t necessary right now. However, it’s an essential skill for a growing site, so I’m covering it here.

  1. Go to the URL Redirects page: Click Manage > Configuration > Search and metadata > URL redirects (Coffee: “redirects”) or visit https:///admin/config/search/redirect.
     
  2. Click the +Add redirect button.

    add a rule redirect in drupal 9
     

  3. Enter the old and new URLs in their respective fields:
     
    1. Path is the old URL that is broken.
    2. To is the new URL. If it’s a link on your site, you can use just the path beginning with the /. For example: /your/path/here. If it’s an external URL, put the entire URL including the https://.
       
  4. Select “301 Moved Permanently” (or one of the other options as suited to the situation) from the Redirect status drop-down menu.
     
  5. Click the Save button.

Now, when someone visits the old URL, they’ll be automatically redirected to the new one.
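
If you ever need to create many redirects at once (for example, during a migration), the same thing can be done in code using the Redirect module's entity API. Here is a minimal sketch, assuming the module is enabled; the paths are just placeholders:

use Drupal\redirect\Entity\Redirect;

// Create the equivalent of the manual redirect above, e.g. from an
// update hook or a migration process.
$redirect = Redirect::create();
$redirect->setSource('your/old/path');      // Source path, without a leading slash.
$redirect->setRedirect('/your/path/here');  // Internal path or full external URL.
$redirect->setStatusCode(301);              // 301 Moved Permanently.
$redirect->save();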

Did you like this walkthrough? Please tell your friends about it!


May 04 2021
May 04

The world we live in is pretty dynamic; it keeps evolving. Talking strictly in the technological sense, things that enjoy immense popularity today stand a chance of being considered obsolete tomorrow. Then there are the advancements in popular trends, which remain prominent today and tomorrow, but with the prominence enjoyed by their newer versions.

This isn’t necessarily a bad thing, if you don’t consider the acclimation period, which can become tiring, but once that is out of the way, we are almost always thankful for the change. 

Take Drupal 8, for instance: upgrading from its previous version was a massive undertaking, and the fact that Drupal 8 was a whole other ball game than Drupal 7 made the acclimation process quite difficult, to be honest. However, D8's new features and capabilities made the difficulties worth it. Having worked on D8, I am speaking from personal experience.

2021 saw the emergence of some of the most astounding technological advancements, advancements that deserve to be awed at. So, today we will be discussing some of these popular trends in technology that change the way we do and see things. Being from the Drupal community, I will also be correlating these advancements with Drupal to see how we can amalgamate the trends and the CMS and make them work for us, as Drupalists. Drupal, being open source software, is extensible by nature, and making its scope wider to align with the latest trends is a challenge that not many would be opposed to. So, let's begin getting familiar with the trends and see if Drupal can be used to capitalise on them.

The Remote Environments’ Charm 

The first trend I will be talking about is one that has affected all of us. The phrase ‘remote working’ used to seem like a far-fetched idea in the pre-pandemic times, but now it has become a reality, a reality that would be here to stay for much longer than we anticipated.

Remote environments have become the trend in the tech industry, and the fact that they are beneficial to everyone involved (the boss, the employee and the customer) is the reason for their longevity. Collaboration strategies change massively in remote environments and work pretty well for the organisation, as they improve productivity.

Let's start with the bosses of the tech industry: the first hard-hitting fact of the pandemic for this sector was the realisation of the inadequacies of its digital infrastructure. The initial phase of remote working saw employers rushing to provide even the most basic of infrastructural needs. The digital cracks that were hidden in the past became quite blatant during the pandemic. From dealing with heightened consumer traffic online through scaling and building resilience to adding features and getting them into production, every business aspect has been made possible through remote environments.

Employees are happy that they are able to avoid the hour-long commute; as many as 70% will continue to work remotely on a permanent basis.

As for the clients, they are able to reap the benefits of the global technological network from their homes. The barriers for digitally gaining access to industry experts are no longer visible and the customers are capitalising on that. Getting an expert on a virtual call is so much more convenient for both parties than a physical meeting, the chances of which would have been slim, regardless of the pandemic.

Then there are the virtual tech conferences that are a win for everyone, the consumers, the employees and the bosses. Talking from personal experience, I was pretty upset when I couldn’t go to the DrupalCon Amsterdam 2019. So, when the first ever virtual DrupalCon was announced in July 2020, in the midst of the pandemic, I was beyond thrilled, because I was able to take part in it from my home. 

Of course, there are also the environmental and social benefits of remote working. Less carbon emissions, more renewable resources, less traffic and consequently less number of road accidents, all say that remote environments’ charm cannot be taken lightly.

In MIT Professor Tom Malone’s words, 

The current crisis has accelerated us forwards a decade in terms of acceptance of remote working, and there is no going back.

What’s Next in Cloud?

Cloud isn’t a new trend in the market. AWS, Azure, AliCloud and GCP have been the flag bearers in this domain, making the transition to cloud quite seamless. Servers, storage, databases, networking, software, analytics and intelligence, everything is provided for on the cloud. With a lion’s share of organisations using cloud based services, its eminence is staring at us glaringly. 

There are two trends in the cloud domain that deserve attention. 

Infrastructure-as-code 

Defined simply, infrastructure-as-code is the automation of infrastructure and the consequent management of that automation. In a broader sense, it is the practice of configuring and managing infrastructure, such as networks and servers, through machine-readable definition files.

Through this concept, developers are able to provision IT environments with a few lines of code and gain the ability to deploy in minutes, rather than the ages it used to take manually.

With recent improvements in IaC, it is more likely to deliver better outcomes as its ecosystem is growing. However, being a relatively new technique, it has certain disadvantages including inconsistencies in its tooling along with paradoxical approaches. New ideologies are still surfacing around it, infrastructure as software by Pulumi or infrastructure as data by Hightower are two of them. The way IaC will come out in the future is highly anticipated. 

Pipeline-as-code

Coming to pipeline-as-code: it essentially means defining the deployment pipeline through code rather than through the configuration of a running CI/CD tool. With organisations moving towards automation across all their environments, especially the development infrastructure, pipeline-as-code will become a need.

LambdaCD, Drone, GoCD and Concourse act as resources to make pipeline-as-code work for you. 

I'll wrap up this trend with Drupal. At DrupalCon Vienna 2017, a session talked about using Drupal to capitalise on infrastructure-as-code as well as pipeline-as-code, discussing the implementation of Continuous Delivery pipelines on immutable infrastructure. DevOps and general tools like Docker, Packer, Terraform and Ansible, amongst others, can make that possible. And all of this can be achieved by extending Drupal. You will find a lot of interesting details in this video.

[embedded content]


The Realisation of Data and Analytics

Data has become one of the most important commodities for businesses and the analytics to understand and generate lucrative insights from that data is even more important. 

When we consider data and analytics, predictive analytics is often an integral part of it. Building websites that are able to capitalise on the notion and create dynamic content based on the user's browsing history and on-site behaviour is garnering a lot of interest. The thing is, building such a site requires a host of software to work together. R and Google Analytics, along with Drupal, can make that happen. For 'the how,' you would have to watch this video.

[embedded content]


With a majority of CIOs believing that data and analytics will start shaping their business in the future, it won't be wrong to believe that many trends of today are also in line with this concept. Big Data and AI have become crucial for sectors like finance, wherein the assessment of potential loans and investments is done through analytics. Suffice it to say that today, businesses, from private to government, are becoming more data-driven by the day. To make data safer, rules around data residency, privacy and usage are creating a regulatory environment that is both dynamic and complex in its mandates, steering organisations in the right direction.

Associated with data and analytics are concepts of surveillance capitalism and surveillance state, which use surveillance and manipulation to drive power and profits. With COVID-19, such surveillance technologies have been adopted by many countries including China and Israel. Once the pandemic is clear, there is a high chance that these emergency measures will remain. Learn more on how better data strategies can help capitalise on consumer behaviour.

The Modern and Updated Core 

At the heart of every business's consistent output, you'd find its core processes. So, saying that they are important would be an understatement. Having core processes that are fairly rudimentary is not going to be enough in 2021.

With a heightened level of digital transformation, more expectations from our users and an increased use of data-intensive algorithms being implemented in the core systems everywhere, be it the front, middle or backend, there is progression towards uplifting the core from being basic. 

Core modernisation is quite discernible as a trend in 2021, and the development and delivery of the advanced ERPs and legacy programs is its proof. To further substantiate it, think of the kind of interactions the consumers want, instantaneous and tailored would be the words used to describe them. That is why core modernisation has become a need, not only for consumer relationships, but also for digital finance and real-time supply chains. Refreshing and reengineering ERP and legacy systems are the first step towards achieving this. Doing this would allow you to get to new levels of agility, automation, scalability and security.  

The Rise of Digital Reality Technologies

Digital technologies are becoming more real with the passing time. AR/VR, voice interfaces, speech recognition, ambient computing, 360° video along with immersive technologies have enabled businesses to provide a more real user experience. 

Terms like natural, intuitive and imperceptible are used to describe these technologies and their consequent engagement with the users.

Being able to experience a situation without actually being in that situation has become possible through virtual reality. The Massachusetts State University's VR tour is one example of a virtual reality and Drupal combo. A React front-end, Drupal backend and JSON:API made that possible. Look for yourself.

[embedded content]


The same can be done for your employees and workers, wherein AR and Drupal can provide the workers a 3D view of the procedure, leading to an elevated level of productivity. Imagine a shopping application that becomes your assistant inside the store, from telling the route to reaching the products you want to scanning them and telling you the price, that’s augmented reality in its prime. With Drupal 8, building that application becomes a possibility.

[embedded content]


The reality of digital experience in 2021 is deepening with emotional connections with consumers and employees alike. This brings us to the next trend, which is the human factor of these experiences, achieved through AI.

The Humanness of Artificial Intelligence

The term artificial intelligence is not something that many of us haven't heard of or even experienced ourselves. It is a concept that has been around for a while, and we have seen its marvels and been impressed by them.

In the context of Drupal, the digital sphere has numerous plausibilities with regards to Artificial Intelligence.

[embedded content]


And there is more; 

But what’s more? These aspects, although impressive, don't excite us or our consumers anymore. To bring back the excitement, the concept of driving human emotions, feelings and moods into AI has become a trend. 

This AI approach emphasises designing for humans, meaning the focus would be on human- and emotion-led experiences, which would then be curated through AI technologies; a total 180 from traditional designs. Human emotions like empathy and trust, and the ability to feel complex emotions, would be the star of human experiences.

For this, 

  • Neuroscientific research would be conducted, including EEG, eye tracking, facial coding and implicit association testing amongst others.
  • Human centred design would be implemented, which would focus on the human, his beliefs, values, feelings and ambitions along with ethnographic research and neuroscience to understand the human’s needs and wants on a deeper level.
  • Cognitive and affective technologies would come to play, to stress ethical considerations of the design and align it with the organisation’s values.

Vision systems, voice recognition, natural language generation, natural language processing, voice stress analysis and sentiment analysis are some of the AI technologies being used to deliver human experiences. With these at work, a phone call to the automated-caller would only placate the consumer/employee and not agitate him/her further. 

The Next Gen of User Experience

When a user interacts with us, there are certain actions that make it possible. Clicking, pointing, swiping and scrolling are some of them. As you may have experienced yourself, these mediums of interaction are evolving. A user can experience what you want them to without these actions; speaking and gesturing are what I am referring to. And with further advancements, thinking would become a part of it too. This technology is referred to as ambient user experience.

It is when technology is used in accordance with consumer data to provide a seamless interaction for the user, which may not be dependent on human touch. 

With new and improved devices being launched every second, the user has become somewhat dependent on them. This dependence would only grow with time and devices would have to provide more. 

The future would look something like this:
  • More prominence would be given to technology, all the while making devices smaller, yet more powerful. 
  • Proactiveness would signify all consumer interactions. 
  • Neurofeedback technology would become ubiquitous, making direct brain and neural interactions an everyday occurrence.
  • Devices in general would be more connected and context-aware at home, office and everywhere else we go.

The Transactional Blockchain 

In 2021, blockchains and their use is going to gain traction. The reason being the numerous benefits these digital ledgers come with.

Improved transparency; 
Better security; 
Accurate traceability; 
Reduced costs;
And enhanced speed being just a few of them.

Blockchain initiatives are advancing in every sector of the business world. It is not just limited to financial services and fintech companies anymore, rather from government to life sciences and healthcare, from technology to media and telecommunications, every major sector is trying to lead in blockchain development. 

Blockchains are usually fully decentralised P2P architectures; however, there is another architecture being explored. A semi-decentralised architecture, with the same benefits of trusted transactions, can be built. Here Drupal can provide assistance: its User Accounts can be used for that.

Talking further about Drupal, its Ethereum Blockchain Module that integrates with Ethereum, an open source blockchain platform programmable through smart contracts, has made the CMS leverage this technology. Watch this video to get more insights on both the Drupal aspects in Blockchain technology.

[embedded content]


The Method of Agile and DevOps

The way businesses operate is also changing and 2021 is bringing with it the convergence of technology and business strategies. This has brought on the trend of development methodologies like Agile and DevOps. 

Today, providing operational excellence has become equivalent to driving value creation. Businesses are doing one to achieve the other. There is a tangible shift in priorities from delivering projects to the results those projects would bring. Hence, the adoption of methods like Agile and DevOps has become pivotal.

Version control, automation and testing tools, and backup and disaster recovery, along with sound security practices, are just a few of the techniques involved. All of these make the management of servers and other infrastructure pretty convenient as part of an organisation's daily operations.

The best part about these DevOps techniques is that they can be used with other web applications and Drupal is one of them. If you are looking to widen your knowledge span of DevOps and its use alongside Drupal, this video would be the one to watch.

[embedded content]


The Physics of Quantum

Quantum is not just a physics concept anymore; it is being used everywhere, or rather, quantum computing is being leveraged in every corner of the business world.

Be it producing breakthroughs in science; 
Be it implementing machine learning to get to illnesses sooner; 
Be it creating devices and structure that are far more efficient than in the past; 
Be it promoting financial strategies that will be helpful until a person retires; 
Or be it generating algorithms that would make the resources at our disposal available more quickly;

Quantum computing is becoming omnipresent and its ability to process information and execute computations that are not only unhackable, but also have the ability to concentrate tech is probably the reason for it being in vogue.

With quantum computing, there won’t be any technical constraints that often hold back both data and material scientists. Unlike the traditional computing’s use of 0s and 1s, quantum computing relies on its own quantum bits to propel change through manipulation of single particles, which would have the potential of solving highly complex problems. 

The Accessible Version of Programming 

Let's look at the numbers: there are billions of people using the web and all of its offerings, and by offerings I mean the numerous websites and applications we, as users, use. Now, what do you think is the number of expert developers and programmers building these experiences for the user? That number is much, much lower than the number of users.

The talent pool required to build programs is scarce to be honest. If we were to be dependent on it to create everything we have on the web, we might not actually have it. So, how come we do? The answer lies in accessible programming. 

If you have heard of spreadsheets and low-code platforms, you will have a fair idea of what I am going to say. These are means for novice programmers or even non-programmers to create, store and manipulate data without the need for a long development process involving the scarce talent pool I just mentioned. The pre-built components and configurations help in further accelerating the development process without the need for coding.

You might think that this is a great new trend, empowering non-programmers to tap into the programming world and create something on their own, and it truly is; however, I wouldn’t say that it is a new trend. 

  • Back in the 60s, when COBOL was created as a programming language, it was made to resemble English so that non-programmers could work with it. 
  • Then there is Drupal with its D7AX, which is a community of sorts, wherein developers pledge to create modules that adhere to accessibility standards and by simply installing them, you can create a truly accessible web experience. Learn more on Drupal’s web accessibility provisions here.

Although these two examples are fairly different in regards to accessibility, they do promote it in their own way, making the work of programmers and developers easy. 

With increasing awareness about this, the trend of making development accessible is surely going to pick up pace in 2021.

The Reign of Programming Languages

Programming languages are the sole connection between the computers and the programmers, making both understand each other. To make that understanding as seamless as possible, there are tons of programming languages available, some more advanced than others and some more convenient than others. There are the ones that we, as developers and programmers, use everyday and then there are the ones we wish to use. We’ll talk about both. 

A graph shows the list of languages that are the most loved by programmers; Rust tops the list, making it one of the popular macro trends. (Source: Stack Overflow)

At the top of the most loved languages for the past half decade is Rust, and with good reason. It is a language that delivers impressive performance and is memory-safe; along with that, its robust expressiveness also works in its favour. The fact that it is being used for big data and machine learning further adds to its lovable attributes.

Talking about the languages that programmers covet, Python leads, followed by JavaScript and Go. While Rust lands at number 4 in this category, it does show that the most loved language is garnering more and more interest each year.

A graph shows a list of languages based on their desirability for programmers. (Source: Stack Overflow)

The Pivot Towards Visualisation Tools

It's safe to say that a good picture can speak with more clarity than words ever could. Perhaps that is why visualisation tools, which equip developers with the ability to create good images, are becoming prominent. These images appear in every realm of web building, from architecture to code complexity to system performance, visualising data and making your work easy.

Frameworks like Tableau, IBM Cognos Analytics and Microsoft Power BI are the front-runners in this domain, becoming feature packed data studios in themselves. However, this year there has been an emergence of up-and-coming visualisation tools that have proven to be as good as the rest. Dash, Streamlit, Sisense, Kiali and Infogram are some of them.

From providing custom reports and dashboards for machine learning apps to observability tooling that captures distributed traces and metrics, these visualisation tools will remain in vogue because they take data and make it simple enough to explore your system's health and structure, as well as provide flexibility, customisation and version control on top of automated deployment.

The Browsers Going On to Full-Blown Applications?

A browser can do a number of things. If you compare an app to its browser counterpart, you would find a lot of similarities; Chrome and the Google app are a case in point. At work, I have Google Docs, Gmail, Slack and Zoho all running throughout the day in Google Chrome, with Zoom calls popping in a couple of times, so yes, the browser can achieve some semblance of an app's functionality. But was it meant to? Was a browser supposed to become the equivalent of an app?

There isn't a definitive answer to this question. Maybe it was the addition of HTML 2.0, which instigated the browser war between Microsoft and Netscape, or maybe it was just coincidence. Nonetheless, nobody can deny that the browser has become a more complex and versatile platform with an ecosystem of its own. Polyfills and the JavaScript ecosystem make it both easier and more complex for developers to navigate browsers.

Yes, browsers have transcended the expectations their users once had of them, but they still have some way to go before becoming full-blown applications. Take automated testing, for instance: the tools browsers offer for it are practically ancient compared to applications, which treat it as a first-class objective.

Despite this, browsers will continue to evolve, because browsers as code platforms are gaining traction and the tech community is making strides towards improving the overall browser experience. And to think all of this started with the addition of a 'submit forms' feature; kind of surreal, isn't it?

Conclusion 

And there you have it, all the popular technology trends that 2021 has to offer us. Many of them are not new to us in the tech industry, but the advancements being made in them called for their mention. Be it cloud technologies, visualisation tools or even the ambient experience, every macro trend in 2021 is unique in itself and its outcomes, and that, I think, is what will make this year unique too.

As for Drupal, it is an old CMS, yet at 20, it's still going strong. The most discernible reason for that is its versatility. Drupal has many out-of-the-box features that make it great; however, it offers hardly any of the ones I mentioned above out of the box. Despite that, it can still provide its users the benefits of these macro trends because it is extensible. Drupal can be integrated with the technologies you want it to work with, and that lets the CMS advance right along with the changing macro trends each year.

In the end, I hope this article is as enlightening for you as it was for me. Good luck following trends and making new ones! 

May 04 2021
May 04

04 May

Nick Veenhof

Drupal 9 came out late last year, Drupal 8 reaches its end-of-life in November 2021, and Drupal 7's life will end in November 2022 (the date was extended due to COVID-19 and the large number of active users). With a time frame of just under a year and a half to upgrade from 7 to 9, it is time to make a plan, or to seek other options if that timeline is not feasible. But what other options are there?

Today, we are excited to announce that Dropsolid has joined as one of the exclusive partners offering Extended Support for Drupal 7 until November 2025. This means you get at least three extra years of full security coverage. The Extended Support program is run in collaboration with the Drupal Association and the Drupal Security Team, and gives you official support on your Drupal 7 site for a whole extra three years. This gives you all the time you need to take the right steps towards a new Digital Experience Platform on Drupal 9 and beyond.

Last year, we encouraged our clients to begin planning their migrations to Drupal 8. Not surprisingly so, as we will need to say goodbye to our friend Drupal 7 soon.

Drupal 7 was released on January 5, 2011. Drupal 8 was released on November 19, 2015, and the latest and shiniest release, Drupal 9, came out on June 3, 2020. Drupal 7 is now over 10 years old, and will be almost 12 years old when it comes time to say goodbye in November 2022.

New version or next level?

Moving from Drupal 7 to Drupal 8 or 9 could be a great opportunity to upgrade your digital platforms to the next level instead of merely to the next version. Drupal 9 is more than a CMS; it's a DXP. Turning your website into a personalized digital experience is not something you do overnight. You need to be well prepared and start from a digital strategy with clear goals. Taking time for that preparation now will pay off later.

Drupal Extended Support can be a solution if you need more time to finish your digital transformation strategy before upgrading.

A smooth upgrade path for every business at its own pace

Some businesses might have other priorities right now or might need more time to prepare the migration to Drupal 8 or 9. This goes both for businesses that have been hit hard by COVID-19 and for businesses that have suddenly decided to accelerate their digital transformation and therefore need more time to draw up a strategic action plan.

At Dropsolid, we want to be a reliable partner for all companies using Drupal: big or small, in fast-growing industries or in sectors that were hit hard by Corona. We want to provide all businesses with the best possible transition path to Drupal 9, so that business owners can focus on taking the right steps in their digital transformation whenever and however it is strategically best for their business.

All customers of Dropsolid using Dropsolid Experience Platform will automatically benefit from highly critical Drupal 7 updates. All other updates require signing up for our Extended Support program.

Want to move your Drupal 7 site to the Dropsolid Experience Cloud and get a bit more time to prepare your next phase?
Get in touch with us!

More information about this Drupal Association program can be found at https://www.drupal.org/security-team/d7es
 

Drupal longterm extended partner dropsolid

May 04 2021
May 04

In today's world, images are an essential part of the modern web. They help engage users, as visuals are a universal language that speaks louder than words. Images are also more memorable than text and are processed faster by the brain. Research shows that people tend to remember 80% of what they see (visuals) and only 20% of what they read (textual content).

But did you know that, on average, images take up almost 64% of a website's total weight? If you don't pay attention to image optimization, you might be losing potential customers because your website, although immersive, may be taking too long to load. Hence, using optimized images is extremely important. However, image optimization can seem like a nightmare to a content editor who needs to optimize (using external tools) and upload hundreds of images. Drupal's ImageAPI Optimize module to the rescue! Read on to learn how you can use ImageAPI Optimize in Drupal to automate image optimization. The module is also compatible with Drupal 9!

ImageAPI

What is Image Optimization?

Image optimization is the process of delivering high-quality images in the right format, dimensions, size and resolution while keeping the file size as small as possible.

Why use ImageAPI Optimize?

  • Images can contain far more data than is required for display. ImageAPI Optimize will optimize those images.
  • The module provides integrations with optimization tools and web services.
  • It defines pipelines made up of configurable processors.
  • Multiple processors are available as contributed modules, such as imageapi_optimize_binaries, imageapi_optimize_resmushit, imageapi_optimize_tinypng, kraken, etc.
  • Each pipeline can be applied to an image to remove extra metadata or to recompress it.

Installing the ImageAPI Optimize module

Step 1: Download the ImageAPI Optimize and ImageAPI Optimize reSmush.it modules using Composer.

  
$ composer require drupal/imageapi_optimize

  
$ composer require drupal/imageapi_optimize_resmushit

Note: ImageAPI Optimize reSmush.it integrates the reSmush.it web service, which provides a free API for optimizing images with great quality. Click here to find out more about the API.

Step 2: Enable the downloaded modules using Drush or from the Drupal UI.

  
$ drush en -y imageapi_optimize

  
$ drush en -y imageapi_optimize_resmushit

Configuration

Creating a pipeline for ImageAPI Optimize:

Let’s first create a pipeline that can be used sitewide to optimize the images. To do that, go to Administration >> Configuration >> Media >> Image Optimize Pipeline

Pipeline Configuration

Configuring the installed processor for the pipeline:

Now let's configure the installed processor for the pipeline we just created. Go to Administration >> Configuration >> Media >> Image Optimize Pipeline >> Your Pipeline. Edit the pipeline and select the installed processor. In this case, we are using the reSmush.it processor, as shown in the image.

Resmush

 

Allow image styles to use the created pipelines:

By default, all the core image styles will be optimized by ImageAPI Optimize. For new image styles, go to Administration >> Configuration >> Media >> Image Styles >> Your Image style. Edit the image style and select the created pipeline.

Optimize Image

Results

The ImageAPI Optimize module in Drupal provides great compression without compromising on quality. It provides up to 70% compression for images. Check out an example of an image before and after optimization.

Original

File size : 1.4MB

Original Image

 

Optimized

File size : 116 KB

Optimized Image
May 03 2021
May 03

Hello friends,

We're passionate about coming together as a community, but due to the uncertainty of an in-person event this year and the fatigue that many of us have been experiencing from being online all the time, we've decided to not host a formal BADCamp in 2021.

Please stay tuned for information about special events! Trainings and summits may still happen in some form.

To keep the Drupal community spirit alive, we do have the San Francisco Drupal User Group (SFDUG) that holds meetups twice a month and consistently delivers high quality Drupal and Drupal adjacent content. We typically meet the second and fourth Thursdays of the month at alternating times to accommodate a wider audience who might be visiting us from other time zones. Join the meetup and follow BADCamp on Twitter to keep up to date.

We want to highlight other events in the Drupal community. The Drupal Association has put together a comprehensive event page that curates events from around the globe.

We look forward to connecting with you outside the virtual space soon.

Until next time, Happy Drupalling! 

With love, 
The BADCamp Organizing Collective
 

May 03 2021
May 03

SIREN (Social Interventions Research and Evaluation Network) at UCSF has a mission to "improve health and health equity by advancing high quality research on health care sector strategies to improve social conditions." This research initiative, supported by Kaiser Permanente and the Robert Wood Johnson Foundation, is a leader in the creation, curation and dissemination of research that explores the intersection between health and the social determinants that often play a critical role in the health outcomes of individuals and communities.

Chapter Three was recently brought in to help the SIREN team make improvements to their old Drupal 7 site, the most extensive technical work since the original site was built. Over the last several years, SIREN's research and publication efforts grew; they needed a refresh, both to modernize the look and feel and to streamline the user experience. Importantly, they also needed to migrate from Drupal 7 to Drupal 8. We managed to achieve all of these with an approach tailored to their needs and budget.

Outcomes: A Small Sites Approach for Migration and Redesign

Design

Most of the content on the SIREN website is relatively straightforward and didn’t require major restructuring. By taking a simplified approach that emphasized changes to several key pages and incorporating new design elements, we gave the site a more modern and updated look and feel without requiring a wholesale rethink of every page type. For smaller sites like this one, this approach can work wonders!

By focusing on the home page, resource library, (new) Coffee & Science podcast, and the News section, we were able to give SIREN a lot of value for the money. For page templates that weren't redesigned, the addition of new iconography, updates to the color scheme and other styles gave those pages a refresh without impacting the budget.

Figure: Old SIREN homepage with three separate "resource" headings
Old Homepage of SIREN at UCSF
Figure: New SIREN site with updates to imagery, featured content, and simplified navigation
New Homepage of SIREN at UCSF

 

User Experience

As the SIREN Network initiative grew, so did content and navigation, which had expanded out of a desire to expose more of that content directly through major navigation headings. This redesign presented an ideal opportunity to streamline the information architecture while still emphasizing access to SIREN’s priority content. We did this by providing strong design cues and prominent imagery for featured content and thereby reducing the need to have the navigation take on all the work of exposing content that might be nested as a subpage.

Figure: Old resource page with multiple "resource" navigation labels; where should users go?
Old Resource Library of SIREN at UCSF
 
Figure: New Resource Library provides updated visual treatment to better direct users to key content. All interaction with resources is done via the single Evidence & Resource Library heading.
New Resource Library for SIREN at UCSF

Conclusion

SIREN at UCSF has now migrated to Drupal 8 with a new design and UX that emphasized changes to priority content areas and streamlined navigation. The project approach is especially suited to smaller sites that are in need of a rethink; sites that have grown organically but now need a modest restructuring of content and design.

The team at Chapter Three is proud of our partnership with SIREN and their new site. Please have a look and get in touch with any questions.

May 03 2021
May 03

Drupal projects can be challenging. You need a lot of framework-specific knowledge, or Drupalisms: content types, plugins, services, tagged services, hook implementations, service subscribers, and the list goes on. You need to know when to use one and not the other, and that differs from context to context.

It is this flexibility and complexity that allows us to build complex projects with complex needs. Because of this flexibility, it is easy to write code that is hard to maintain.

How do we avoid this? How do we better organize our code to manage this complexity?

Framework logic vs. business logic

To start, we want to keep our framework logic separate from our business logic. What is the difference?

  • Framework logic - this is everything that comes in Drupal Core and Drupal contrib. It remains the same for every project.
  • Business logic - this is what is unique to every project—for example, the process for checking out a book from a library.

The goal is to easily demarcate where the framework logic ends and the business logic begins, and vice-versa. The better we can do this, the more maintainable our code will be. We will be able to reason better about the code and more easily write tests for the code.

Containing complexity with Typed Entity

Complexity is a feature. We need to be able to translate complex business needs to code, and Drupal is very good at allowing us to do that. But that complexity needs to be contained.

Typed Entity is a module that allows you to do this. We want to keep logic close to the entity that logic affects, and not scattered around in hooks. You might be altering a form related to a node, checking access, or operating on something related to an entity within a service.

In this example, Book is not precisely a node, but it contains a node of type Book in its $entity property. All the business logic related to Book node types will be contained in this class.

final class Book implements LoanableInterface {
  private const FIELD_BOOK_TITLE = 'field_full_title';
  private $entity;

  public function label(): TranslatableMarkup {
    return $this->entity
      ->{static::FIELD_BOOK_TITLE}
      ->value ?? t('Title not available');
  }

  public function author(): Person {...}
  public function checkAvailability(): bool {...}

}

Then, in your hooks, services, and plugins, you call those methods. The result: cleaner code. 

// This uses the 'title' base field.
$title = $book->label();

// An object of type Person.
$author = $book->author();

// This uses custom fields on the User entity type.
$author_name = $author->fullName();

// Some books have additional abilities and relationships.
if ($book instanceof LoanableInterface) {
  $available = $book->checkAvailability() === LoanableInterface::AVAILABLE;
}

Business logic for books goes in the Book class. Business logic for your service goes in your service class. And on it goes.

If you are directly accessing field data in various places ($entity->field_foo->value), this is a big clue you need an entity wrapper like Typed Entity.
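As a rough before-and-after sketch (the field_isbn field and the isbn() method are hypothetical, purely for illustration), compare scattered field access with a single method on the Book wrapper shown above:

// Before: every hook or service knows which field stores the ISBN.
$isbn = $node->field_isbn->value;

// After: a single method on the wrapper owns that knowledge.
final class Book implements LoanableInterface {
  // ...
  public function isbn(): string {
    return (string) ($this->entity->field_isbn->value ?? '');
  }
}

// Callers no longer care how or where the data is stored.
$isbn = $book->isbn();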

Focusing on entity types

Wrapping your entities does not provide organization for all of your custom code. In Drupal, however, entity types are the primary integration point for custom business logic. Intentionally organizing them will get you 80% of the way there.

Entities have a lot of responsibilities.

  • They are rendered as content on the screen
  • They are used for navigation purposes
  • They hold SEO metadata
  • They have decorative hints added to them
  • Their fields are used to group content, like in Views
  • They can be embedded

Similar solutions

This concept of keeping business logic close to the entity is not unique. There is a core patch to allow having custom classes for entity bundles.

When you call Node::load(), the method will currently return an instance of the Node class, no matter what type the node is. The patch will allow you to get a different class based on the node type. Node::load(12) will return you an instance of the Book class, for example. This is also what the Bundle Override module was doing.

There are some drawbacks to this approach.

  • It increases the API surface of entity objects. You will be able to get an instance of the Book class, but that class will still extend from the Node class. Your Book class will have all of the methods of the Node class, plus your custom methods. These methods could clash when Drupal is updated in the future. Unit testing also remains challenging, because the class carries over all the storage complexity of the Node class.
  • It only partially solves the problem. What about methods that apply to many books? Or different types of books, like SciFiBook or HistoryBook? An AudioBook, for example, would share many methods of Book but be composed differently.
  • It perpetuates inheritance, even into the application space. Framework logic bleeds into the application and business logic, breaking the separation of concerns. You don't want to own the complexity of framework logic, but this inheritance forces you to deal with it, which makes your code less maintainable. We should favor composition over inheritance.

Typed Entity’s approach

You create a plugin and associate it with an Entity Type and Bundle. These are called Typed Repositories. Repositories operate at the entity type level, so they are great for methods like findTaggedWith(), as sketched below. Methods that don't belong to a specific book would go into the book repository. Bulk operations are another good example.
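As an illustration, such a repository method might look like the following sketch. The entity query is standard Drupal API, but the method body, the field_tags field, and the way results are wrapped are assumptions for this example:

final class BookRepository extends TypedRepositoryBase {

  /**
   * Finds all books tagged with the given term and wraps them.
   */
  public function findTaggedWith(int $tid): array {
    $ids = \Drupal::entityQuery('node')
      ->condition('type', 'book')
      ->condition('field_tags', $tid)
      ->accessCheck(TRUE)
      ->execute();
    $nodes = \Drupal::entityTypeManager()
      ->getStorage('node')
      ->loadMultiple($ids);
    // Hand back wrapped Book objects instead of raw nodes.
    return array_map(
      static function ($node) {
        return typed_entity_repository_manager()->wrap($node);
      },
      $nodes
    );
  }

}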

Typed Entity is meant to help organize your project’s custom code while improving maintainability. It also seeks to optimize the developer experience while they are working on your business logic.

To maximize these goals, some tradeoffs have been made. These tradeoffs are consequences of how Drupal works and a desire to be pragmatic. While theory can help, we want to make sure things work well when the rubber meets the road. We want to make sure it is easy to use.

Typed Entity examples

Your stakeholder comes in and gives you a new requirement: “Books located in Area 51 are considered off-limits.”

You have started using Typed Entity, and this is what your first approach looks like: 

/**
 * Implements hook_node_access().
 */
function physical_media_node_access(NodeInterface $node, $op, AccountInterface $account) {
  if ($node->getType() !== 'book') {
    return;
  }

  $book = \Drupal::service(RepositoryManager::class)->wrap($node);
  assert($book instanceof FindableInterface);
  $location = $book->getLocation();
  if ($location->getBuilding() === 'area51') {
    return AccessResult::forbidden('Nothing to see.');
  }

  return AccessResult::neutral();
}

You already have a physical_media module, so you implement an access hook. You are using the global repository manager that comes with Typed Entity to wrap the incoming $node and then call some methods on that Wrapped Entity to determine its location. 

This is a good start. But there are some improvements we can make.

We want the entity logic closer to the entity. Right now, we have logic about “book” in a hook inside physical_media.module. We want that logic inside the Book class.

This way, our access hook can check on any Wrapped Entity and not care about any internal logic. It should care about physical media and not books specifically. It certainly shouldn’t care about something as specific as an “area51” string.

  • Does this entity support access checks?
  • If so, check it.
  • If not, carry on.

Here is a more refined approach:

function physical_media_node_access(NodeInterface $node, $op, AccountInterface $account) {
  try {
    $wrapped_node = typed_entity_repository_manager()->wrap($node);
  }  
  catch (RepositoryNotFoundException $exception) {
    return AccessResult::neutral();
  }

  return $wrapped_node instanceof AccessibleInterface
    ? $wrapped_node->access($op, $account, TRUE)
    : AccessResult::neutral();
}

If there is a repository for the $node, wrap the entity. If that $wrapped_node has an access() method, call it. Now, this hook works for all Wrapped Entities that implement the AccessibleInterface.

This refinement leads to better:

  • Code organization
  • Readability
  • Code authoring/discovery (which objects implement AccessibleInterface)
  • Class testability
  • Static analysis
  • Code reuse

How does Typed Entity work?

 So far, we’ve only shown typed_entity_repository_manager()->wrap($node). This is intentional. If you are only working on the layer of an access hook, you don’t need to know how it works. You don’t have to care about the details. This information hiding is part of what helps create maintainable code.

But you want to write better code, and to understand the concept, you want to understand how Typed Entity is built.

So how does it work under the hood?

This is a declaration of a Typed Repository for our Book entities:

/**
 * The repository for books.
 *
 * @TypedRepository(
 *    entity_type_id = "node",
 *    bundle = "book",
 *    wrappers = @ClassWithVariants(
 *      fallback = "Drupal\my_module\WrappedEntities\Book",
 *      variants = {
 *        "Drupal\my_module\WrappedEntities\SciFiBook",
 *      }
 *    ),
 *   description = @Translation("Repository that holds business logic")
 * )
 */
final class BookRepository extends TypedRepositoryBase {...}

The "wrappers" key defines which classes will wrap your Node Type. There are different types of books, so we use ClassWithVariants, which has a fallback that refers to our main Book class. The repository manager will now return the Book class or one of the variants when we pass a book node to the ::wrap() method. 
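In other words, using the helper shown in the hook examples earlier, a call like this small illustrative sketch hands back the most specific wrapper available:

$wrapped = typed_entity_repository_manager()->wrap($node);
// $wrapped is a Book, or a SciFiBook when that variant applies.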

More on variants. We often attach special behavior to entities with specific data, and that can be data that we cannot include statically. It might be data entered by an editor or pulled in from an API. Variants are different types of books that need some shared business logic (contained in Book) but also need business logic unique to them.

We might fill out the variants key like this:

variants = {
  "Drupal\my_module\WrappedEntities\SciFiBook",
  "Drupal\my_module\WrappedEntities\BestsellerBook",
  "Drupal\my_module\WrappedEntities\AudioBook",
}

How does Typed Entity know which variant to use? Via an ::applies() method. Each variant must implement a specific interface that will force the class to implement ::applies(). This method gets a $context which contains the entity object, and you can check on any data or field to see if the class applies to that context. An ::applies() method returns TRUE or FALSE. 

For example, you might have a Taxonomy field for Genre, and one of the terms is “Science Fiction.” 
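A minimal sketch of that check might look like the following; the static signature, the way the entity is pulled from $context, and the field_genre field are assumptions here, so check the variant interface shipped with the module for the exact contract:

use Drupal\node\NodeInterface;

final class SciFiBook extends Book {

  /**
   * Tells the repository manager whether this variant fits the entity.
   */
  public static function applies($context): bool {
    // Assumption: the context exposes the entity being wrapped.
    $entity = $context['entity'] ?? NULL;
    if (!$entity instanceof NodeInterface || !$entity->hasField('field_genre')) {
      return FALSE;
    }
    foreach ($entity->get('field_genre')->referencedEntities() as $term) {
      if ($term->label() === 'Science Fiction') {
        return TRUE;
      }
    }
    return FALSE;
  }

}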

Implementing hooks

 We can take this organization even further. There are many entity hooks, and Typed Entity can implement these hooks and delegate the logic to interfaces. The logic remains close to the Wrapped Entity that implements the appropriate interface.

The following example uses a hypothetical hook_entity_foo().

/**
 * Implements hook_entity_foo().
 */
function typed_entity_entity_foo($entity, $data) {
  $wrapped = typed_entity_repository_manager()->wrap($entity);
  if (!$wrapped instanceof \Drupal\typed_entity\Fooable) {
    // If the entity is not fooable, then we can't foo it.
    return;
  }
  $wrapped->fooTheBar($data);
}

This type of implementation could be done for any entity hook.  

Is this a good idea? Yes and no. 

No, because Typed Entity doesn’t want to replace the hook system. Typed Entity wants to help you write better code that is more efficient to maintain. Reimplementing all of the hooks (thousands of them?) as interfaces doesn’t further this goal.

Yes, because you could do this for your own codebase where it makes sense, keeping it simple and contained. And yes, because Typed Entity does make an exception for hooks related to rendering entities.

Rendering entities

The most common thing we do with entities is to render them. When rendering entities, we already have variants called “view modes” that apply in specific contexts.

This is starting to sound familiar. It sounds like a different type of wrapped object could overlay this system and allow us to organize our code further. This would let us put everything related to rendering an entity type (preprocess logic, view alters, etc.) into its own wrapped object, called a renderer. We don’t have to stuff all of our rendering logic into one Wrapped Entity class.

Typed Entity currently supports three of these hooks:

  • hook_entity_view_alter()
  • hook_preprocess()
  • hook_entity_display_build_alter()

Renderers are declared in the repositories. Taking our repository example from above, we add a "renderers" key: 

/**
 * The repository for books.
 *
 * @TypedRepository(
 *    entity_type_id = "node",
 *    bundle = "book",
 *    wrappers = @ClassWithVariants(
 *      fallback = "Drupal\my_module\WrappedEntities\Book",
 *      variants = {
 *        "Drupal\my_module\WrappedEntities\SciFiBook",
 *      }
 *    ),
 *    renderers = @ClassWithVariants(
 *      fallback = "Drupal\my_module\Renderers\Base",
 *      variants = {
 *        "Drupal\my_module\Renderers\Teaser",
 *      }
 *    ),
 *   description = @Translation("Repository that holds business logic")
 * )
 */
final class BookRepository extends TypedRepositoryBase {...}

If you understand wrappers, you understand renderers.

The TypedEntityRendererBase has a default ::applies() method to check the view mode being rendered and select the proper variant. See below:
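As a rough sketch only (the VIEW_MODE constant and the preprocess() hook point follow the pattern described here, but the exact names should be verified against the module's code), a teaser-specific renderer could look like this:

final class Teaser extends TypedEntityRendererBase {

  // Assumption: the base ::applies() compares this constant with the view
  // mode in the render context.
  const VIEW_MODE = 'teaser';

  /**
   * Logic that would otherwise live in a hook_preprocess() implementation.
   */
  public function preprocess(array &$variables): void {
    // Add a class only when books are rendered as teasers.
    $variables['attributes']['class'][] = 'book-teaser';
  }

}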

These renderers are much easier to test than individual hook implementations, as you can mock any of the dependencies.

Summary 

Typed Entity can help you make your code more testable, discoverable, maintainable, and readable. Specifically, it can help you: 

  • Encapsulate your business logic in wrappers
  • Add variants (if needed) for specialized business logic
  • Check for wrapper interfaces when implementing hooks/services
  • Use renderers instead of logic in rendering-specific hooks
  • Add variants per view mode.

All of this leads to a codebase that is easier to expand and cheaper to maintain.  

May 03 2021
May 03

Lynette has been part of the Drupal community since Drupalcon Brussels in 2006. She comes from a technical support background, from front-line to developer liaison, giving her a strong understanding of the user experience. She took the next step by writing the majority of Drupal's Building Blocks, focused on some of the most popular Drupal modules at the time. From there, she moved on to working as a professional technical writer, spending seven years at Acquia, working with nearly every product offering. As a writer, her mantra is "Make your documentation so good your users never need to call you."

Lynette lives in San Jose, California where she is a knitter, occasionally a brewer, a newly-minted 3D printing enthusiast, and has too many other hobbies. She also homeschools her two children, and has three house cats, two porch cats, and two rabbits.

May 03 2021
May 03

By Hansa Pandit

CSS frameworks are very popular these days. They give us a head start without having to write a lot of CSS. There is a new shiny thing that's gaining a lot of attention: TailwindCSS. Recently, I got a chance to try my hand at it, and I am going to share a high-level overview of TailwindCSS that may help you get started with it quickly.

What is TailwindCSS?

TailwindCSS is a utility-first CSS framework, which provides you with low-level utility classes to build your custom websites.

I will try to explain how to get into using TailwindCSS based on my personal experience. If you want to use TailwindCSS on your next project, this will be a quick guide for you; it is the guide I wish I had when I started working with TailwindCSS, as it could have saved me a lot of internet-digging time.

Why use TailwindCSS and not a good old framework you are already familiar with?

Every website is unique and it should have its unique styles.

What I like most about TailwindCSS is that it does not bundle multiple CSS properties under one class name, so I can use its classes without having to override other styles. Most other frameworks fail at this; they give you pre-styled components that force you to override the styles.

TailwindCSS comes with a lot of low-level utility classes that you just add to your HTML elements to style your website. By low-level, I mean that TailwindCSS goes back to the very basics and provides classes that do not affect any other CSS property.

For instance, TailwindCSS has a class called "container". The basic use of this class should only be to control the width of the block at various breakpoints, regardless of any other property. And YES, that's exactly what this class does in TailwindCSS, so you do not have to override anything.

Let's see what the most basic example of HTML with TailwindCSS classes looks like.

<form class="w-full">
  <fieldset class="p-8 w-full mx-auto">
  </fieldset>
</form>

<button class="bg-black inline-block p-8 font-semibold text-white uppercase">
</button>

Below is how the above classes will be rendered as CSS in the DOM.

.uppercase {
  text-transform: uppercase;
}

.text-white {
  color: #fff;
}

.p-8 {
  padding: 2rem;
}

.font-semibold {
  font-weight: 600;
}

.inline-block {
  display: inline-block;
}

.bg-black {
  background-color: #000;
}

.w-full {
  width: 100%;
}

.mx-auto {
  margin-left: auto;
  margin-right: auto;
}

The class names that you see in the above examples are the utility classes that come with TailwindCSS; all you have to do is add the classes to your HTML element and you are DONE!

This can look very tempting and appealing at first, but then you start to realize there could be some problems with this approach. A few of them are:

  • The DOM looks very crowded; this is the first thing that bothered me when I started with TailwindCSS.
  • This approach lacks semantics.
  • For every similar element, you have to add the same bunch of classes, which is not something you would really want to do.

Apart from the above, I personally would not like users to see all the styling in the DOM itself.

But as I mentioned, that was the most basic example and we do have tricks to fix the above problems.

Inherit the TailwindCSS way!

TailwindCSS provides us with a very cool feature to inherit its own classes using the @apply directive.

@apply is the magic word here. You can create any class name of your choice, and within that class, you can inherit the TailwindCSS classes. I will show you how to achieve that.

Below is an HTML element with the class names of my own choice.

<button class="button button--primary">
</button>

And this is how your CSS will look for the above HTML.

.button {
  @apply inline-block p-8 font-semibold uppercase; // These are the classes that come with the TailwindCSS.
}

.button--primary {
  @apply bg-black text-white;
}

This is how the above CSS will render

.button {
  display: inline-block;
  padding: 2rem;
  font-weight: 600;
  text-transform: uppercase;
}

.button--primary {
  background-color: #000;
  color: #fff;
}

So you see, now all the button elements will have the same CSS properties, without making the DOM look too busy.

Looks better, right?

Likewise, if you want all your section titles to be the same, you can simply inherit the TailwindCSS classes, like below

.section-title {
  @apply text-title font-semibold uppercase text-2xl text-brand-black mb-5 leading-tight;
}

In the above code examples, I have used a naming convention that makes much more sense, while still inheriting the CSS classes that come with TailwindCSS. So practically, I did not have to add any new CSS property or override any existing CSS. Also, if you take a closer look, you'll find that TailwindCSS has a class for almost everything.

That's very cool, right?

So now there is one very obvious question: what if you do not want the values or colors that come with TailwindCSS? Will you have to override the CSS? Or will you have to write your own CSS?

The answers to these questions bring me to one of my favorite parts of TailwindCSS.

One file to rule them all - tailwind.config.js

This is the file where you can customize your entire theme, be it the theme colors, the font families, the breakpoints, or anything else you want.

Sounds interesting, right?

For instance, what if you do not want the width or max-width values to be in the rem units that come with TailwindCSS, but rather in percentages?

Any thoughts?

All you have to do is go to your tailwind.config.js file and just customize it. 

How?

Let's take a ride.

// File tailwind.config.js

module.exports = {
  theme: {
    extend: {
      colors: {
        'brand-black': '#1b1b1b',
        'light-gray': '#e4e4e8',
        'gray': '#A6A6A6',
        'text-gray': '#CCC',
        'navy': '#060B2B',
        'theme-blue': '#4553ad'
      },
      maxWidth: {
        '1/4': '25%',
        '2/5': '40%',
        '1/2': '50%',
        '3/4': '75%',
        '4/5': '80%',
        '9/10': '90%',
      },
      fontFamily: {
        'display': 'roc-grotesque, sans-serif',
      },
      fontSize: {
        'title': '1.75rem',
        '4.5xl': '2.5rem',
        '5.7xl': '3.75rem',
      },
      boxShadow: {
        'form': '0px 1px 7px rgba(23, 25, 40, 0.13549)',
        'dropdown': '0px 4px 5px #1b1b1b'
      },
      screens: {
        'mobile': {'max': '767px'},
        'tablet': {'min': '768px'},
        'desktopScreens': {'min': '1024px'},
        'wide': {'min': '1280px'},
        'belowRetina': {'max': '1920px'},
        'retina': {'min': '1921px'},
      }
    }
  }
}

In the above code, I have customized my theme colors, max-width property, font-family, font-sizes, box-shadow, and breakpoints.

Now, let's take a look at how these values can be used in CSS.

.banner-title {
  @apply max-w-9/10; // This will apply max-width: 90% as we defined in tailwind.config.js
}

In the above CSS, max-w is the TailwindCSS class for max-width and 9/10 is my customized value for 90%.

In the DOM, it will render as:

.banner-title {
  max-width: 90%;
}

Similarly, let's see one more example, this time for color.

.button--primary {
  @apply bg-brand-black text-white; // brand-black is the color that I customized in my tailwind.config.js
}

And the next code example shows how it will be rendered:

.button--primary {
  background: #1b1b1b;
  color: #fff;
}

The colors you define in tailwind.config.js can be used with any property that accepts a color. So if you want the text color to be brand-black, you simply write:

.button--primary {
  @apply bg-light-gray text-brand-black; // light-gray is another color that we defined.
}

Which, in the DOM, will render as

.button--primary {
  background: #e4e4e8;
  color: #1b1b1b;
}

That was quick and easy, right?

I really enjoyed using TailwindCSS; it's simple, light, highly customizable, and so easy to use. I hope this post gives you everything you need to start building your project with TailwindCSS.

For installation instructions and to check out more utility classes, you can follow the official documentation.

Good luck! :)

Written by Hansa Pandit
Frontend Engineer at Ramsalt Lab

May 02 2021
May 02

For years now, I've been using Toodledo as my task management tool, both for work and home. But work is increasing our security posture and has tightened down on "shadow IT", those systems and tools not owned and managed by the organization. That's a good thing... but Toodledo, and the Zapier integration I used to automate its interaction with GMail, aren't part of the allowed ecosystem. As a result, I find myself switching to Asana, which IT has sanctioned for use. I've just finished my first business week with Asana and here's what I'm encountering...

tl;dr -  Moments of 'this is nice' interspersed with stretches of stabbieness.

Foundationally, there's a difference in the primary use cases of the tools. Toodledo is a task manager first and foremost. There are business and collaboration features, but they were not part of the base of the product. Asana, in contrast, feels like a project management and collaboration tool first, with its task management features having grown out of that. For me, the direct manifestation of this is that Toodledo provides a lot of features to support a GTD workflow but Asana does not. I'm resistant to changing my tool set at all, so the idea of changing my whole task management approach (i.e., not using GTD) is just not viable for me. Instead, I'm trying to hack GTD onto Asana... what I have so far is viable-ish, but it's definitely a hack.

I'm going to assume if you're reading this you have a familiarity with GTD but, in case you don't, the tl;dr is that it centers around (IMO) two main principles: ubiquitous capture of tasks and a framework for deciding the most relevant task to work on at any given point in time. If you haven't read the book, do. A lot of the websites you'll read make it seem kind of cult-ish, but it works.

Capture

This is the part of the process where Asana does best. The mobile app is pretty user-friendly, it has plugins for Slack and GMail (and a bunch of others that I don't really care about), and there's even an Alfred plugin (though it might need some work).

Asana Slack integration (screenshot)

The downside is that the GMail plugin is slow as hell compared to the Toodledo-Zapier integration I used, which created Toodledo tasks when I starred an email in GMail. It's not awful, but it is noticeably slower.

The Asana GMail plugin (screenshot)

Task prioritization

Task prioritization is really where Asana falls down. As I said before, GTD is not an in-built consideration in Asana, and the best approximation I've been able to achieve is to add sections to the My Tasks screen to mimic the main work statuses in GTD, which you can see in the screenshot below. There isn't even a lot of good detail out there on how to implement GTD in Asana. The most promising guide I've found is a $10 guide published by the GTD association, but from the summaries I've been able to find, they're pretty much doing what I have. What I have works, but it involves a lot more manual movement of tasks than I had to do with Toodledo.

The sections I've set up in My Tasks (screenshot)

The Hotlist

A key feature of Toodledo is its Hotlist, a user-definable list of tasks that are the most important. For me, that list is populated with tasks that are due that day or that I have selected as things I'm going to do that day by starring them in Toodledo. But the important thing here is that Toodledo populates stuff into this for me automatically based on rules I set for it.

My Toodledo hotlist (screenshot)

In Asana, I'm having to move tasks into this manually. Asana has a Rules feature, but it's reallllly limited in what it can do. The screenshot below shows the triggers you can use on tasks in Asana with custom rules, which they consider a Premium-and-higher feature. You'll notice some things about the available rules that are just plain odd. For instance, there's a "task is no longer blocked" trigger but not a "task becomes blocked" trigger. So I can, for instance, add a rule that moves a task into the Next Actions section when it becomes unblocked, but not a parallel rule that moves it to the Hold section when it becomes blocked.

Asana's rules triggers

Another thing I'd like to automate is moving a task to Delegated when I assign a user to it, but a) there's no assignment trigger in My Tasks like there is in Asana's projects, b) you can't wildcard the user in that rule, so you can only trigger the rule for a specific user, and c) a rule in a project apparently can't move a task in the My Tasks list.

Finally, before I wrap this up, a few thoughts about dates in Asana (these are the things that make me the stabbiest, so far):

1 - Asana has task start and due dates and uses the due date for rules. But the "due date is approaching" rule apparently only fires when the boundary threshold is crossed. So if you set it to move a task to your "hotlist" 3 days before the task is due, but you create a task that is due within 3 days, the rule never fires. If a dev working for me implemented a feature this way, I'd send it back to be reworked.

2 - Asana's start date is displayed on the calendar view of tasks, but doesn't impact its visibility in the list view. So, for example, I have a Daily Review task in Toodledo that has a start and due date of that day and recurs daily from completion. When I complete today's review, the start date for the next iteration is tomorrow and, because it starts in the future, I don't see it in my lists until tomorrow, at which point it magically appears. Asana doesn't work like that. The same task set up the same way in Asana immediately creates a new instance in my task list with a start date of tomorrow, even though there is by definition no way I can complete that task until tomorrow.

May 01 2021
May 01

The month of April was celebrated as DrupalFest, marking Drupal's 20th anniversary. With all the virtual events happening over the last month, we have seen great blog posts from the Drupal community.

Here are 6 Drupal blog posts from April 2021 which you must read.

1. If I was to rebuild Drupal.org, what would I want it to look like?

Over the last few weeks, I have found myself thinking more and more about Drupal.org and the various websites and services that surround it, such as groups.drupal.org, events.drupal.org, git.drupal.org, api.drupal.org, jobs.drupal.org, Slack, drupalchat.me, drupical.com, drupal.tv, drupalcontributions.opensocial.site, and all the others that don’t immediately spring to mind!

Author: @rachel_norfolk

Read more: https://rachelnorfolk.me/2021/if-i-was-rebuild-drupalorg-what-would-i-want-it-look

2. How to Manage Multimedia in Drupal? Media Module

Author: @grzegorzbartman

Working with multimedia is one of the areas that large websites have to deal with. When multiple editors upload a large number of files, keeping your photos and videos in order can become difficult and time-consuming. Drupal has several proven recipes for managing the media library, which I'll present in this article.

Read more: https://www.droptica.com/blog/how-manage-multimedia-drupal-media-module

3. State of Drupal presentation (April 2021)

Last week, Drupalists around the world gathered virtually for DrupalCon North America 2021.

In good tradition, I delivered my State of Drupal keynote. You can watch the video of my keynote, download my slides (244 MB), or read the brief summary below.

I gave a Drupal 9 and Drupal 10 update, talked about going back to our site builder roots, and discussed the need to improve Drupal's contributor experience.

Author: @Dries

Read more: https://dri.es/state-of-drupal-presentation-april-2021

4. How to Organize Your Drupal Content With Taxonomies

Taxonomy is the process of classifying information. In Drupal, it’s also a powerful core module that allows you to assign labels to your content according to whichever patterns make the most sense for your site.

Let’s take a look at the different ways you can leverage taxonomies within a Drupal site and go over some best practices that’ll help you build your own system.

Author: @leighvryan

Read more: https://evolvingweb.ca/blog/how-organize-your-drupal-content-taxonomies

And now some Drupal 9 upgrade stories...

5. We upgraded to Drupal 9. Here’s how we did it

If you have a Drupal website you are no doubt aware that Drupal 7 & 8 will soon reach end-of-life. Part of your plan for the next 12 months needs to be getting your website migrated onto Drupal 9.

This might understandably feel a little daunting, but fear not, we can help! We have recently migrated our own website from Drupal 8 to Drupal 9, and while at times it was a little tricky, it wasn't as big, scary, or painful as you might think. It of course helps when you have a skilful team of Drupal experts that researched, tested, debugged, and deployed this core version upgrade.

Read more: https://www.equimedia.co.uk/resources/blog/we-upgraded-to-drupal-9

6. Our Drupal 9 Upgrade Story

The Drupal community is super excited for the Drupal 9 release, and so are we! 

In the spirit of our tradition of keeping the QED42 website up to date with the latest Drupal releases, we decided to celebrate the Drupal 9 release day by porting our website to Drupal 9. And we’d love to share our learnings with the Drupal community.

Author: @AshishVDalvi

Read more: https://www.qed42.com/blog/our-drupal-9-upgrade-story

Apr 30 2021
Apr 30

Update 3: The deadline for letters of interest has been made open-ended.

Update 2: The deadline for letters of interest has been extended to Wednesday, June 30th.

Update 1: The deadline for letters of interest has been extended to Friday, May 21st.

Drupal.org is the home of the Drupal community, and in its 20-year history Drupal.org has managed to coordinate and centralize the efforts of our contributors. As we look to recruit the next generation of contributors who will become project leaders in the next decade, we want to reduce the barriers to joining the community and extend the reach of a user's Drupal identity.

As such, the Drupal Association is undertaking a new project to both allow account creation and authentication to Drupal.org using common existing accounts that new contributors may already have, and allow federation of the Drupal.org identity, so third party community services like DrupalCamp sites or chat services can allow users to log in with their Drupal identity.

Successful completion of this project will allow new contributors to Drupal to join the community in a single click (or as close as possible, with Terms acceptance, etc.), and will allow existing Drupal.org users to join community-built services with their existing Drupal.org identity.

Scope

Project scope should include Discovery, Project Management, Development, Security Review, and Quality Assurance for the following key features:

  1. Migration of existing Drupal.org user identity management and federation between Drupal subsites with an industry standard identity solution, e.g: SAML + OAuth
  2. Replacement or update of the 'Bakery' module for managing login state and synchronizing user profile data across sub-sites of Drupal.org.
  3. Allow Drupal.org account creation and/or login using an existing identity provider, with all appropriate disclaimers about data sharing to comply with global regulation: 
    1. Required identity providers:
      1. GitHub.com
      2. GitLab.com
    2. Optional identity providers: 

      1. Atlassian ID
      2. Google ID
  4. Integrate solution for Spam account mitigation based on either existing Drupal.org account protection, or other method developed in collaboration with DA engineering staff.

  5. Enforce an additional terms of service acceptance during account creation with a third-party identity. 

    1. Enforce an admin-triggered re-acceptance in case of changed terms of service.
  6. Allow a method for a 3rd party site or service to offer 'Create account/login with your Drupal.org identity' 

    1. Enforce a requirement for only approved sites/services to be allowed to use this identity.
    2. Enforce a requirement that data shared is disclosed to users before account creation is confirmed. 

Technical constraints and additional requirements

The chosen solution must meet the following additional technical constraints and requirements: 

  • Strong preference for a self-hosted identity store rather than a SaaS/third-party solution. If a SaaS solution is proposed, its privacy policy must be stronger than industry standard and fully compliant with international regulations like GDPR, and data must be fully portable.
  • After discovery interviews with the Drupal Association engineering team, we must decide which system is the source of truth for user account data: Drupal.org, a SAML database, or other.
  • Must support SSO from www.drupal.org to all sub-sites, and assistance with any necessary data migration.
  • Must support SSO for Drupal 7 and Drupal 9 - as Drupal.org sites will be migrated from version 7 to 9 one at a time. 
  • Must support SSO from www.drupal.org to our self-hosted GitLab instance, including assistance with any necessary migration of existing account data.

Vendor requirements

The Drupal Association will consider contracts from both individual developers and agencies.

An individual must: 

  • Be a member of the Drupal Association
  • Provide a portfolio of examples of prior identity and authentication projects

An agency must: 

  • Active Supporting Partner of the Drupal Association that qualifies for any level of the new Drupal Certified Partner Program
  • Provide a portfolio of examples of prior identity and authentication projects
  • Provide a statement or link that reflects your organization's commitment to Diversity, Equity, and Inclusion.

Other Considerations:

Please indicate if you’re willing to accept in-kind benefits if your bid comes in higher than our allocated budget. The cash portion of the budget should not exceed $28,000 USD.

The point person for this project at the Drupal Association is generally available between 4:00 PM - 11:00 PM UTC. We welcome global responses but we’d prefer meeting times to be within our standard business hours. We will make every effort to accommodate times outside of standard Pacific Time business hours.

Timeline

We would like the authentication and identity solution to be implemented no later than October 1st, 2021.

Individuals or Agencies who intend to participate should provide their bids and samples of portfolio work to the Drupal Association via email ([email protected]) no later than Friday, May 21st at 5pm U.S. Pacific. Respondents will be notified of the decision no later than June 15th.

Apr 30 2021
Apr 30

Drupal AssociationUpdate 3: The deadline for letters of interest has been made open-ended.

Update 2: The deadline for letters of interest has been extended to Wednesday, June 30th.

Update 1: The deadline for letters of interest has been extended to Friday, May 21st.

Drupal.org is the home of the Drupal community, and in its 20 year history Drupal.org has managed to coordinate and centralize the efforts of our contributors. As we look to recruit the next generation of contributors who will become project leaders in the next decade, we want to reduce the barriers to joining the community, and extend the reach of a user's Drupal identity.

As such, the Drupal Association is undertaking a new project to both allow account creation and authentication to Drupal.org using common existing accounts that new contributors may already have, and allow federation of the Drupal.org identity, so third party community services like DrupalCamp sites or chat services can allow users to log in with their Drupal identity.

Successful completion of this project will allow new contributors to Drupal to join the community in a single click(or as close as possible with Terms acceptance, etc), and will allow existing Drupal.org users to join community-built services with their existing Drupal.org identity. 

Scope

Project scope should include Discovery, Project Management, Development, Security Review, and Quality Assurance for the following key features:

  1. Migration of existing Drupal.org user identity management and federation between Drupal subsites with an industry standard identity solution, e.g: SAML + OAuth
  2. Replacement or update of the 'Bakery' module for managing login state and synchronizing user profile data across sub-sites of Drupal.org.
  3. Allow Drupal.org account creation and/or login using an existing identity provider, with all appropriate disclaimers about data sharing to comply with global regulation: 
    1. Required identity providers:
      1. GitHub.com
      2. GitLab.com
    2. Optional identity providers: 

      1. Atlassian ID
      2. Google ID
  4. Integrate solution for Spam account mitigation based on either existing Drupal.org account protection, or other method developed in collaboration with DA engineering staff.

  5. Enforce an additional terms of service acceptance during account creation with a third-party identity. 

    1. Enforce an admin-triggered re-acceptance in case of changed terms of service.
  6. Allow a method for a 3rd party site or service to offer 'Create account/login with your Drupal.org identity' 

    1. Enforce a requirement for only approved sites/services to be allowed to use this identity.
    2. Enforce a requirement that data shared is disclosed to users before account creation is confirmed. 

Technical constraints and additional requirements

The chosen solution must meet the following additional technical constraints and requirements: 

  • Strong preference for a self-hosted identity store, rather than a Saas/third party solution - if a Saas solution is proposed, privacy policy must be stronger-than industry standard, fully compliant with international regulation like GDPR, and data must be fully portable. 
  • After discovery interviews with the Drupal Association engineering team, we must decide which system is the source of truth for user account data: Drupal.org, a SAML database, or other.
  • Must support SSO from www.drupal.org to all sub-sites, and assistance with any necessary data migration.
  • Must support SSO for Drupal 7 and Drupal 9 - as Drupal.org sites will be migrated from version 7 to 9 one at a time. 
  • Must support SSO from www.drupal.org to our self-hosted GitLab instance, including assistance with any necessary migration of existing account data.

Vendor requirements

The Drupal Association will consider contracts from both individual developers and agencies.

An individual must: 

  • Be a member of the Drupal Association
  • Provide a portfolio of examples of prior identity and authentication projects

An agency must: 

  • Be an active Supporting Partner of the Drupal Association that qualifies for any level of the new Drupal Certified Partner Program
  • Provide a portfolio of examples of prior identity and authentication projects
  • Provide a statement or link that reflects your organization's commitment to Diversity, Equity, and Inclusion.

Other Considerations:

Please indicate if you’re willing to accept in-kind benefits if your bid comes in higher than our allocated budget. The cash portion of the budget should not exceed $28,000 USD.

The point person for this project at the Drupal Association is generally available between 4:00 PM and 11:00 PM UTC. We welcome global responses, and while we would prefer meeting times within our standard business hours, we will make every effort to accommodate times outside of standard Pacific Time business hours.

Timeline

We would like the authentication and identity solution to be implemented no later than October 1st, 2021.

Individuals or Agencies who intend to participate should provide their bids and samples of portfolio work to the Drupal Association via email ([email protected]) no later than Friday, May 21st at 5pm U.S. Pacific. Respondents will be notified of the decision no later than June 15th.

Apr 30 2021
Apr 30

Drupal Association

Update 2: The deadline for letters of interest has been extended to Wednesday, June 30th.

Update: The deadline for letters of interest has been extended to Friday, May 21st.

There are more than 1 million Drupal sites actively in use across the globe. At least one in 40 websites that a typical internet user visits is likely to be built on Drupal; however, the Drupal project leads and the Drupal Association do not necessarily have a direct line of communication to each of these Drupal site owners.

While the Association can reach Drupal users and site owners who visit Drupal.org, attend DrupalCon, or follow one of our social channels, there are many more users who do not engage with the Drupal community outside of their own Drupal instance control panel.

This initiative seeks to change that by adding a channel for important project and association messaging directly in Drupal Core's admin interface. 

Scope

The feature proposal has already completed the core idea queue process, and the initial prototype of the Project Messaging in Core feature has already been built as a contributed module. This existing work currently needs to be transferred into a true core feature request issue with added test coverage and any additional changes requested by the core team.

The work then needs to be backported to Drupal 7, and possibly to Drupal 8 after discussion with core maintainers.

  1. Full technical and security review of the existing prototype project.
  2. Remediation of any identified issues.
  3. Completion of test coverage. 
  4. Shepherding through the process of getting feedback from the appropriate core maintainers. 
  5. Implementing required feedback. 
  6. Once committed - move on to backports:
    1. Backport to Drupal 7
    2. Shepherd through the approval process with Drupal 7 maintainers. 
  7. Optionally - pending a discussion with core maintainers, consider porting to Drupal 8. 

Technical constraints and additional requirements

The chosen solution must meet the following additional technical constraints and requirements: 

  • The solution must be built to appropriately aggregate the feed of messages from Drupal.org, in collaboration with the Drupal Association engineering team.
  • The solution must be built according to core contribution standards. 

Vendor requirements

The Drupal Association will consider contracts from both individual developers and agencies.

An individual must: 

  • Be a member of the Drupal Association
  • Provide examples of their contributions to Drupal core, especially feature additions, rather than bug fixes.

An agency must: 

  • Be an active Supporting Partner of the Drupal Association that qualifies for any level of the new Drupal Certified Partner Program
  • Provide examples of their contributions to Drupal core, especially feature additions, rather than bug fixes.
  • Provide a statement or link that reflects your organization's commitment to Diversity, Equity, and Inclusion.

Other Considerations:

Please indicate if you’re willing to accept in-kind benefits if your bid comes in higher than our allocated budget. The cash portion of the budget should not exceed $30,000 USD.

The point person for this project at the Drupal Association is generally available between 4:00 PM and 11:00 PM UTC. We welcome global responses, and while we would prefer meeting times within our standard business hours, we will make every effort to accommodate times outside of standard Pacific Time business hours.

Timeline

The project messaging feature has a hard deadline of October 1st, 2021 in order to be included in the last release of Drupal 9, and for a backported release to Drupal 7. Preference will be given to proposals that complete the work on a much shorter timeline in order to allow plenty of time for the core acceptance process.

Individuals or Agencies who intend to participate should provide their bids and samples of portfolio work to the Drupal Association via email ([email protected]) no later than Friday, May 21st at 5pm U.S. Pacific. Respondents will be notified of the decision no later than June 15th.

Apr 30 2021
hw
Apr 30

This DrupalFest series of posts is about to end, and I thought a fitting end would be to talk about the future of Drupal. I have already written a bit about this in my previous post: what I want to see in Drupal 10. However, this post is more aspirational and even dream-like. Some of these ideas may sound really far-fetched, and it may not even be clear how to get there. For the purposes of this post, that's fine. I will not worry about the how; just the what.

Modular code

Drupal is already modular; in fact, it is rather well designed. As Drupal grows and adds new features, the boundaries need to be redefined. The boundaries that made sense years ago for a set of features may need to be different today. The challenge is in maintaining backwards compatibility for the existing code ecosystem around Drupal. It also means fundamentally rethinking what Drupal's constructs are. The transition from Drupal 7 to 8 was iterative, and in many cases the systems were simply converted from procedural to object-oriented code. We need to rethink this. We should not be afraid to use the newest trends in programming and features from PHP, and to rethink what architecture makes sense.

If you’re thinking this sounds too much like rewriting Drupal, you’re right. And maybe the time isn’t right for that, yet. I do believe there will be a time when this becomes important. This will also help with testability, which was the topic of my previous post about Drupal 10.

Assemblable code

Once we have code that's modular, we can break it off into packages that can be reused widely. This is already on the minds of several core developers and is the reason you see the Drupal\Component namespace. I want to see more systems moved out, not just into a different namespace but into entirely different packages.

The direct upshot of this is that you can use Drupal's API (or close to it) in other PHP frameworks and applications. But that is not my main interest here. My interest in seeing this happen is that forcing the code to be broken up this way will result in extremely simple and replaceable code. It will also result in highly understandable and discoverable code, and that is what I want. Drupal core is opaque to most people except those who directly contribute to it or to some of the complex contrib modules. I believe it doesn't have to be, and I hope we get there soon.

Microservices

Taking this even further, I would like to see some way to distribute Drupal's functions across different servers. I am thinking there could be a server dedicated to authentication and another for content storage. This is a niche use case and therefore a challenge to implement in a general-purpose CMS such as Drupal. It would need kernels that can run statelessly, and it would mean breaking systems apart so that they can run independently using their own kernels. The challenge is that all of this would have to be done in a way that doesn't increase the complexity of the existing runtime. This, as you might have guessed, is contradictory.

More themes

I might not have thought about this some time back. But with the renewed focus on editors and site builders, I think it makes sense to improve the theme offerings available for Drupal. There were many times when I built powerful functionality but presented it with a completely unimpressive look. Olivero and Claro are big steps in this direction, and I am hoping for more themes of the same quality. If there were more, I wouldn't have run Contrib Tracker for over three years with the stock Bootstrap theme (even the logo was from the theme until recently).

Unstructured Content

Drupal is great at structured content, but I see Drupal used more and more for unstructured pages. The way we do this now is by using structured building blocks such as Paragraphs to represent components, but this method has its shortcomings. Paragraphs and similar concepts quickly get too complicated for the editor. Editing such pages is very painful, and managing multiple pages with such content is tricky.

We now have Layout Builder, which is a step forward in addressing this problem. The ecosystem around the Layout Builder modules is scattered right now, in my perception, and I would love to see where the pieces land (pun intended). I would like to see Drupal fundamentally recognize these pages as separate from structured data so that the editing workflows can be separate as well. More on that in the next section.

Better media and editor experience

I know media handling is a lot better today than it was before, but there's still a long way to go. When working with pages (not structured content), I would like a filtered editing experience that lets me focus on writing. WordPress does this part well, but WordPress has a specific target audience, which allows it to make this decision. This is the reason I said earlier that I would want Drupal to treat unstructured pages differently.

With a more focused editor experience, I would want better media handling as well. I want to paste images directly into the editor without worrying about generating resized versions. I also want pasted images to be handled somewhat similarly to the current media elements so that we can still deduce some structure. Even tiny things like expanding tweets inline and formatting code correctly could be supported in the editor and that could make a big difference.

Starter kits

We need ways to spin up sites for varied purposes quickly, but I don't think distributions are a good answer, at least not the way they are implemented right now. Once you install a distribution, you are stuck with it and have to rely on the distribution maintainers for updates. I think that's a waste of effort. I know it's possible to update parts of a distribution without waiting for a release, but the process is harder. And if a distribution is going out of support, it is difficult to remove; it is almost impossible if the distribution authors didn't design for it. We already have challenges with migrations and upgrades; why add distributions on top of that?

Instead, I would like to see some support for starter kits within Drupal core. Rather than installing a distribution, you would install Drupal from a starter kit, which would point to modules and contain configuration. Once installed, it would be as if the site had been set up manually. I believe a lot of such solutions are also possible outside Drupal core (using drush, for instance), but it would be nice to have some sort of official interface within core.

What else?

I know that all of the things I mentioned are only possible in part or have challenges that involve tradeoffs. It's not easy to figure out which tradeoffs we should make, considering that Drupal's user base is massive. After all, what I wrote about today are my wishlist items for Drupal; they may be something completely different for you, and that's okay. I believe the above are already aligned with the vision set for Drupal by Dries and the core team (ambitious experiences, editor-first, site-builder focused, etc.). With that as the North Star, we should be able to find a way forward.

That is it for now. I can dream more about the removal of features (such as multisite) but that’s for some other day. Let me know what things you would like to see in Drupal.

Apr 29 2021
Apr 29

Currently LocalGov Drupal relies on Bootstrap for its grid system. I think we can achieve a really nice grid system with about 30 lines of CSS. Let's see!

When it comes to CSS frameworks, I really don't like using them. I find them bloated and opinionated. I also think they go against basic principles of scaling a web project, because each dependency we include forces that dependency on all users.

If there's something we can do in a few lines of code that means we don't need to depend on a framework, I think we owe it to ourselves to investigate it. Here's my proof-of-concept grid system, based on the grid systems of the GOVUK and NHSUK design systems. It's only about 30 lines of CSS.

I've started streaming some of my work, especially my contributions to open source, on Twitch, if you'd like to follow along or subscribe. As it turns out, Twitch deletes your videos after 14 days, so I have also been uploading them to YouTube. Feel free to subscribe.

Apr 29 2021
Apr 29

I've been doing a lot of contribution to LocalGov Drupal. Here's my thought process when refactoring some code for the alert banner module.

LocalGov Drupal is the distribution for local councils in the United Kingdom. When reviewing the alert banner module, a few small issues with the alert banner HTML and CSS niggled me. I wanted to make sure the HTML followed BEM naming conventions, and that the CSS did the same. This makes the CSS easier to read and less reliant on nested selectors.

There was also a tiny issue with hover states on some of the buttons.

I've started streaming some of my work, especially my contributions to open source, on Twitch, if you'd like to follow along or subscribe. As it turns out, Twitch deletes your videos after 14 days, so I have also been uploading them to YouTube. Feel free to subscribe.

Apr 29 2021
hw
Apr 29

The configuration API is one of the major areas of progress in Drupal 8. It addresses many of the challenges of managing a site across environments that existed in Drupal 7 and before. It's not perfect. After all, it's just version 1, and there is work going on in CMI 2 to fix the problems in the current version. That is not the subject of this blog post, however. In this post, I want to talk about one of the lesser-understood features of configuration management: overriding.

Most site builders and developers interact with the Drupal configuration API through “drush config-export” and “drush config-import”. With one minor exception, Drupal configuration is an all-or-nothing deal, i.e., you apply the configuration as a whole or not at all. There are modules like config_ignore and config_split to work around these limitations. These modules allow ignoring parts of the configuration, or splitting and combining configuration from different locations. Further, Drupal allows overriding specific configuration through settings.php. A combination of all of these makes Drupal configuration workable for most complex site-building needs.

The Configuration API

The configuration API provides ways to support all of the different scenarios described above. There are very simple constructs that are comparable to Drupal 7’s variable_set and variable_get functions. If you want to write a simple module that needs access to configuration, the documentation is enough to work out all of the details for you. Yet, there are some lesser-understood parts of how the configuration system handles overriding and other edge cases. I have seen this often when interviewing people: even though overriding is well documented, its lack of visibility explains why many developers are not aware of it.

As I said before, I will mainly talk about the overriding system here. You are welcome to read all about the configuration API from the documentation. Specific areas I would recommend reading about are:

The configuration API needs to handle overriding specifically because overrides could affect exported configuration. If you override configuration in the settings.php file and the config system didn't know about it, the overridden values would be exported to the configuration files. This is not what you want: if your intention was to export a value, there was no reason to set it in settings.php in the first place. That is why the configuration API needs a mechanism to be aware of overrides.
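For reference, here is a minimal sketch of such an override in settings.php, using the site name and slogan from the system.site configuration object as an example (the specific values are only illustrative):

// In settings.php: override values from the system.site configuration object.
// These overrides take precedence at runtime but are never written to the
// exported YML files when you run "drush config-export".
$config['system.site']['name'] = 'Staging copy of My Site';
$config['system.site']['slogan'] = 'Do not index this environment';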

Overriding configuration

The configuration API handles these overridden values separately and also makes them available to your code. This is why, if you export configuration after setting values in settings.php, the overridden values won't be present in the exported YML files. As a site builder, you won't have to worry about this. But if you are a module developer, it helps to understand how to differentiate between these values.

Before I talk about the code, let's consider the scenarios in which you would need this. As a module developer, you might use configuration in different ways, and for each of them you may need either the overridden data or the non-overridden (original) data.

  • For using the configured value in your logic, you would want the overridden data. Since you are using the value in your logic, you want to respect Drupal’s configuration system of overriding configuration.
  • For showing the value in a form, you might want to show the original data. You can choose to show overridden data but Drupal core shows the original data. There is an issue open to change this. Right now, there is a patch to indicate that the data is overridden and changes won’t take effect on the current environment.
  • While saving values to the configuration, you always work with the original data. As a module developer, you cannot affect the overridden value through the configuration API. In fact, to save the configuration, you need to call getEditable, and that always returns the original data. When you set a new value and save it, you change the value in the configuration storage; yet the override from settings.php still takes precedence at runtime (see the sketch after this list).
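A minimal sketch of that last scenario, again assuming the system.site name is overridden in settings.php as in the earlier example:

// Save a new site name through the configuration API.
\Drupal::configFactory()
  ->getEditable('system.site')
  ->set('name', 'Name saved from code')
  ->save();

// The active configuration storage (and the next export) now contains
// 'Name saved from code', but reading the config at runtime still returns
// the override from settings.php.
$runtime = \Drupal::config('system.site')->get('name');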

Accessing overridden values

If you have built modules, you might already be aware that you can use a simple get call on the config service to read a value from the configuration (see example). This returns an immutable object, which is only useful for reading. To save configuration, however, you need to go through the config factory’s getEditable method (example). This returns an object on which you can set values and save them.

While it may seem obvious that this is the only difference between the “get” and “getEditable” methods, there is a subtler difference. The “get” method returns the overridden data, whereas the “getEditable” method returns the original data. This means there is a chance that the two methods return different values. As a module developer, it is important to understand this difference.
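A minimal sketch of that difference, assuming the system.site name is overridden in settings.php as shown earlier:

// Immutable config: overrides from settings.php (and modules) are applied.
$overridden = \Drupal::config('system.site')->get('name');
// $overridden is the overridden value, e.g. 'Staging copy of My Site'.

// Editable config: returns the original data from the active configuration storage.
$original = \Drupal::configFactory()->getEditable('system.site')->get('name');
// $original is the value that will be exported, not the override.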

What if you want an immutable object but with the original data? There’s a method for that: getOriginal (see example). Most modules won’t need to worry about the original data in the configuration storage except when saving. For that reason, it is not common to see this method in use. Even if a module were to indicate the differences between original and overridden configuration, it can use getEditable in most cases.
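A small sketch of how that might look; this assumes getOriginal()'s documented signature of ($key = '', $apply_overrides = TRUE), so passing FALSE skips the overrides:

// Read both the overridden and the stored value from the same immutable object.
$config = \Drupal::config('system.site');
$overridden = $config->get('name');                // override applied
$stored = $config->getOriginal('name', FALSE);     // raw stored value, overrides skipped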

The difference between “get” and “getEditable” has been the trickiest interview question I have ever asked. Out of hundreds of interviews, I have only occasionally seen someone answer it. I hope this post helps you understand the difference and why the subtlety matters.
