Jan 12 2021

The term Drupal evokes different feelings in different people, based on their professional background or on what they have heard or learned about the CMS. Over the years, Drupal CMS has evolved from a simple tool for hobbyists to a powerful digital experience platform for global enterprises. While Dries describes Drupal as a platform for "ambitious digital experiences", it is commonly referred to as a content management framework that allows for extensibility & scalability through the addition of various user-created modules that build upon its core framework.

Since Drupal 8 and its adoption of continuous innovation, new features and modern libraries have been added with every twice-yearly minor release. Drupal 9 is already out, with its first feature (minor) release, Drupal 9.1.0, arriving last month, and we are already seeing some of Drupal 9's strategic initiatives being fulfilled. With Drupal, promises are delivered. And this is something every organization, big or small, looks for in a CMS. 

 

Drupal for everyone - ease of use

Popularity

Drupal is one of the most popular CMSes currently available & is the preferred choice for government agencies, large associations, non-profits & numerous Fortune 500 companies. Currently, over 1,738,777 websites around the world are built on Drupal. 

This graph compares search trends for the term "Drupal" with another popular CMS, "Joomla", over a period of 3 years & clearly depicts Drupal's growth in popularity.


But why do large enterprises prefer Drupal? Is it as "easy to use" as they say? Let us find out.

Features for Everyone

One of the toughest challenges that Drupal adopters face, whether they are new site owners or novice developers, is trying to figure out what is difficult & what is easy with Drupal. Most of their questions revolve around the ease of use that Drupal as a platform brings to the table. Let us look at some of the basic (yet important) features that Drupal CMS provides to website owners.

Installation

Installing Drupal requires little "technical knowledge" beyond knowing how to connect through FTP and set up a database. Once you are ready with your prerequisites, the Drupal 8 installation will take you only minutes to complete. The installer's performance has been improved by 20% in Drupal 9.1.0, which makes installation even faster and easier. Would you believe me if I said that the installation time of Drupal CMS for a new user familiar with installing other systems might be less than one and a half minutes?


Maintenance and Upgrades

One of the main focus areas for Drupal 9.0.0 was to improve its upgrade experience. And so it has. Upgrading to a major version is now as easy as upgrading to a minor version, as the Drupal 8.9 to Drupal 9 upgrade clearly demonstrated. 

Drupal CMS ensures that the maintenance and upgrades are easy to handle by the site administrators. The procedures for updating your website include backing up the website and then replacing the files using a web update interface.

Backing up the website takes minimal effort as the site administrator can back up the whole website by downloading only one file which contains the assets of the website.

Drupal also notifies the site admin every time an upgrade is required, thus ensuring that the website never misses an opportunity to stay up-to-date. However, if the administrator does not wish to change the version, Drupal CMS also provides security updates for previous versions. For example, even though Drupal 9 was released in June 2020, Drupal 7 still continues to receive support from the community.

Community Strength and Contribution

The unofficial tagline of Drupal - "Come for the software, stay for the community" - speaks volumes about the strength of the community. Active since 2001, the Drupal community is known for its dedicated developers and contributors who use, build, teach, document and market best practices in Drupal. You can find their amazing work on Drupal.org.

Usability

Drupal CMS allows administrators to access any page or a section of the page in visitor mode by clicking on edit. While the core does not include a WYSIWYG editor, you can still get it in the form of a module, replacing all the other editor integration modules. Drupal CMS allows easy editing of pages or sections of a page by creating a simplified experience for the editors and administrators.

Scalability

Drupal CMS is highly scalable, with strong traffic handling capabilities. Its web pages are cached indefinitely by default, but caching can also be configured for a specific time. Moreover, individual blocks (functional areas of a page) can be cached, allowing your websites to handle traffic even better.

Whether it is the extreme traffic spikes on certain occasions or the constant web traffic, Drupal handles all of that with utmost ease. Did you know that the digital experience of Australian Open 2019 was powered by Drupal? Like Dries said “When the world is watching an event, there is no room for error!”

Web 2.0 Features

Drupal CMS is an excellent community platform provider and it outperforms all other options in this particular area. The platform allows a website administrator to set permissions for site visitors to comment on any content on the website.

Drupal also lets administrators set permissions on who can create, edit or delete various content types. Whether it is an article, pictures, videos or any other media files, everything is managed by the admin.

Security

Security is a major concern for web properties these days and Drupal leaves no stone unturned to ensure that your website is secure from any possible security breach. Security updates are published on drupal.org and the users are provided a notice every time a new update is released. Drupal’s active community is alert and any security loopholes are remedied very quickly. They also provide references to guide the user in making a site more secure. 

When it comes to security, Drupal wins hands down when compared to other open source CMSes in the market today. Check out the statistics below, which compare the number of sites running these popular CMSes that were compromised in 2016. Drupal accounts for only 2% of the hacked websites according to this research.

Drupal security comparison

User Roles and Workflow

The greatest asset of Drupal CMS is its ability to create any number of user roles and assign them different permissions. While Drupal's core includes two default roles, anonymous user and authenticated user, it allows you to create multiple user roles depending on your content types. Granular permissions can also be assigned per content section using taxonomy.


How Does Drupal Make Things Easy?

  • Advanced Control of URLs: Drupal provides precise control over the URL structure of a page. Each content item (called a node in Drupal) can be given a custom URL, and the Pathauto module can automate custom URL patterns for each content type.
  • Custom Content Types and Views: Using Views and the Content Construction Kit (CCK), Drupal allows you to create new content types without having to write a single line of code! Yes, any number of custom content types can be created and displayed in many different ways without any code! Some examples of content types that you can create are forum posts, tutorials, blog posts, news stories, classified ads, podcasts, videos and more.
  • Theming and PHP Templates: PHP knowledge for theming? No, not anymore! Theming in Drupal can be done with absolutely no PHP knowledge. Drupal CMS uses the PHPTemplate theme engine by default.
  • Hook System: This system in Drupal enables you to hook in new modules easily. A hook is invoked when a particular activity is performed in Drupal. This approach allows Drupal core to call, at specific places, certain functions defined in modules and enhance the functionality of core. Hooks make it possible for a module to define new URLs and pages within the site (hook_menu), to add content to pages (hook_block, hook_footer, etc.), to set up custom database tables (hook_schema) and more.

I completely agree with Dries when he said that Boris nailed it!

"Drupal sucks less" - Dries Buytaert

Jan 12 2021

2020 was hard.

At Promet Source, we’re planning for and counting on 2021 being easier and better in many ways.

We realized last year that there actually was something we could do to raise the bar for 2021 and make life easier and better for everyone who manages a Drupal website. 

We developed Provus.
 

What is Provus?

The brainchild of Aaron Couch, Promet’s Lead Solutions Architect, Provus is Promet’s newly launched Drupal platform. Utilizing Atomic Design principles, Provus combines the latest drag-and-drop page building tools in Drupal with a curated library of design components, enabling content editors to easily layer designs, add functionality, and rearrange layouts.  

An essential differentiator from other drag-and-drop tools is the degree to which Provus empowers content creators, while at the same time adhering to an organization’s brand guidelines to ensure consistency and aesthetic alignment. 

From a development perspective, Provus allows for vast new efficiencies as we work toward eliminating the wall that previously existed between easy-to-create-and-manage SaaS solutions and scalable Drupal solutions for websites with complex data models and a depth of content.
 

New Perspectives and Possibilities

Provus was inspired by the realization that nearly every website consists of various combinations of roughly 15-20 types of features or patterns. By organizing a library of high-quality components that can be repurposed for low-code, no-code site building, we create a foundation for:

  • Easier content editing capabilities with drag and drop functionality
  • Greater design flexibility within defined brand standards
  • Streamlined development using Drupal’s proven content models

The Provus Technology Stack

Promet's open source Provus starter kit for component-based Drupal sites is based on Atomic Design principles, using Emulsify as the base theme and leveraging Storybook to create a library from which newly themed components are mapped into Drupal Layout Builder for a flexible, dynamic, drag-and-drop CMS. 

Provus in Action

Traditional Drupal theming includes CSS and JavaScript selectors that are intertwined with their context, connecting them to the backend implementation. The result of this "theme for the page" approach is that assets can't be repurposed across projects.

Having identified that component-based theming tools are key to next-level efficiencies in website building, our next step was to single out an optimal approach for delivering reusable components. 

Promet's strategy for achieving this new UI and content management paradigm incorporates the Emulsify® design system, a component-driven Drupal theme that gives us a huge lift in building repurposable components. Emulsify functions as a starter component library paired with Storybook, a tool for building user interface components that contains the Atomic Design library. Storybook can be turned on from within the Emulsify theme, resulting in a highly efficient new workflow.
 
With Provus, components built using JavaScript and CSS are curated into a library. If the backend implementation changes or we want to move a component to another project, the component itself is not changed, allowing us to efficiently redesign and reuse it.

What Sets Provus Apart?

Content editor empowerment, combined with robust guidance and governance, are the key factors fueling the success of Provus. More specifically:

  • Self-adjusting features within components create a foundation for both readability and ADA accessibility, by ensuring, for example, adequate contrast between fonts and background colors. 
  • Design governance offers the assurance that content editor empowerment does not translate into mismatched, crowded, or sub-par page designs. Customization options are presented within an expertly calibrated design framework for ensuring the highest quality designs and user experiences on all devices, without breaking layouts or straying from an organization’s brand guidelines. 
  • Content editors are able to seamlessly edit components and change patterns within the view mode, eliminating time-consuming processes of reentering content and switching back and forth between edit and publish modes.

As a thought leader on how humans interact with technology, Promet Source has enthusiastically pursued component-based design systems for their potential to deliver high-velocity capabilities that drive consistency and collaboration. 

While Provus provides for game-changing advantages on multiple levels, we’re most excited about the amazing new capabilities that we are now able to offer our clients. In blending a formal design system that ensures brand consistency across the site with the flexibility of drag-and-drop site building tools within Drupal core, we are reducing the cost of ownership and empowering clients with a site that’s designed to flex and expand to fit evolving needs and new priorities. 

Interested in learning more about Provus or seeing a demo of Provus in action? Let us know how we can help and we'll be in touch!

Stay in-the-know with what's new and next for Drupal. Subscribe to the Promet Source newsletter.


 

Jan 11 2021

As I am frequently coaching individuals who start in the Scrum Master role, I realized there was one aspect that was rarely written about: how to begin.

Discover more about the Agile teams and processes our digital agency has to offer.

Yes, it's a terrifying role

So that's it, you have passed the certification for Scrum Master and it is time for you to join the team and play the role. You are terrified, and I understand why. After all, you're lucky we've spared some budget for this role and you'd better make the team perform better right away so your position is maintained. All eyes on you. If you don't make it, we give it all back to the almighty Project Manager. In the spotlight of all these expectations, here is an invitation to take a step back and relativize.

It's not about you, it's about the team

Most importantly, it is not about you, and will never be. It is about the team. So do not carry its destiny upon your shoulders. All you will ever do is serve them and hold a mirror to them. That's it. You have to walk with them, not ahead of them. Your angle is the one of curiosity: "Oh, have you noticed that? What do you think about it?". Naive is powerful, because it blows away all preconceptions. You can, over and over, invite your team to look at the status quo with a fresh angle, which may inspire them to take action, or try new things (and follow up on them). If you've managed that, your job is done.

Start from where the team is

It is also a bad idea to go in with an upfront plan of how you want to "change how things are run". Chances are there are many assumptions in your head, which may be completely off. Instead of bulldozing your way into the team, blasting and criticizing whatever is present, I urge you to remember that whatever is in place was put there by professionals. The way the team functions today is how it has best overcome its problems so far, so respect that. I'll quote the Kanban principle: "Start from where you are". From there, lead the team to experiment, little by little. It can go a long way.

Don't wait to be ready

The polar opposite of this attitude is also very tempting. It is to remain paralyzed. "I don't feel ready". Who does? While it is certainly a good thing to attend a course and obtain a certification, there are enough books, articles and conferences on Scrum and Agile to fill several lifetimes. For the benefit of your team, don't wait until you've read them all. Practice is going to be your teacher. The best there is. Just like the team, you are going to do the best you can, day after day, and for sure it's not going to be perfect...

Look for criticism

... So there will be criticism. That is great news. If nobody says anything, that means everybody thinks you're beyond the point of recovery and it's not even worth it anymore to give feedback. Constructive criticism is your ally in doing a better job for your team. I even advise you to actively seek feedback. There are retrospective activities tailored just for that, such as "Build Your Own Scrum Master". Make it a game for the team. That way, you show that though you take the role seriously, you certainly do not take yourself seriously.

About today

So, what about today? Day One? Well, two postures I've previously written about are always available: The Servant and The Mechanic. As a servant, there's probably a hand you can lend to the team right now. Ask around, and remember, a chore is not a chore. It's a chance to lead by example. If you pull your finger out for your teammates, you'll not only shine but you'll also inspire them to do it more as well. As a process mechanic, have a look at the team's Scrum. How is the next sprint coming along? Is the backlog prioritized? If you have chosen User Stories to express needs, are there enough of them in a ready state? What does "Ready" mean for your team? Those are great conversation starters. Dive in. And if anything's off, investigate, don't blame.

Get accompanied on the journey

Sure, all of this is still a handful. But you don't have to go it alone. There is a tremendous global community of practice and many local ones too. Don't be afraid to check out the scrum.org forums, or browse meetup.com for groups near you – or far away from you, as remote work has made the world even flatter than before. If there are several Scrum Masters in your organization, hook up with them and set up weekly coffees to exchange your war stories. And if you feel like getting accompaniment on your journey, don't hesitate to reach out. Whether it is me or one of my colleagues from the Liip coaching team, it would be a pleasure to walk along with you.

Jan 11 2021

When developing or maintaining a Drupal theme, there is often a need to understand why a theme needed to override a given template. A diff of the template in the active theme compared to the template in the base theme would make this easy to understand. A drush command that found the template files for you and output a diff would solve this quite nicely.

Template Diff

The Template Diff module does just that. It provides a drush command that accepts a template name and will display the diff between two specified themes. If no theme is specified it defaults to comparing the active theme and its base theme.

Examples

Compare the active theme and its base theme:

drush template_diff:show views-view

Compare "foo_theme" vs "bar_theme":

drush template_diff:show views-view foo_theme bar_theme

Compare "some_theme" and its base theme:

drush template_diff:show views-view some_theme

The output will look something like this:

$ drush template_diff:show views-view
 [notice] Comparing chromatic (active theme) and stable (base theme).
- stable
+ chromatic
@@ @@
 {#
 /**
  * @file
- * Theme override for main view template.
+ * Default theme implementation for main view template.
  *
  * Available variables:
  * - attributes: Remaining HTML attributes for the element.
- * - css_name: A CSS-safe version of the view name.
+ * - css_name: A css-safe version of the view name.
  * - css_class: The user-specified classes names, if any.
  * - header: The optional header.
  * - footer: The optional footer.
…

If you have ideas on how to improve this, submit an issue so we can make understanding template overrides even easier.

Jan 11 2021

Lynette has been part of the Drupal community since Drupalcon Brussels in 2006. She comes from a technical support background, from front-line to developer liaison, giving her a strong understanding of the user experience. She took the next step by writing the majority of Drupal's Building Blocks, focused on some of the most popular Drupal modules at the time. From there, she moved on to working as a professional technical writer, spending seven years at Acquia, working with nearly every product offering. As a writer, her mantra is "Make your documentation so good your users never need to call you."

Lynette lives in San Jose, California where she is a knitter, occasionally a brewer, a newly-minted 3D printing enthusiast, and has too many other hobbies. She also homeschools her two children, and has three house cats, two porch cats, and two rabbits.

Jan 11 2021

I like to think of this module as something you don't realize you need until you understand exactly what it does. With that in mind, let's start with an example…

Imagine you have a "Document" content type (or media entity) that you use to upload PDF files to your site. Document entities are then used as part of various other entities (often content types) on your site via reference fields. Now for the important bit: Document entities are not meant to be viewed on their own - they are only meant to be available as a part of another entity via a reference field.

When a site design calls for this type of situation, what happens to the "Full display" view mode (/node/[nid] or /media/[mid]) of the Document entity? Often it is ignored and not even styled for display. Under normal circumstances the full display view mode has no reason to ever be requested, but if developers never had to worry about edge cases, then our lives would be much easier.

This is where the Rabbit Hole module enters the picture - it allows us to specify (via the bundle's "Edit" page) that if someone tries to load the full display view mode, the Rabbit Hole module kicks in and directs the user to a specified path.

Rabbit Hole module screenshot

So, if you have entities on your site that aren't meant to be displayed on their own, it's best to use the Rabbit Hole module to ensure your site visitors don't end up on a page you're not expecting.

Jan 11 2021

Whether you are running your business in the B2B space or the B2C space, the need for agility and speed in workflow management is indispensable, because clients expect faster delivery of projects and applications to keep up with their customers' requirements.

However, if developers do not use standard tools, it can add unnecessary overhead and eat away at their development time. And given that developers come from different backgrounds and skill sets, it becomes difficult for stakeholders to set up projects, onboard developers, troubleshoot, and even train them, as large-scale projects come with complex requirements.

That is why it's critical to have a standardized development environment across teams. This blog guides you through using Lando (an open-source tool that provides a single local development environment for all of a developer's projects) with Drupal 9's Composer, PHP & SCSS linters, and a multisite architecture scenario.

How Does Lando Provide a Standard Development Environment?

Setting up a project from the ground up, managing configurations, and distributing it to each developer, frontend & backend alike, becomes tedious due to various aspects: different machines, different machine configurations, and different operating systems.

And that’s where Lando software comes into the picture.

What is Lando Software?

It is an open-source, cross-platform, local development environment, and DevOps tool built on Docker container technology. Its flexibility to work with most of the major languages, frameworks, and services helps developers of all skill sets and levels to specify simple or complex requirements for their projects and then quickly get to work on them.

Some of the benefits of Lando include-

  1. Maintains standardization across projects/applications.
  2. Offers speedy development (prebuilt configuration of Composer and Drush).
  3. Adds tooling to extend its services.
  4. Recommends out-of-the-box settings that are customizable.
  5. Automates the complex steps involved in unit testing, linting, compiling, and other recurring workflows.

How to Use Lando With Drupal 9’s Composer.json for Faster Development?

Consider a scenario where a developer on an existing Drupal project has been replaced by a new developer. The new developer might not be familiar with the OS that the others are using, making it difficult for them to install Composer quickly and delaying their onboarding.

However, if the team is already using Lando for development, it takes care of the operating system bottleneck itself. In fact, Composer is already built into the Drupal 9 recipe (recipes in Lando are the highest-level abstraction and contain common combinations of routing, services, and tooling) and is compatible with different OSes. The only thing developers need to know is how to use it.


Steps to Use Lando with Drupal 9’s composer.json

The prerequisite for this setup is that Docker and Lando are installed successfully on your local development machine without any glitches. When running your Docker setup, make sure no other ports conflict with the Lando setup.

Here are the steps to be followed-

  1. Clone this Drupal 9 open source git repository (e.g. git clone [email protected]:AbhayPai/drupal9.git).

  2. Change directory into the cloned repository (e.g. cd drupal9).

  3. Start your app using the lando start command. Before you begin, you can change some parameters in .lando.yml as your application requires.

This repository gives you some common tools, including linting of PHP, SCSS, and JS files and compiling of SCSS files, plus services like Node.js and npm that connect directly with the Lando app. You do not need to go inside any container after starting your application. By default, this repository only lints custom themes, but it is flexible enough to extend to custom modules and profiles.
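As a rough sketch of what such a setup looks like (the names and versions here are illustrative assumptions, not taken from the repository), a .lando.yml along these lines combines the Drupal 9 recipe with an extra Node.js service:

```yaml
# Hypothetical .lando.yml sketch for a Drupal 9 project.
name: drupal9
recipe: drupal9          # Lando's Drupal 9 recipe bundles PHP, Composer and Drush.
config:
  webroot: web           # Path to the Drupal docroot, relative to the app root.
  php: '7.4'
services:
  node:
    type: node:14        # Extra Node.js service for SCSS/JS tooling.
tooling:
  npm:
    service: node        # Exposes `lando npm ...`, run inside the node service.
```

With a file like this in place, `lando start` builds the containers, and `lando composer install` runs Composer inside the appserver with no host-level installation needed.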

How to Use PHP Linters With Drupal in Lando

As Drupal is one of the largest open-source communities, millions of developers contribute and offer coding solutions in different ways. To standardize coding practices and keep modules readable and easy to maintain, covering everything from indentation and whitespace to operators, casting, and line length, Drupal has a core package that enforces these standards automatically when configured in the project. In general, these tools are called PHP linters.

Following are the steps to configure the PHP linter in the project-

  1. Download the dependency package of Drupal coder using `lando composer require drupal/coder`.
  2. Define a file for the linter standard, or copy the file from Drupal core (core/phpcs.xml.dist), where all the standards are predefined in XML, into your project folder.
  3. Configure a `lint:php` tool within the .lando.yml file.


4.  Confirm the tooling is configured correctly by using the ‘lando’ command to list all tooling.

5.  Run the newly configured tool in your project using ‘lando <tool-name>’. In this case, it is ‘lando lint:php:themes’.
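As an illustrative sketch of step 3 (the service name and paths are assumptions, not taken from the post), the tooling entry in .lando.yml might look like:

```yaml
# Hypothetical tooling entry for PHP linting with Drupal coding standards.
tooling:
  lint:php:themes:
    service: appserver   # Run inside the PHP container.
    description: Lint custom theme PHP against Drupal coding standards.
    cmd: vendor/bin/phpcs --standard=phpcs.xml web/themes/custom
```

After a `lando rebuild`, the new command appears in the `lando` tooling list and runs as `lando lint:php:themes`.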

This automated tooling, configured with Lando, helps developers save time finding and fixing these issues and ensures best practices are followed in the project repository.

How to Use SCSS Linters With Drupal in Lando

SCSS is a preprocessor used for writing CSS/CSS3 in any modern project. It helps developers write less code and removes redundancy in repeated class names and other frequently used properties.

The purpose of using an SCSS linter in the project is to ensure the code is high quality and easily maintainable for future enhancements. It also saves development time and enables faster delivery of projects.

Following are the steps to configure the SCSS linter in your project-

  1. Configure the node service and install gulp inside that service within the .lando.yml file.
  2. Configure a tool for using npm with Lando within the .lando.yml file.
  3. Confirm the tooling is configured correctly by using the ‘lando’ command to list all tooling.
  4. Create a package.json file, then install and configure the stylelint package in the project.
  5. Create a new script in the package.json file for triggering stylelint.
  6. Configure a tool to trigger this script using Lando.
  7. Confirm the tooling is configured correctly by using the ‘lando’ command to list all tooling.
  8. Run the tooling command and Lando will lint automatically.
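A sketch of steps 1, 2 and 6 above (the version numbers, package choices and script name are assumptions, not taken from the post):

```yaml
# Hypothetical .lando.yml additions for SCSS linting via a node service.
services:
  node:
    type: node:14
    build:
      - npm install            # Install package.json dependencies (gulp, stylelint, ...).
tooling:
  npm:
    service: node              # Expose `lando npm ...`.
  lint:scss:
    service: node
    description: Lint SCSS files with stylelint.
    cmd: npm run lint:scss     # Assumes a "lint:scss" script exists in package.json.
```

Running `lando lint:scss` then executes the npm script inside the node container, so no local Node.js installation is required.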

This automation tool integrated with Lando for the SCSS linter ensures that best practices and code hygiene are followed in the project repository.


How Can Lando Help in Reducing Developers’ Efforts While Building Drupal Multisite Architecture?

Let's take a scenario where your project (a client's website) is live and running smoothly. Now the client wants to create multiple new sites in alignment with the existing site. For instance, the new sites should share custom modules, themes, profiles, etc. to ensure brand consistency. 

Here, Drupal comes in handy as it simplifies multisite architecture, and Lando speeds up the local development setup through some minor tweaks in the configuration files.

For setting up multisite architecture in an existing project, you need to follow the steps below- 

  1. Configure the .lando.yml file to set up an app server URL for the new website.
  2. Configure the database server for the new website.
  3. Configure Drupal settings such as sites.php and the folder structure for site2 to leverage this Lando configuration.
  4. Rebuild the configuration to set up the new website.
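For steps 1 and 2, the .lando.yml tweaks might be sketched like this (the domain names and credentials are illustrative assumptions):

```yaml
# Hypothetical multisite additions to .lando.yml.
proxy:
  appserver:
    - site1.lndo.site    # Existing site.
    - site2.lndo.site    # New site served by the same appserver.
services:
  database2:
    type: mysql:5.7      # Separate database service for site2.
    creds:
      user: drupal
      password: drupal
      database: site2
```

The settings.php for site2 then points at the `database2` service as its database host, and `lando rebuild` (step 4) applies the new configuration.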

These minor tweaks to the existing project help you extend existing Lando projects/websites into a multisite architecture in local development and accelerate the delivery process for the client.

Conclusion

If you have come this far, Dhanyavaad (thank you). I hope this article helps you speed up your development process for faster project delivery, ease knowledge transfer on your Drupal application/project, and leverage Lando at its best by using its built-in Composer for automation in local development environments.

Now that you are armed with the knowledge and Lando’s benefits, what are you waiting for? Get started now!

Jan 09 2021

Brian Perry, front-end architect with Bounteous, joins Mike Anello for a beginners talk about front-end components, including Brian's advice for what the first steps are for someone who wants to get started using them.

URLs mentioned

DrupalEasy News

Audio transcript

We're using the machine-driven Amazon Transcribe service to provide an audio transcript of this episode.

Subscribe

Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher and YouTube.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Jan 09 2021
Jan 09
“A thorough website audit can clue you into the necessary changes and will help you drive significant results.”

Websites are complex beasts, and issues are inevitable. Remaining oblivious to them is common when you don’t audit your site properly and regularly. What happens next is predictable: you fail to identify a wide range of website issues that prevent potential users from accessing your site, creating a major barrier to the growth of your business. 

So, the question that arises here is - what is the best possible way to optimize your website in order to hit the predetermined goals?

Unless you have been living under a rock, you already know that a website audit is the answer. Don’t you? 

Well, a website audit is the most common yet most efficient approach for any organization that wants to boost traffic and performance. As a matter of fact, a good website audit takes into account all the factors that can influence your website’s success: performance issues, security vulnerabilities, general site maintenance, and site changes and upgrades. 

Have you ever audited your website? No? Then, now is the right time!

A comprehensive Drupal website audit is a necessity today and is highly recommended to make sure that your website is up to date and performing well. Whether you are a small business trying to optimize your site for organic search, or an agency doing the same for a client, it can be a bit difficult to know where to begin and how in-depth your analysis should go. No need to worry, we have got you covered.

In this blog, we have put together several parameters that are of great importance when carrying out an in-depth analysis of your website. Subsequently, we will cover the tools that will help you glean the most useful information throughout the audit process. 

[Illustration: Drupal website audit checklist]


Before we run into the on-page audit components, let's start with a few basic but important domain-level checks that every organization, irrespective of size and nature, should keep up to date.

Site Map

A site map is basically a blueprint of your website that helps search engines (Google, Yahoo, and Bing) to find, crawl and index all of your website’s content. Site maps can be good for SEO as they allow search engines to quickly find pages and files that are important to your site. 

SSL Certificate

SSL certificate is the backbone of the website that enables encrypted communication between a web browser and a web server. Websites need to have a validated SSL certificate in order to keep user data secure, verify ownership of the website, prevent attackers from creating a fake version of the site, and gain user trust. 

WWW resolution

WWW resolution assesses whether your website redirects to the same page with or without WWW (World Wide Web). It is better and more convenient for users when it does. 
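For example, one common way to resolve the bare domain to the WWW version is a permanent (301) redirect in Apache's .htaccess; the domain below is a placeholder:

```apache
# Redirect example.com to www.example.com (301, permanent).
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [L,R=301]
```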

Robots.txt 

Robots.txt is a file that lives at the root of your website and tells search engine crawlers which parts of the site they may visit. Not to mention, a robots.txt file allows you to lock away areas of your website that you may not want crawlers to find.
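As an illustration, a minimal robots.txt for a Drupal site might look like the following (Drupal core ships its own default robots.txt; the paths and sitemap URL here are placeholders):

```text
# Allow all crawlers, but keep them out of administrative areas.
User-agent: *
Disallow: /admin/
Disallow: /user/login

# Point crawlers at the site map.
Sitemap: https://www.example.com/sitemap.xml
```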

On-page Site Audit Components

Apart from the aforementioned basic domain checks, several other components can influence the outcome of the website audit. These influences can either improve the quality of the website or seriously harm its reputation in the eyes of direct clients as well as end users.

1. Drupal’s Best Practices

Building and maintaining a website on a Content Management System (CMS) like Drupal takes both time and effort. Following some basic web development practices can help you protect that investment and simultaneously provide a great user experience. 

With that being said, the following pointers outline some best practices that are required to program with Drupal.

Drupal Architecture

  • The content structure must include all the fields and content types. 
  • Choose limited content types and files in your development plan to avoid confusion among content creators.
  • Use a new entity type for dissimilar data types, and a single entity type for similar ones.

Check the code

  • Use an indent of 2 spaces, with no tabs.
  • All binary operators should have a space before and after the operator, for readability.
  • To distinguish control statements from function calls, leave one space between the control keyword and the opening parenthesis.
  • Lines of code should generally not exceed 80 characters.
  • Use short array syntax to format arrays, with a space separating each element (after the comma). 
  • Use require_once() when unconditionally including a class file, and include_once() when conditionally including one. 
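A small, hypothetical snippet showing several of these conventions together (2-space indents, spaces around binary operators, a space after the control keyword, short array syntax):

```php
<?php

/**
 * Illustrative only: returns the titles of published items.
 */
function mymodule_published_titles(array $items) {
  // Short array syntax, elements separated by a comma and a space.
  $titles = [];
  foreach ($items as $item) {
    // One space between the control keyword and the opening parenthesis,
    // and spaces around the binary operator.
    if ($item['status'] == 1) {
      $titles[] = $item['title'];
    }
  }
  return $titles;
}
```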

Infrastructure

  • Size your stack appropriately: neither too large nor too small. 
  • Dive into logs to detect errors and to prepare for growth and traffic spikes. 
  • Configure your infrastructure to protect against internal as well as external attacks.

Optimise the front-end

  • Define component elements using their own classes. 
  • Exercise and test your site rigorously and resolve any PHP errors. 
  • Use a stable administrative theme during development. 
  • Use DRY CSS: group reusable CSS properties together and name these groups logically. 
  • Name components using design semantics. 
  • Use Sass to keep your stylesheets organized.

Test, error, repeat

  • Get your site reviewed by peers to get an additional idea on what to do next. 
  • Set up a testing environment to get your website tested easily and quickly. 

SEO Practices

  • Use robots.txt so that the right pages and information are indexed. 
  • Use navigational drop-down menus, which quietly contribute to search engine optimization.
  • Enable URL aliasing with Pathauto so that search engines understand what each page is about. 

Security Practices

  • Always keep your core updated. 
  • Arm yourself with some additional security modules.   
  • Make sure you only use modules approved by the security team. 
  • Don’t forget to keep your backup ready to face any uncertain events.

Maintenance Practices 

  • Keep your code under version control.
  • Maintain and update separate environments for the different stages of the site.
  • Limit access to the production site for all but the most trusted users.
  • Review all logs every now and again, including Apache, Drupal, and MySQL.
  • Review and assess your architecture frequently and make plans for the future.

To go through a detailed explanation of Drupal's best practices, read here.

2. Mobile Usability

Mobile usability testing helps you identify the potential issues/problems that are hindering a mobile friendly user-experience on your website. The need to conduct a mobile usability audit is extremely important because with the advancement in smartphone browsers, more people are visiting sites using their mobile phones. 

Below are some common yet important elements that can help you to produce great mobile-friendly sites. 

Responsive Design

Responsive design allows page elements to reshuffle as the viewport grows or shrinks. It plays a pivotal role because it lets the appearance of your website adapt dynamically to the screen size and orientation of the device it is being viewed on. 
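As a sketch, responsive reshuffling is typically done with CSS media queries; the class name and breakpoint below are illustrative:

```css
/* Two columns side by side on wide viewports. */
.content-grid {
  display: grid;
  grid-template-columns: 2fr 1fr;
  gap: 1rem;
}

/* Stack into a single column below an assumed 768px breakpoint. */
@media (max-width: 768px) {
  .content-grid {
    grid-template-columns: 1fr;
  }
}
```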

AMP URL

Originally developed by Google, Accelerated Mobile Pages (AMP) is an initiative to speed up the loading time of web pages on mobile devices. The biggest advantage that AMP URL offers is faster and simpler web pages that can be displayed equally well on all device types, including mobile and desktop.

Mobile Pages Audit Tools

There are a number of tools that can help you perfectly optimize your site for mobile. Here are a few tools that you should have in your bookmarks:

  • Screenfly
  • Google Resizer
  • Browserstack
  • Ghostlab
  • Crossbrowser Testing

Check out this guide on mobile-first design approach to know more.

3. Speed

Performing a website speed audit is important as it helps you evaluate the speed and responsiveness of the website and further identify the areas that need quick improvement. 

Page load speed

Page load speed refers to the time a website takes to fully display the content on a specific page, which directly impacts user engagement and a business’s bottom line. It matters to users for the obvious reason: faster pages make for a more efficient, better on-page user experience. An ideal page load time falls between 2 and 5 seconds. 

Page Speed Audit Tools:

The market is flooded with tools for testing page load times and improving website speed. Following is a handpicked list of some common ones: 

  • Pingdom
  • Google PageSpeed Insights
  • Google Analytics Site Speed
  • GTmetrix
  • Dareboost
  • YSlow

4. Performance

Website Performance Testing refers to a software testing process used to determine how a particular website behaves and responds during various situations. Conducting a website performance audit is incredibly important for websites because it helps you to identify and eliminate the performance bottlenecks in the software application.

Take a look at the following list of performance elements that contribute to the response time of the website and overall end-user experience.   

HTML/CSS/JS 

  • JS and CSS count: Requesting many separate CSS and JS files means more requests and more parsing work for the browser, which slows rendering. Send only the CSS/JS that is used on the page and remove rules that aren't used anymore.
  • CSS Size: Delivering a massive amount of CSS to the browser can result in more work for the browser when parsing the CSS against the HTML and that makes the rendering slower. Try to send only the CSS that is used on that page and remove CSS rules when they aren't used anymore.
  • Image Size: Avoid having too many large images on the page. The images will not affect the first paint of the page, but it will eat bandwidth for the user.
  • Page Size: Avoid having pages that have a transfer size over the wire of more than 2 MB on desktop and 1 MB on mobile.
  • Image scaling: Scaling images in the browser takes extra CPU time and hurts performance on mobile. So, make sure you create multiple versions of the same image server-side and serve the appropriate one.
  • Document redirects: Avoid redirecting the main document, because it makes the page load slower for the user. The exception: redirect the user from HTTP to HTTPS when an HTTPS version of the page exists. 
  • Charset declaration: The Unicode standard (UTF-8) covers (almost) all the characters, punctuation, and symbols in the world. It is highly recommended to use and declare it.

Header performance

  • Cache header: Setting a cache header on your server response tells the browser that it doesn't need to download the asset again during the configured cache time. 
  • Cache header length: Setting a long cache lifetime (at least 30 days) is better, as assets stay in the browser cache longer. 
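For instance, with Apache's mod_expires you can set long cache lifetimes for static assets; the types and lifetime below are just a sketch:

```apache
# Cache static assets for 30 days; adjust types and lifetime as needed.
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType image/png "access plus 30 days"
  ExpiresByType text/css "access plus 30 days"
  ExpiresByType application/javascript "access plus 30 days"
</IfModule>
```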

Servers

  • Fast render speed: Avoid loading JavaScript synchronously inside of the head, request files from the same domain as the main document (to avoid DNS lookups) and inline CSS or use server push for really fast rendering and a short rendering path.
  • CPU rendering time: You need to be able to render the page fast, which depends heavily on the computer/device it runs on. Note that the limit here is high: spending more than 500 ms triggers this advice.
  • No. of requests per domain: Avoid having too many requests per domain. The reason being, browsers have a limit on how many concurrent requests they can do per domain when using HTTP/1. 
  • CPU scripting time: Do not run too much JavaScript, as it will slow down the page for your users. Again, this metric depends on the computer/device, and the limit here is high: spending more than 1000 ms triggers this advice.

Performance Audit Tools

Here are some common tools that you can use to run website performance tests in order to achieve optimal performance. 

  • GT Metrix
  • Webpage Test

Read this comprehensive guide on Drupal performance optimisation techniques to know more.

5. Accessibility 

An accessibility audit is a comprehensive evaluation of how well your digital properties meet the needs of people with any limited ability. It is important to conduct the accessibility audit as it provides a detailed look at how and where you can enhance your digital products/services to improve digital accessibility.

Here are some of the first steps you can take to check the type of experience your website delivers for people with digital access needs:

Check your page title

  • Make sure that every page has a title. 
  • This is usually checked through the 'view source' option available in most modern browsers.

Turn images on and off

  • This can be done using an advanced option. For example, Google Chrome lets you turn images on and off, which makes it easy to look for ‘disappearing’ text. 
  • Subsequently, check your image alt text for issues such as the missing or incorrect description of the image contents.

Turn sound on and off

Using the computer's sound options, turn off sound to make sure that your website is conveying the same meaningful information, with or without sound.

Manage plug-ins

  • Using special plugins, you can easily apply different views on the top of the page. 
  • For example, you can test grayscale to ensure that people who are color blind have access to all the information available on a particular page.

Keyboard accessibility 

  • Try to operate and navigate your website without a mouse or trackpad. 
  • Check if all the functions are operable using keyboard navigation alone. 

Check Zoom in 

  • People with visual impairments often enlarge the elements to see what is present on the screen. 
  • Therefore, zoom to 200 or even 300% to check whether anything pixelates or breaks. 

Check-up page structure and hierarchy

  • Your heading text should be H1, followed by subheadings in order: H2, H3, and so on. 
  • In other words, headings must not skip or reverse levels: H2 cannot come before H1, and H3 cannot come before H2.
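A correctly ordered heading hierarchy looks like this (indentation added only to show nesting):

```html
<h1>Page title (one per page)</h1>
  <h2>First section</h2>
    <h3>Subsection of the first section</h3>
  <h2>Second section</h2>
<!-- Wrong: jumping from an <h1> straight to an <h3> skips a level. -->
```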

Check multimedia elements

As per the information issued by web content accessibility guidelines (WCAG), websites must specify important information contained within multimedia elements (video/audio/photo) in a text-based alternative.

Accessibility Auditing Tools

There are some free online tools that you can use to uncover the accessibilities issues that are present in your site. 

  • WAVE Evaluation Tool
  • Google Lighthouse
  • SortSite
  • Pa11y
  • Stark contrast checker  

Take a look at this web accessibility planning guide and how Drupal is ensuring web accessibility standards to know more.

6. Security 

Conducting security audits helps you examine and identify existing and potential threats that could jeopardize the website. Further, it also involves improving the security of the website to make your online business safer.

Following is a quick and easy list of elements you can evaluate to detect the security risks lurking in your website.

Ascertain the assets to focus on 

  • List out the high priority assets required to monitor and scan, including sensitive customer and company data, internal documentation and IT infrastructure.
  • Do not forget to set out a security perimeter. 

Checklist your potential threats

  • Name your threats to ensure what to look for and how to adapt your future security measures. 
  • Some common security threats you might put on your list include weak passwords protecting sensitive company data, malware, phishing, denial-of-service attacks, and malicious insiders.

Determine the current security performance

Evaluate the website's current security performance to keep at bay hackers trying to invade the company's systems. 

Establish configuration scans

  • Setting up a higher-end scanner will help you detect security vulnerabilities.
  • Run some configuration scans to detect configuration mistakes made.

Look out for reports

Do not forget to give a detailed look at the reports generated by your auditing tools.

Monitor DNS for unforeseen events

Always keep track of the credentials used for your domain. 

Scrutinize your website

This is a must when you wish to spot hard-to-access files and directories on your website.

Carry out internal vulnerability scan

  • Install an agent on each computer in your organization to monitor the vulnerability level.
  • Performing an internal vulnerability scan every month or quarter is a great option.

Perform phishing tests

  • Perform cybersecurity training by sending out fake phishing emails to team members.
  • Running such tests would give a close-to-real-life experience of what a phishing attack is. 

Security Auditing Tools

Now that you have a plan, you might need some tools to put your plan into action. For your convenience, we have listed down a few tools that you can use-

  • OWASP Testing Guide
  • Burp suite
  • Nessus
  • Qualys web apps scan
  • Rapid7 

Get a thorough understanding of Drupal security by going through why Drupal is the most secure CMS, its provision for open source security, importance of security modules for Drupal website and Drupal website's data security strategies.

7. Search Engine Optimization (SEO) 

An SEO audit identifies and analyzes the foundational issues affecting the organic search performance of a website. Conducting one is essential for any site, as it lets you assess your current SEO efforts (however prolific or sparse they are) and act immediately on the insights.

Below are some of the most important areas that an SEO audit covers to maximize optimization-

Find and fix indexation issues

  • Make sure your site is well-indexed in Google.
  • Look for the number of pages that Google has indexed for your domain.

Conduct on-page SEO check

  • Keywords: While auditing your on-page SEO, start with the keywords. Make sure both long- and short-tail keywords are incorporated seamlessly throughout the content. Moreover, adding LSI keywords helps improve organic visibility over time.
  • Optimization of headers: Use keywords in the headers. It is to be remembered that search engines including Google use H1 tags to understand the primary topic of a page.  
  • Call to actions: Curate content with the right CTAs for the maximum conversions. Good CTAs make a site look more structured and professional, attracting visitors’ attention. 
  • Optimized URL: It’s also crucial to have keyword-rich URLs to improve the organic click-through rate (CTR). The shorter the URL, the better the ranking tends to be.
  • Meta description: The meta description plays an imperative role in the SERP, as Google uses description tags to generate the search results snippet. Hence, every page on your site needs a meta description of up to 160 characters containing a primary keyword. 
  • Internal links: Using internal links to publish new content is a must. Internal links are instrumental in establishing site architecture and spreading link equity (ranking power) at large. It is recommended to use descriptive keywords in anchor texts to give readers a sense of the topics. 
  • Schema markup: Furthermore, use Schema markup, an advanced level on-page SEO technique to help the search engine bots crawl relevant information for users. The Schema markup uses a unique semantic vocabulary (code) in microdata.
  • Image optimization: Lastly, the optimization of images with keywords in the image alt text also carries weight. This practice increases the potential to rank in image search apart from boosting the SEO efforts of webpages. 
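As a sketch, Schema markup is commonly embedded as JSON-LD inside a `<script type="application/ld+json">` tag in the page; every value below is a placeholder:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Drupal Website Audit: A Complete Checklist",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "datePublished": "2021-01-09",
  "image": "https://www.example.com/images/audit.png"
}
```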

Detect and delete broken links 

  • Check for the broken links list and find which link has the most inbound links.
  • Work through this list and either delete or replace the errors found. 

Duplicate and thin content pages

  • Check for duplicate pages, as they have an adverse effect on SEO. 
  • Pages should have a decent word count; otherwise a page may be considered thin content and fail to rank well on the SERP, or even fail to get indexed.

SEO Auditing Tools

Following are the tools that you can use to track and detect errors that are hindering your site from achieving the top spot on Google. 

  • Google Analytics
  • Google Search Console
  • SEMrush
  • WooRank
  • Moz
  • Ahrefs
  • SpyFu

Access this ultimate guide to Drupal SEO to know more.

8. Consent Management

Consent management is a process that allows websites to meet legal regulations such as GDPR and CCPA by obtaining user consent for collecting their data or information. With a good consent management platform (CMP) in place, websites are able to create better customer experience and further deepen relationships with their consumers.

9. Hosting Infrastructure 

Having a quality web hosting infrastructure is essential for any website as it helps you determine the loading speed, downtime, bandwidth, and SEO factors of the website.

If you use a free or cheap web host, it will create a lot of hosting problems like frequent downtime issues for you in the future. 

Here is a list of some important things that you should consider before you choose a web hosting plan.

Fast servers 

  • Profits are directly proportional to how fast your web pages load, so make sure your web host offers at least a T3 internet connection.
  • Internet users lack patience and expect quick results; make sure your web host never exceeds a 30-second load time. 

Unrestricted CGI Access 

CGI programs are used by many professional sites at some point or another, so look for a web host that provides CGI-bin access.

SSH and FTP Access

  • SSH lets you easily encrypt the data moving between your computer and your website's server, and can reduce your development overhead.
  • A good web host should also let you use FTP to transfer files back and forth between your local computer and the web server.

Access to Raw Server Logs

This feature allows you to access data about your website’s traffic, including the traffic you get per week, how long visitors stay on your site, etc. 

Server Backups

  • Server backups ensure that you don’t lose out on anything at the time of uninvited events. 
  • Not all web hosting services provide automatic database or server backups; in such situations, you may have to pay extra to create full backups of your entire site.

Services, Scripts, and Software

  • A good web host should offer a vast library of scripts wherein you can add forms, statistics, and other extras to your website.
  • Besides this, the scripts should also provide some e-commerce features including shopping cart software, real-time processing availability and much more. 

Tech Support

A web host of good quality should provide technical support to the website. 

Conclusion

To conclude, conducting website audits may seem like a strenuous task, but it is an important responsibility that helps you identify issues hindering your website's growth. Not to mention, the entire process may sound a bit nerve-wracking, but the end results are worth the hard work. If you want to maximize the business benefits of your website, a website audit is all you need to put into effect. 
 
Furthermore, a website audit is not a one-off process that you conduct once in a blue moon. Rather, it is a mindset that gives you deeper insight into your website and helps you stay on top of maintenance before it gets too late. Being successful in the digital marketplace requires some degree of agility and adaptability, and that goes for websites too. 

Would you like to put yourself way ahead of your less-informed competitors? Feel free to contact us at [email protected] and our industry experts will help you conduct a comprehensive site audit the right way.

Jan 08 2021
Jan 08

Now on Drupal 9, the community isn’t slowing down. This month, we sit down and talk with Angie Byron, a.k.a. webchick, a Drupal core committer and product manager, Drupal Association board member, author, speaker, mentor, Mom, and so much more. Currently, she works at Acquia on the Drupal Acceleration Team, where her primary role is to “Make Drupal awesome.” We talk about Drupal, coding, family, and her journey throughout the years.

This article was originally published in the January 2021 issue of php[architect] magazine. To read the complete article please subscribe or purchase the complete issue.

Jan 08 2021
Jan 08

Register today to secure the discounted rate and confirm your attendance at the biggest Drupal event of the year online April 12 - 16, 2021. Early registration ends February 3 at 11:59 PM ET.

Jan 08 2021
Jan 08

(Available as freelancer)

Joris Snoek

Business Consultant
/ Drupal Developer

Last week we released 'group chats' in the Drupal distribution OpenLucius, a social productivity platform. At first sight it may look like a small module, but it took quite some effort to get to this release, also because I wanted to release it open source: no concessions in (code) quality and maintainability.

Our 'group chat journey' started around 3 years ago, when we kicked off building OpenLucius on Drupal 8. We thought it was best to implement the realtime functionality with MongoDB, because of its NoSQL character and its speed. Also, a lot of chat examples were using the MEAN stack.

ReactJS / VueJS (not needed)

Also, JavaScript frameworks like ReactJS and VueJS were (and are) hyped; some developers like to jump on them without looking into native Drupal alternatives. I also fell into that trap, and it's a dangerous area:

Hello over-engineering, complexity and project misery.

We thought we needed it for frontend interactivity, but that implementation added even more unnecessary complexity to the project.

Drupal 8

After a long struggle, that Drupal 8 version never saw the light of day. We did use it internally for a while, but it was not suitable for the outside world; that's a euphemism right there.

Native Drupal 9 group chat, a PoC

So about a year ago I started a proof of concept to see if a realtime group chat was possible using just MySQL and jQuery. It turned out it was! That meant the group chat module could be implemented natively in Drupal.

That was a huge relief and paved the way to an open source release. Because I wanted the installation process to be as simple as possible for everybody, not complex with installing ReactJS / MongoDB and what not.

Just click through the Drupal install wizard and done, that was the goal -and we reached that.

Well... full disclosure: one piece of external tech is required: Node.js (with Socket.io). Otherwise, realtime emitting/pushing of messages in 'rooms' (group chats) just isn't going to work; Drupal core has no WebSocket tech built in.

But installing the Node.js chat engine is also a few-click operation after installing OpenLucius. And it's optional, so a basic Drupal wizard install is the only thing required to get OpenLucius up and running.

Fast forward to 2021: Drupal-native MySQL and jQuery FTW!

So, after the successful proof of concept in 2020, it is safe to say:

be very deliberate when implementing external tech. Drupal core has excellent native tech for building fast, interactive UIs.

Of course the tech you choose in the end depends on your requirements/user stories. But just make sure you invest enough time in analysing Drupal core and contrib modules, before integrating external tech.

Especially if you want to release open source.

Code for performance

So Drupal-native MySQL and jQuery can work great... as long as you code it right. And by 'right' I mean, in the case of our group chat, that the code needs to be as lean as possible: chat messages must eventually arrive in realtime.

So I implemented custom Drupal entities and custom code that does only what it needs to do, nothing more and certainly nothing less (the Drupal core node system, for example, is obviously not a good fit here).

To wrap it up, it turned out that these rules prevailed:

It's hard to make things simple. Experience is the grandmaster of all skills.

Technical details in next blog

In my follow-up blog I will get into the tech behind the group chat in Drupal: the use cases and how I implemented them, all without page refreshes, based on Drupal AJAX:

  • Chat screen initialisation
  • Adding messages and realtime emitting them to all chat users
  • Files uploads via AJAX
  • Dynamic theme libraries, for socket.io connection
  • Node.js / Socket.io implementation
  • @mentions javascript library, with auto suggest
  • Mailing the people who were @mentioned
  • Security and permissions
  • Security hardening with group UUIDs
  • How to handle CORS if socket.io runs on external server
  • If connection drops, make sure no messages are missed
  • Edit messages via AJAX
  • Deleting files from the chat via AJAX
  • Dependency injection

So stay tuned y'all!

Try now or download open source

If you want to test the group chat in OpenLucius this instant, that is of course possible: click here to get started and hit the 'Try now' button. Or download and install OpenLucius yourself via the project page on Drupal.org.

Jan 08 2021
Jan 08

Website statistics are required knowledge if you want to manage a website effectively and determine the direction of its further development. Thanks to numerous Drupal modules, you can make the necessary improvements, including via integrations with Google Analytics. In this article, you will find information about the GoogleTagManager (google_tag) module, which lets you conduct an effective and structured analysis of your website.

The need to put external scripts on websites dates back to the 90s, when webmasters got a taste for counters and guest books. Over the past three decades, simple external services have evolved into advanced and powerful solutions such as Salesforce, Hotjar, and Mautic, whose implementation we provide at Droptica. Modern websites use dozens of tools to research user behaviour, collect visit data, and run A/B tests. Today it is difficult to imagine effective marketing without them.

Dates

The Google Tag Manager service was launched in October 2012, and the Drupal module to support it was released in February 2014. A stable version of the module for Drupal 8 was released only at the beginning of 2018. Development was quite slow; however, google_tag was very stable from the beginning.

The module's popularity

The module has won over a large group of users. According to official statistics, it is used by over 51 thousand websites, and more than half of them are based on Drupal 8/9.

Module's creators

Jim Berry (boombatower) is responsible for the maintenance of the module. He is a developer who has made a huge contribution to the Drupal community, with over 1,500 commits across dozens of projects.

What is the module used for?

The google_tag module provides advanced integration with Google Tag Manager, going far beyond simply pasting the GTM script into the website code. Among other things, you get control over user roles and exclusion lists. I will present all these functionalities later in the article.

Unboxing

You can download the module at https://www.drupal.org/project/google_tag. After the installation, add a new container and enter the ID obtained from the Google Tag Manager panel. You can find the module settings under Configuration → System → Google Tag Manager.

Module's use

The google_tag module has a default configuration that will work for the vast majority of applications. However, it is worth looking into the advanced settings, which give more control over the included scripts.

Containers list

A novice user may wonder what the list of containers available in the configuration panel actually is. The latest versions of the module allow you to handle more than one GTM container (in other words, several unique identifiers assigned by Google). This works perfectly when you run a farm of sites that share common scripts stored in a shared container.

gtm-containers

When adding a new container, you will see a form with the default settings filled out. If you do not want to waste time customising each of the containers, you can specify the default settings in the "Settings" tab.

Adding a container

The form for adding a container mainly contains its unique identifier (Container ID) obtained from Google and a label that will be used to recognise individual containers within the Drupal administration panel.

Google Tag Manager - Container add

In the "Advanced" tab you can find three interesting settings:

  • The ability to rename the data layer. In most cases, the standard dataLayer name is enough, but it is worth keeping this option in mind in case of a conflict. To clarify: the data layer is an array (more precisely, an object) in JavaScript that is used to transfer data from a website to a GTM container.
  • Support for allowlist and blocklist options, described in more detail in the GTM help. This is a rarely used but useful functionality that allows you, among other things, to block some tags in the production environment before they are ready for implementation.
  • Configuration of the current environment, which enables switching between the production and development sets of tags.

Google Tag Manager - Container add advanced

A bit further down you will find the exclusion settings. The GTM script can be activated and deactivated based on:

  • The current path in the URL (by default, tags are not launched on admin pages).
  • User roles (a common use case is disabling analytical scripts for the administrator role).
  • HTTP status code (e.g. 404).
  • Content type.

Google Tag Manager - Container add exclusions

Module configuration

In the settings tab there is a configuration common to all containers. There are several options to improve the performance of your website and make debugging easier.

Google Tag Manager - settings

The first four settings concern optimising the delivery of the JS script that supports Google Tag Manager. I recommend keeping the recommended values here, but I would also like to warn you about possible errors reported in the website log. During projects at our Drupal agency, we often encountered a situation where the GTM script file was not generated due to missing directory permissions.

Hooks and integrations

The google_tag module provides the following hooks:

  • hook_google_tag_insert_alter() - unlocks or blocks a given container if you want to make its operation dependent on factors other than the standard ones (i.e. role, path, content type and HTTP code).
  • hook_google_tag_snippets_alter() - changes the default JS code that supports GTM.

In addition, it is possible to implement plugins providing Conditions for placing the Google Tag Manager script on the website.
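As a rough illustration of how the first hook might be used, the sketch below suppresses the container under an extra custom condition. The hook name comes from the module, but the parameter names and the weekend rule are assumptions made here for illustration; check google_tag.api.php in the module for the authoritative signature.

```php
<?php

/**
 * Implements hook_google_tag_insert_alter().
 *
 * Sketch only: parameter names are assumptions; see google_tag.api.php
 * in the module for the exact signature.
 */
function mymodule_google_tag_insert_alter(&$satisfied, $container) {
  // Illustrative extra condition: never emit the GTM snippet on weekends.
  if ((int) date('N') >= 6) {
    $satisfied = FALSE;
  }
}
```

Setting `$satisfied` to FALSE tells the module to skip inserting the snippet even when the standard conditions (role, path, content type, HTTP code) are met.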

Summary

In a previous article, I discussed the integration of Drupal with Google Analytics in detail. I am sure that using it together with the google_tag module described in this text will make website analysis even more effective and orderly. The google_tag module has almost become an industry standard over the past few years. I recommend using it and exploring the possibilities that the world of tags and data layers opens up for us.

Jan 07 2021
Jan 07

student in cap and gown attending a virtual college graduation ceremony

Customer Stories on The Pivot to Digital 

In a year like no other, Mediacurrent’s higher education partners were challenged to shift strategies. We recently welcomed a customer panel of seven marketing strategists to share their stories at our company retreat. With voices ranging from two Ivy League universities to an online college, the panel reflected on how the pivot to virtual tours was an ambitious undertaking. So too was the need to rethink student communications and reassess the metrics that matter most.

The following is a transcript of a virtual roundtable by Michael Silverman of Mediacurrent with a panel of seven digital leaders in higher education. It was conducted in December 2020 at the Mediacurrent company retreat. Some of the questions have been edited for brevity and clarity.

How has your recruitment strategy changed with COVID-19? What works now for student enrollment marketing?

For this digital director at a private Catholic university, COVID-19 drove his team to imagine creative alternatives for events and to approach their marketing funnel in a new way: 

There's been the need to be more creative to reach our target audience. We had to find different ways to engage prospective students. One thing we did was to host a socially distanced drive-in event where prospective students came to the college and learned all about it. We’ve also moved to more virtual events (I can tell you because I entered over 300 events on our Drupal site!) for new and returning students. We look for different ways to connect with them, to make sure that they stay engaged with the university.

We had a habit of focusing on top of the funnel brand awareness and it was harder to get prospects into the funnel. So we had to make a more concerted effort to reach the students that were already in the funnel, getting them to apply and then put their deposit down. We were working with a smaller pipeline and we had to be more efficient in speaking to it. 

According to this director-level digital strategist for a major state university in the northeast, highlighting the value of a local education was a successful tactic:

When we were able to get people onto the campus, 70% of those who came to visit ended up applying. Because we were missing out on that after moving to virtual visits, we had to change our messaging quite a bit. 

We found that people were more likely to want to stay at home or stay local. Because our biggest audiences are in-state, and we have 19 campuses across the state, that's been a big point for our message. We really focus on the campus and that local education rather than that “big brand” messaging. 

On a campus of 2,000 students in the Great Plains region, this marketing expert saw how small group tours are more personalized than before:

After shutting our campus down for the spring and summer, we were able to get back to a model that allows for individual family tours. That personal touch has helped us a lot. In October, we hosted more individual tours than any group tour of previous years. Our admissions counselors really take pride in fostering relationships with prospective students through personal interactions like texting, calling, or writing letters.

Aside from campus visits, what were some other leading indicators for applications? 

Prospective students had a high need for information about how the school was reacting to the pandemic. This state university (the same referenced above) saw the opportunity for retargeting campaigns:  

We’ve started to focus more on people coming to the admissions website and just reading through some of our COVID information. Our focus groups found that given the uncertainty, people wanted us to be able to proactively communicate what was going on and what we were planning to do with student enrollment moving forward. So we drove a lot of people to that information. If we saw that people were reading that and clicking over to one of our conversion actions, we would set up retargeting campaigns towards them and get them further down. This was a new strategy because we had never really used any of our PR materials within our enrollment advertising. 

We had never really done retargeting before for trying to get people to accept their offer of admission. We've started to build up some scores in our Slate CRM for the probability of enrollment. We've been able to figure out the people that are most likely to enroll and are able to retarget them at the beginning of the funnel with lookalike audiences and Facebook. Then we’re also retargeting accepted students who are still in the funnel. 

Where do you see the biggest change in measurable goals for your organization due to the changes brought on by COVID-19?

This CMO of an online college for holistic health was able to boost enrollment even as the competition for distance education was skyrocketing:  

We didn't see an enrollment drop-off at all. In fact, we've seen an increase in enrollment. Back in February, I pulled all of our pay-per-click marketing. I had a feeling that if this hit, every single on-campus entity would need to go online and we wouldn’t be able to compete. That strategy saved us. 

We stopped focusing on trying to attract people to enroll. We knew that everyone else was trying to attract them as a consumer. We started doing educational wellness webinars to help people to grow their skills, inviting them to engage with us on an entirely different platform. 

Has your institution been forced to cut costs or reallocate resources? If so, how has that affected your group?

This web strategist for a small university in the midwest weighed in that she faces uncertainties in the upcoming admissions cycle. Looking ahead, her department budget will be geared toward third-party community platforms:

We’re a small school and we were able to pivot pretty well...until now. Our students come because they get to co-op the entire time they're here as a part of the degree requirement. So we're now starting to see it going into this admissions cycle, but we're being very creative because obviously, you're not having your large scale visit events on campus.

For my role, which is running the main site, there probably will be some dollars pulled from me in order to focus on some third-party platforms that are focused on building a community with potential students. Not necessarily budget cuts, but I’ve seen the shifting of money to focus on some of these ongoing virtual things that will continue. 

Without the in-house IT resources to launch a new website, this project director at an Ivy League school relied on Mediacurrent for support:

We hired Mediacurrent prior to the onset of the pandemic to create an online platform that would be useful in the event of a future financial crisis. Two months later, we found ourselves potentially in the midst of that financial crisis.

Our IT department needed to focus on making it so that our students could all attend class online from all over the world. All of a sudden I was in the middle of a Drupal website project, and frankly, I'd never heard of Drupal before this. 

What are the biggest pain points from day to day related to the technology and management of your website?

The crisis forced us to go digital in many ways. It's incredibly important for our websites to stay accessible. Mediacurrent has done a good job of understanding what matters to our stakeholders and helping us navigate accessibility. That’s a huge priority.

This is where working at a school that has a lot of name recognition can bite you. We may not have had to do as much outreach or aggressive marketing as some other schools. So we were extremely behind the curve when we changed to virtual info sessions. We were able to get our information sessions up and running in a way that's decentralized so I didn't have to manage all of that. We could train other staff to get them up and running and host them on their own, which we do through Slate. 

Having virtual information sessions and other digital channels is something that definitely will continue going forward because it allows us to get our message to a broader audience. We're able to share what the school community is like, and what our financial aid can offer.

Marketers from a state school and a small private school both shared how Drupal empowered them to quickly adapt the digital experience: 

On our current site, the back end user experience is really difficult. We have no flexibility to change things when our strategy changes. It's a mess. So what we're building now in Drupal 8, we are very, very excited about. We've been working very closely with the team at Mediacurrent to improve the user experience for our authors and also being able to adapt to changes quickly. 

This year is nothing but pivot. I’m constantly making changes to the website. On our previous Drupal 7 site, I had a hard time adding blocks of content. Now, with Drupal 8 and Layout Builder, I’ve got everything that I need in my tool kit. I can go in on the pages to move around what I need to move around. I’m able to change the content up on a dime. 

What has been your experience working with Mediacurrent? 

All of our panelists agreed that finding the resources to launch new digital campaigns was a steep challenge. This Ivy League marketer summed it up best:

Staff in higher ed are stretched very, very thin. And at this moment, I'm finding that it's harder for us to be forward-looking. The availability and transparency of the Mediacurrent team have been wonderful. We’ve had many Mediacurrent developers working on our team over the past couple of years, as well as user experience designers and project managers. They’ve not only helped find ways to improve our site and make the experience better for prospective and current students, but also kept up with necessary bug fixes and Drupal security updates. 

The panelists also thanked the Mediacurrent team for being a reliable partner in uncertain times:

It was an enormous blessing not to worry about the development of our platform given everything else that was going on for our school in response to the pandemic. I had complete trust in the Mediacurrent team and you didn't let us down.

I’ve needed things more quickly to adapt our strategy this year. As a digital agency partner, Mediacurrent did that. It’s made the difference between sinking and swimming.

What areas of your digital strategy do you see remaining post-pandemic?

An Ivy League project director reflected on how lessons learned from virtual learning may carry over to the classroom:

The particular program that I'm involved in is a specialized master's program targeted specifically overseas. In addition to all of the other travel-related concerns associated with the pandemic, there are also visa issues, et cetera. By necessity, we've been in this mode of, rather than bringing students to campus to sort of convince them to ultimately enroll, needing to do that in a virtual way. As with the classroom experience, we're hoping ultimately to get back to a non-virtual experience. But there are pieces of the virtual experience that we would think about trying to preserve, even in a nonvirtual world. 

We're anxious to get back into the classroom but there are pieces of the online experience that we've enjoyed and have started to think about how we can bring some of those elements into a physical classroom. Something as simple as the ability to interact with students via the chat function in Zoom. How do you think about taking that functionality and applying it in a physical classroom? And I don't know that we have any great answers yet. But it's very much something that we're thinking about.

This private catholic college sees a data-driven website in its future: 

Partly because of COVID, my marketing department was moved into admissions. It's been great because we've had access to more data, so it's allowed us to be more targeted and granular in our advertising. Now I know down to zip codes where my most likely students are to come from. 

So it's been a real benefit in a time where we've had to be more efficient with what we're doing, what we're spending, what we're advertising. And it's kind of also the direction for where I want to go with the website. And that's making sure that all of my analytics, all of my CRM pieces, everything is hooked into Drupal and doing what it needs to do so that we can be efficient even after COVID.

Now What? Rethink Your Digital Focus 

Whether the goal is to boost enrollment, improve student retention, or inspire, educate, and engage learners, your website plays a critical role. See how other institutions are adapting their digital experience in our upcoming webinar. Join experts from Mediacurrent, Siteimprove, and Pantheon who have helped some of the best-known colleges and universities deliver engaging digital experiences. We hope to see you there! 

higher ed webinar banner

Save a spot for the webinar: Reimagining Your Higher Ed Web Strategy

Jan 07 2021
Jan 07

Since June 2020, we have heard a lot about Drupal 9. You may be wondering: should you update to Drupal 9?

Depending on your current infrastructure, the consequences won't be the same. That's why our experts created this infographic to give you some insights into your next Drupal project.

A project? A question? Don't hesitate to contact us: https://www.smile.eu/en/contact

Infographic: Drupal 9

Jan 07 2021
Jan 07

In a recent project, one of our clients required us to validate data in CSV files against custom rules. The validated CSV files would then be imported into various content types in Drupal 8.  

In this article, we will look at the requirement, the library, the architecture of the custom module, and the different components of the module with some code samples, and finally share some ideas on how this module can be made more reusable and even contributed.

Introduction

Our client is a well known international NGO with offices worldwide, each with different types of data management systems and frameworks. They wanted a centralized system to manage the data from each of these offices. Having concluded that Drupal 8 was the ideal solution to implement that centralized system, the challenge was to set up a migration pipeline to bring in data from all of the offices and their varying frameworks. Consequently, the files generated by these systems needed to be validated for specific constraints before being imported into our Drupal system.

Challenges and Goals

Following are the goals that the system should meet:  

  1. The CSV files were in a custom format, and there were multiple files with different structures that needed to be handled accordingly. Each column needed its own validator. 
  2. The files needed to be validated for errors before they could be imported and the errors needed to be logged with line numbers and relevant error messages. 
  3. The validation had to be triggered automatically when the files were downloaded from a central location. 
  4. Notification emails had to be sent on successful and failed validation to the IT admins. 
  5. After successfully validating the files, the validator needed to trigger the next step of the process, which is importing the files.

The main challenges

  1. The validation had to cross-reference the incoming data with existing data and also with data in different files (referential integrity checks). 
  2. We also had to check the uniqueness of certain columns in the CSV files. Doing this in a database is pretty easy and straightforward, but this had to be done before inserting it into the database.

Step 1: Choosing a CSV reader library

The first step was to choose a PHP-based CSV reader library. League CSV turned out to be the best option for the following reasons:

  1. It is managed by Composer and was already being used by the Migrate module in Drupal core, so no additional code needed to be added for the library to work.
  2. The library covered many common scenarios like iterating through rows of the CSV, getting the field values and headers, and streaming large CSV files.
  3. And finally, it was implemented in an object-oriented way.
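Since League CSV is installed via Composer, a fully self-contained sample is hard to show here; the sketch below reproduces the same iteration pattern with PHP's built-in fgetcsv(), purely for illustration. With League CSV you would instead use Reader::createFromPath() with setHeaderOffset(0) and iterate getRecords().

```php
<?php

// Dependency-free sketch of the iteration pattern League CSV provides.
// With League CSV you would call Reader::createFromPath() and iterate
// $reader->getRecords(); here PHP's built-in fgetcsv() stands in for it.
function read_csv_rows(string $path): array {
  $rows = [];
  $handle = fopen($path, 'r');
  // The first line holds the header, as with Reader::setHeaderOffset(0).
  $header = fgetcsv($handle);
  while (($fields = fgetcsv($handle)) !== FALSE) {
    // Key each row by the header names, as getRecords() does.
    $rows[] = array_combine($header, $fields);
  }
  fclose($handle);
  return $rows;
}
```

Note that unlike this sketch, League CSV can stream records one at a time, which matters for the large files mentioned above.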

Step 2: Architectural requirements

Below are the requirements we had concerning the architecture of the code:

  1. The code needs to work as an independent service so that it can be called from different places in the code, invoking validation wherever required.
  2. The validations need to be as generic as possible so that the same validation rule can be reused for different fields in the same CSV or in others.
  3. We need to have an extensible way to specify the validation to be done for each field. For example, whether a specific field can be allowed to be blank.

Step 3: Designing the components of the validator

To satisfy the above architectural requirements, we designed the validator module into the following sub-components:

The main service class

Below are the main responsibilities of this class:

  1. Load the CSV library and loop through each of the files in a particular folder.
  2. Use the methods supplied by the CSV league to read the file into our variables. For example, each row of the file will be stored in an array with an index containing each column data.
  3. During processing, the filename is taken in and checked to see if the validator method in the Validator class matching the filename exists.  
  4. If the method exists, then validation is done for the file and errors are logged into the error log table.
  5. If there are no errors, the class triggers the next event, which is migration using a predefined custom event via the Event API of Drupal. 
  6. This also passes the status of the import to the calling class so that emails can be triggered to the site admins.
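A minimal sketch of the dispatch idea in step 3, with hypothetical class and method names (the real module's names may differ):

```php
<?php

// Sketch of the dispatch idea: derive a validator method name from the
// file name and call it only if the Validators class defines it.
// Class and method names here are illustrative, not the project's own.
class Validators {
  public function validateUsers(array $row): array {
    // Return a list of error strings for this row (empty = valid).
    return $row['email'] === '' ? ['Email is required.'] : [];
  }
}

function validate_file(string $filename, array $rows): array {
  $validators = new Validators();
  // "users.csv" -> "validateUsers".
  $method = 'validate' . ucfirst(basename($filename, '.csv'));
  if (!method_exists($validators, $method)) {
    return ["No validator defined for $filename."];
  }
  $errors = [];
  foreach ($rows as $lineNo => $row) {
    foreach ($validators->$method($row) as $message) {
      // Log with line numbers, as the error log table expects.
      $errors[] = sprintf('%s line %d: %s', $filename, $lineNo + 1, $message);
    }
  }
  return $errors;
}
```

In the real module, the returned errors would be written to the error log table and, when empty, the migration event would be dispatched instead.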

The Validators class

Here, we assign the constraints for each file type in a dedicated method. The input to each validator method is a complete row.  

The Constraints class

This class contains the individual constraints that check whether a particular column type meets the required criteria. These constraints are methods that take the column value as a parameter and return an error message if it does not meet the criteria for that column type. This class is invoked from the Validators class for each column in every row.
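A hedged sketch of what such a constraints class might look like; the constraint names and rules here are invented for illustration:

```php
<?php

// Illustrative constraints: each method takes a column value and returns
// an error message, or NULL when the value passes. Names are assumptions.
class Constraints {
  public static function notBlank($value): ?string {
    return trim((string) $value) === '' ? 'Value cannot be blank.' : NULL;
  }

  public static function isDate($value): ?string {
    // Expect ISO dates such as 2021-01-07.
    $dt = \DateTime::createFromFormat('Y-m-d', (string) $value);
    return ($dt && $dt->format('Y-m-d') === $value)
      ? NULL : 'Value is not a valid YYYY-MM-DD date.';
  }
}
```

Because each constraint only sees a single value, the same method can be reused for any field of that type across all CSV files.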

The Error log

As its name suggests, the validator needed to capture the errors and log them somewhere. We defined a custom table using the database hooks provided by Drupal. A custom view was defined in code to read the data from this table. The errors captured by the constraint class were logged into the database using this logger.

Eventsubscriber and mail notification

We needed the validation to be auto-triggered when the files were downloaded. To achieve this, we tapped into Drupal’s EventSubscriber and Response APIs. 

Referential Integrity checks

Most of the columns did not have any relation with existing data and could be validated on the fly. However, some of the data had to be validated if it has corresponding references either in the database or in another CSV file. We did this as follows.

  1. For those values which act as a parent, dump them into a temporary table, which will be cleared after validation is completed.
  2. When we arrive at another CSV with a column that references values dumped above, then we query the above table to check if the value is present. If yes, return TRUE.
  3. If the value is not present in the temporary table, then we search the Drupal database as the value might have been imported as part of the previous import. If not, then we throw a referential error for that row in the CSV.
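The three steps above can be sketched with plain arrays standing in for the temporary table and the Drupal database (names are illustrative):

```php
<?php

// Sketch of the referential check: a "temporary table" of parent values
// seen in earlier files, with a fallback lookup into already-imported
// data. Arrays stand in for the temp table and the Drupal database.
function reference_exists(string $value, array $tempTable, array $imported): bool {
  if (in_array($value, $tempTable, TRUE)) {
    return TRUE;
  }
  // Fall back to values imported during a previous run.
  return in_array($value, $imported, TRUE);
}
```

When this returns FALSE, a referential integrity error is logged for that row of the CSV.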

The code snippets are available here.

We used the migrated data as a source for a headless backend using REST. For more details on the specifications, refer to our blog on how to validate API response using OpenAPI3.

Future scope and ideas to extend this as a module by itself

We have written the module with an architecture where the validators can be reused but require some coding effort. Below are changes that can be done to make this module a contribution.

  1. Add configurations to have a list of files that need to be validated.
  2. Each file will have an option to add the fields that need to be validated and the type of data (similar to what you have when creating content type).
  3. Based on the above list of files and field types, we can validate any number of CSVs with any number of columns. 
  4. We would need to modify the above classes to fetch the columns' data types and call the respective constraints for each CSV.

As a result of doing the above changes, anyone will be able to use this module to validate CSV files with their own columns.

We hope this blog helped you understand this module and how it can be made more reusable and even contributed. Share your experience in the comments below! 

Jan 07 2021
Jan 07

Drupal is a popular web-based content management system designed for small to large enterprises with needs such as complex workflows, multilingual content, and enterprise integrations. An increasing number of organizations move to Drupal from their current systems every year and with richer features being added to Drupal 9 and planned for 10, the growth will only accelerate. This means that migrations to Drupal remain an ever-popular topic.

Drupal provides a powerful and flexible migration framework that allows us to “write” migrations in a declarative fashion.

The migration framework supports a variety of sources and the ability to specify custom sources and destinations. Furthermore, the framework provides a powerful pipelined transformation process that allows us to map source content to destination fields declaratively.

Thanks to this framework, migration is more of a business challenge than a technical one. The overall process (or workflow) of the migration may differ depending on various business needs and attributes of the current (source) system. Depending on the type of migration, we may plan to reuse built-in migrations (in core or contrib), selectively adapt migrations from different sources, or write entirely new migrations. Further, depending on the source, we may choose to migrate incrementally or all at once.

Many similar decisions go into planning an overall migration strategy and we’ll talk about the following here:
 

01. Migration Concepts

02. Understanding the content

03. Drupal to Drupal migration

04. Migration to Drupal from another system

05. Migration from unconventional sources
 

Migration Concepts

The Drupal migration framework is composable, which is why it can be used flexibly in many scenarios. The basic building entity (not to be confused with Drupal entities) is called a migration. Each migration is responsible for bringing over one discrete piece of content from the source to the destination. This definition is more technical than business-oriented, as a “discrete piece” of content is determined by Drupal’s internal content model and may not match what you might expect as an editor.

For example, an editor may see a page as a discrete piece of content, but the page content type may have multiple files or paragraph fields or term references, each of which has to be migrated separately. In this case, we would have a separate migration for files, paragraph fields, and so on, and then eventually for the page itself. The benefit of defining migrations this way is that it allows the migrate framework to handle each of these pieces of the content itself, providing features like mapping IDs and handling rollbacks.

Correspondingly, a migration specifies a source, a destination, and a series of process mappings that define the transformations that a piece of content may go through while being mapped from a source field to a destination field. These are called plugins (because of their internal implementation). We might use different source plugins depending on the source system with the common ones provided by Drupal core (for sources such as Drupal, SQL databases, etc.).

There are dozens of contributed modules available for other systems such as WordPress, CSV files, etc. Similarly, process plugins are diverse and powerful, allowing a variety of transformations on the content within the declarative framework. Destination plugins, on the other hand, are fewer in number because they only deal with Drupal entities.
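To make the declarative style concrete, here is a minimal, hypothetical migration definition; the content type, column names, and file path are invented, and the csv source plugin comes from the contributed migrate_source_csv module:

```yaml
id: example_article
label: Example article migration
source:
  plugin: csv            # from the contributed migrate_source_csv module
  path: /tmp/articles.csv
  ids: [id]
process:
  title: headline        # map the source "headline" column to the node title
  body/value: body_text
  body/format:
    plugin: default_value
    default_value: basic_html
destination:
  plugin: 'entity:node'
  default_bundle: article
```

Each key under process is a destination field, and its value is either a source column or a pipeline of process plugins, which is exactly the "pipelined transformation process" described above.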

Incremental Migrations

The Drupal migrate framework supports incremental migrations as long as the source system can identify a certain “highwater mark” which indicates if the content has changed since a recent migration.

A common “highwater mark” is a timestamp indicating when the content was last updated.

If such a field is not present in the source, we could devise another such field as long as it indicates (linearly) that a source content has changed. If such a field cannot be found, then the migration cannot be run incrementally, but other optimizations are still available to avoid a repeat migration.
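The highwater idea can be sketched as a simple filter over source rows, here in plain PHP with an invented row shape (the real framework tracks the mark in its map tables):

```php
<?php

// Sketch of the highwater idea: only rows whose "changed" timestamp is
// above the stored mark are migrated, and the mark advances afterwards.
function filter_by_highwater(array $rows, int $highwater): array {
  $selected = array_values(array_filter(
    $rows,
    fn(array $row) => $row['changed'] > $highwater
  ));
  // The new mark is the largest timestamp seen (or the old one if none).
  $marks = array_column($selected, 'changed');
  $newMark = $marks ? max($marks) : $highwater;
  return [$selected, $newMark];
}
```

On the next run, only rows changed after the stored mark are touched, which is what makes the migration incremental.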

Handling dependencies and updates

The migrate framework does support dependencies between different migrations, but there are instances where there might be dependencies between two content entities in the same migration. In most cases, the migrate framework can transparently handle this by creating what are known as “stubs.” In more complex cases, we can override this behavior to gain more control over stub creation.

As discussed in the previous section, it is better to use “highwater marks” to handle updates, but they may not be available in some cases. For these, the migrate framework stores a hash of the source content to track whether the migration should be run again. Again, this is handled transparently in most cases but can be overridden when required.

Rollbacks and error management

As long as we follow the defined best practices for the migrate framework, it handles fancier features such as rollbacks, migration lookups, and error handling. Migrate maintains a record of each content piece migrated for every migration, its status, hashes, and highwater marks. It uses this information to direct future migrations (and updates) and even allow rollbacks of migrations.
 

Understanding the content

Another important part of the equation is the way content is generated. Is it multilingual? Is it user-generated content? Can content be frozen/paused during migration? Do we need to migrate the revision history, if available? Should we be cleaning up the content? Should we ignore certain content?

Most of these requirements may not be simple to implement, depending on the content source. For example, the source content may not have any field to indicate how the content is updated and in those cases, an incremental migration may not be possible. Further, if it’s impossible to track updates to source content using simple hashes, we may have to either ignore updates or update all content on every migration. Depending on the size of the source and transformations on the content, this may not be possible and we have to fall back to a one-time migration.

The capabilities of the source dictate the overall migration strategy.

Filtering content is relatively easy. Regardless of the source, we can filter or restructure the content within the migration process or in a custom wrapper on a source plugin. These requirements may not significantly impact the migration strategy.

Refactoring the content structure

A migration can, of course, be a straightforward activity where we map the source content to the destination content types. However, a migration is often a wonderful opportunity to rethink the content strategy and information flow from the perspective of end-users, editors, and other stakeholders. As business needs change, there is a good chance that the current representation of the content may not provide for an ideal workflow for editors and publishers.

Therefore, it is essential to look at the intent of the site and the user experience it provides to redefine what content types make sense now and in the near future. At this stage, we also look at common traits that distinguish the content we want to refactor and write mappings accordingly. With this, we can alter the content structure to split or combine content types, split or combine fields, transform free-flowing content to have more structure, and so on. The possibilities are endless, and most of these are simple to implement.

Furthermore, in many cases, the effort involved in actually writing the migration is not significantly different.
 

Drupal to Drupal migration

This is usually the most straightforward scenario. The core Drupal migrate framework already includes source plugins necessary for reading the database of an older version of Drupal (6 or 7). In fact, if the intention is to upgrade to Drupal 8 or 9 from Drupal 6 or 7, then the core provides migrations to migrate configuration as well as content. This means that we don’t even need to build a new Drupal site in many simple cases. It is simply a question of setting up a new Drupal 8 (or 9) website and running the upgrade.

In practice, however, Drupal is rarely used for such simple cases, and any non-trivial site needs some rebuilding.

A typical example is “views,” which are not covered by migrations. Similarly, page manager pages, panels, etc., need to be rebuilt as they cannot be migrated. Further, Drupal 8 brought improved and updated techniques for building sites, and in such cases the only option is to rebuild the functionality using the new techniques.

In some cases, it is possible to bring over the configuration selectively and remove the features you want to rebuild using a different system (or remove them altogether). This mix-and-match approach enables us to rebuild the Drupal site rapidly and also use the migrations provided in core to migrate the content itself. Furthermore, many contributed modules augment or support the core migration, which means that Drupal core can transparently migrate certain content belonging to contributed modules as well (this often happens in the case of field migrations). If the modules don’t support a migration path at all, this would need to be considered separately, similar to migration from another system (as explained in the next section).

Incremental migrations are simpler to write in the case of Drupal-to-Drupal migration, as the source system is Drupal and it supports all the metadata fields such as timestamps of content creation and updates. This information is available to the migrate framework, which can use it to enable incremental migrations. If the content is stored in a custom source within the legacy Drupal system that lacks timestamps, a one-time migration may have to be written instead. See the previous section on incremental migrations for more details.

While Drupal-to-Drupal migrations can be very straightforward and even simple, it is worth looking into refactoring the content structure to reflect the current business needs and editorial workflow in a better way. See the section on “Refactoring the content structure” for more details.
 

Migration to Drupal from another system

Migrating from another popular system (such as WordPress) is often accomplished by using popular contrib modules. For instance, there are multiple contrib modules for migrating from WordPress, each of which migrates from a different source or provides different functionalities. Similarly, contrib modules for other systems may offer a simple way to define migration sources.

Alternatively, the migrate framework can use the database directly as the source of content. Drupal core provides a source plugin that can read from any MySQL-compatible database, and there are contributed modules that allow reading from other databases such as MSSQL.

Similar to the Drupal migration source, features such as incremental migrations, dependencies, and update tracking may be used here as long as their conditions are satisfied. These are covered in detail in earlier sections. 

Check out the case study that outlines migrating millions of content items to Drupal from another system.
 

Migration from unconventional sources

Some systems are complex enough to present a challenge during migration, even with the sophistication of source plugins and custom queries. Or there may be times when the content is not conventionally accessible. In such scenarios, it may be more convenient to have an intermediate format for content such as a CSV file, XML file, or similar formats. These source plugins may not be as flexible as a SQL source plugin (as advanced queries or filtering may not be possible over a CSV data source). However, with migrate’s other features, it is still possible to write powerful migrations.
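
For example, with the contributed migrate_source_csv module, a CSV file can serve as the source. A minimal sketch (the file path and column names are assumptions):

```yaml
source:
  plugin: csv
  path: /data/export/articles.csv
  # Column(s) that uniquely identify each row.
  ids:
    - id
  # The first row (offset 0) contains the column headers.
  header_offset: 0
```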

Due to limitations of such sources, some of the strategies such as incremental migration may not be as seamless; nevertheless, in most cases, they are still possible with some work and automation.

An extreme case is when content is not available in any structured format at all, even as CSVs. One common scenario is when the source is not a system per se, but just a collection of HTML files or an existing web site. These cases are even more challenging, as extracting the content from the HTML can be difficult. Such migrations need higher resilience and extensive testing. In fact, if the HTML files vary significantly in their markup (which is expected when the files are hand-edited), it may not be worth trying to automate this; most people prefer a manual migration in this case.
 

Picking a strategy

Wherever possible, we would like to use all the bells and whistles afforded by the migrate framework but, as discussed previously, a lot depends on the source. We would like every migration to be discrete, efficient with incremental migration support, able to track updates, and to use the actual source of the content directly. However, for all of this to be possible, the source content system must satisfy certain conditions, as explained in the “Understanding the content” section.

The good news is that we often find a way to solve the problem and it is almost always possible to find workarounds using the Drupal migrate framework.

Jan 07 2021
Jan 07

Drupal 9 Module Development

Before we get our hands dirty with menus and menu links, let's talk a bit about the general architecture behind the menu system. To this end, I want to present its main components, some of its key players, and the classes you should be looking at. As always, no great developer has ever relied solely on a book or documentation to figure out complex systems.

Menus

Menus are configuration entities represented by the following class: Drupal\system\Entity\Menu. I previously mentioned that we have something called configuration entities in Drupal, which we explore in detail later in this book. However, for now, it's enough to understand that menus can be created through the UI and become an exportable configuration. Additionally, this exported configuration can also be included inside a module so that it gets imported when the module is first installed. This way, a module can ship with its own menus. We will see how this latter aspect works when we talk about the different kinds of storage in Drupal. For now, we will work with the menus that come with Drupal core.
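
As a sketch of what such shipped configuration looks like, a menu exported into a module's config/install directory might read as follows (the menu ID and labels are hypothetical):

```yaml
# config/install/system.menu.mymodule_tools.yml
id: mymodule_tools
label: 'Tools'
description: 'Links provided by the My Module module.'
langcode: en
locked: false
```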

Each menu can have multiple menu links, structured hierarchically in a tree with a maximum depth of 9 links. The ordering of the menu links can be done easily through the UI or via the weighting of the menu links, if defined in code.

Menu links

At their most basic level, menu links are YAML-based plugins. To this end, regular menu links are defined inside a module_name.links.menu.yml file and can be altered by other modules by implementing hook_menu_links_discovered_alter(). When I say regular, I mean those links that go into menus. We will see shortly that there are also a few other types.
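
A regular menu link definition might look like this (the module, route, and menu names are illustrative):

```yaml
# mymodule.links.menu.yml
mymodule.reports:
  title: 'Reports'
  description: 'An overview of site reports.'
  route_name: mymodule.reports
  menu_name: main
  weight: 10
```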

There are a number of important classes you should check out in this architecture though: MenuLinkManager (the plugin manager) and MenuLinkBase (the menu link plugins base class that implements MenuLinkInterface).

Menu links can, however, also be content entities. The links created via the UI are stored as entities because they are considered content. The way this works is that for each created MenuLinkContent entity, a plugin derivative is created. We are getting dangerously close to advanced topics that are too early to cover. But in a nutshell, via these derivatives, it's as if a new menu link plugin is created for each MenuLinkContent entity, making the latter behave as any other menu link plugin. This is a very powerful system in Drupal.

Menu links have a number of properties, among which is a path or route. When created via the UI, the path can be external or internal or can reference an existing resource (such as a user or piece of content). When created programmatically, you'll typically use a route.

Multiple types of menu links

The menu links we've been talking about so far are the links that show up in menus. There are also a few different kinds of links that show up elsewhere but are still considered menu links and work similarly.

Local tasks

Local tasks, otherwise known as tabs, are grouped links that usually show up above the main content of a page (depending on the region where the tabs block is placed). They are usually used to group together related links that deal with the current page. For example, on an entity page, such as the node detail page, you can have two tabs—one for viewing the node and one for editing it (and maybe one for deleting it); in other words, local tasks.

Local tasks take access rules into account, so if the current user does not have access to the route of a given tab, the link is not rendered. Moreover, if that means only one link in the set remains accessible, that link doesn't get rendered as there is no point. So, for tabs, a minimum of two links are needed for them to show up.

Modules can define local task links inside a module_name.links.task.yml file, whereas other modules can alter them by implementing hook_menu_local_tasks_alter().
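
A sketch of such a definition (the route names are hypothetical); the base_route is what groups tabs together on the same page:

```yaml
# mymodule.links.task.yml
mymodule.settings_tab:
  title: 'Settings'
  route_name: mymodule.settings
  base_route: mymodule.overview
  weight: 5
```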

Local actions

Local actions are links that relate to a given route and are typically used for operations. For example, on a list page, you might have a local action link to create a new list item, which will take you to the relevant form page.

Modules can define local action links inside a module_name.links.action.yml file, whereas other modules can alter them by implementing hook_menu_local_actions_alter().
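
A sketch of a local action definition (route names are hypothetical); appears_on lists the routes on which the action link is shown:

```yaml
# mymodule.links.action.yml
mymodule.item_add:
  title: 'Add item'
  route_name: mymodule.item_add
  appears_on:
    - mymodule.item_list
```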

Contextual links

Contextual links are used by the Contextual module to provide handy links next to a given component (a render array). You probably encountered this when hovering over a block, for example, and getting that little icon with a dropdown that has the Configure block link.

Contextual links are tied to render arrays. In fact, any render array can show a group of contextual links that have previously been defined.

Modules can define contextual links inside a module_name.links.contextual.yml file, whereas other modules can alter them by implementing hook_contextual_links_alter().
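
A sketch of a contextual link definition (names are hypothetical); the group key is what a render array references via its #contextual_links property:

```yaml
# mymodule.links.contextual.yml
mymodule.item_edit:
  title: 'Edit item'
  route_name: mymodule.item_edit
  group: mymodule_item
```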

For more on the menu system and to see how the twist unfolds, do check out my book, Drupal 9 Module Development.

Thanks for the support.

Jan 06 2021
Jan 06

Nowadays every organization has (or needs) some kind of group chat, like Slack or Microsoft Teams. But those chats are often detached from other team activity: project management, social posts, files and folders. Communication, work, and documentation become fragmented easily, which is very frustrating.

Often, group chat is just another island with a lot of distraction.

It doesn't have to be like that: our brand new, integrated group chats cause none of the above. They are:

Integrated with all other teamwork

Integrated in groups: use our group chat together with all your other team activity, like:

  • Projects and tasks
  • Messages
  • Stories
  • Notebooks
  • Files and folders
  • Social posts
  • Polls
  • Culture/social questions
  • Check-ins
  • Shout-outs
  • Icebreakers
  • A task board (kanban) that's usable for users of all digital levels

Customizable, brandable

Just like the rest of the Lucius features, our group chat is:

  • Customizable to your organization's needs and workflows
  • ‘Brandable’ to your company's house style
  • Because: 100% open source

And last, but not least:

  • Multilingual
  • Also works easily with clients
  • Extendable with extra needed modules, integrations, functions or permissions
  • OpenSaas
  • Private hosting is possible

Basic features


Test our Group chat now, or install it yourself open source.

If you want to test our group chat this instant, that is of course possible: click below to get started. Or download and install OpenLucius open source yourself.

All-in-one toolkit for remote work and culture.   Try for free now

Download and install OpenLucius open source yourself.   Download and install

Jan 06 2021
Jan 06

This function will remove the hook_query_entity_query_alter() implementation of the Group contrib module. If it doesn't, check the weight of your custom module in the core.extension.yml file. There you can change

my_module: 0

to

my_module: 10

The module weight must be higher than the module weight of the contrib module which you want to modify.
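
For context, the weights live under the module key of core.extension.yml, so the relevant excerpt would look something like this (assuming the contrib module keeps the default weight of 0):

```yaml
# core.extension.yml (excerpt)
module:
  group: 0        # the contrib module being altered
  my_module: 10   # higher weight: its hooks run after group's hooks
```

Alternatively, the weight can be set programmatically, for example in an update hook, with module_set_weight('my_module', 10).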

Jan 06 2021
Jan 06

Every day, millions of new web pages are added to the internet. Most of them are unstructured, uncategorized, and nearly impossible for software to understand. It irks me.

Look no further than Sir Tim Berners-Lee's Wikipedia page:

[Image: the markup Wikipedia editors write for Tim Berners-Lee's Wikipedia page — complex and inconsistent (source).]
[Image: what visitors of Wikipedia see.]

At first glance, there is no rhyme or reason to Wikipedia's markup. (Wikipedia also has custom markup for hieroglyphs, which admittedly is pretty cool.)

The problem? Wikipedia is the world's largest source of knowledge. It's a top 10 website in the world. Yet, Wikipedia's markup language is nearly impossible to parse, Tim Berners-Lee's Wikipedia page has almost 100 HTML validation errors, and the page's generated HTML output is not very semantic. It's hard to use or re-use with other software.

I bet it irks Sir Tim Berners-Lee too.

[Image: the markup Wikipedia editors write (source).]
[Image: the HTML code Wikipedia (MediaWiki) generates — it could be more semantic.]

It's not just Wikipedia. Every site is still messing around with custom <div>s for a table of contents, footnotes, logos, and more. I could think of a dozen new HTML tags that would make web pages, including Wikipedia, easier to write and reuse — a table-of-contents tag, a footnote tag, a logo tag, and many more.

A good approach would be to take the most successful Schema.org schemas, Microformats and Web Components, and incorporate their functionality into the official HTML specification.

Adding new semantic markup options to the HTML specification is the surest way to improve the semantic web, improve content reuse, and advance content authoring tools.

Unfortunately, I don't see new tags being introduced. I don't see experiments with Web Components being promoted to official standards. I hope I'm wrong! (Cunningham's Law states that the best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer. If I'm wrong, I'll update this post.)

If you want to help make the web better, you could literally start with Sir Tim Berners-Lee's Wikipedia page, and use it as the basis to spend a decade pushing for HTML markup improvements. It could be the start of a long and successful career.

— Dries Buytaert

Dries Buytaert is an Open Source advocate and technology executive.

Jan 05 2021
Jan 05

How can we leverage Open Source contribution (in particular to Drupal) to maximize value for our customers? In this article, I would like to share the results of a recent workshop we held on this question as part of our internal gathering LiipConf.


Together with a few colleagues we met for a brainstorming session. The goals set for this session were:

  • Share experiences about open source contribution at Liip and together with customers
  • Reflect on added value we can generate when contributing to Open Source
  • Identify any blockers, uncertainties or difficulties encountered when it comes to Open Source contribution
  • Come up with ways of including Open Source contribution into our workflows
  • Brainstorm what our customers would find valuable to know about Open Source contribution

Check-in

In our check-in, we asked which topics attracted people to come to the workshop. We had a good mix of engineers, product owners and UX folks from Drupal and Symfony in our meeting. The topics of interest spanned from “motivating clients to pay to create reusable solutions” and “sharing experiences in the context of contributions” to “getting started with contributions in 2021”, “listening in”, and “finding ways to give back”.

Method

Thanks to Emilie’s suggestion and facilitation, we used the Customer Forces Canvas to structure the discussion.

[Image: Open Source contribution board based on Miro.com and the Customer Forces Canvas.]

The canvas allowed us to capture different aspects of adopting contribution practices by asking structured questions:

  1. Triggering Event - What were those events that led to your decision to contribute back to Open Source?
  2. Desired Outcome - What outcome were you looking for?
  3. Old Solution - What solution were you using that was already in place?
  4. Consideration Set - What were alternative solutions that were considered?
  5. New Solution - What solution was selected? Why?
  6. Inertia - What were some concerns/anxieties you had before starting to contribute?
  7. Friction - What were some concerns after you started contributing?
  8. Actual Outcome - What was the actual outcome after starting to contribute? Did it meet your expectations?
  9. Next Summit - What would you like to see next for contribution? Why?

Discussion points

Examples of triggering events mentioned were finding issues in existing Open Source solutions. Another key triggering event was that once the client understood how Open Source works, they would be much more motivated to fund contributions. Often it is the motivation of an individual or the team striving to create better solutions without the need to maintain custom code individually for a customer project.

Goals we are striving for when contributing to Open Source include externalizing maintenance efforts to the community at large as well as doing good. By contributing back we are fueling the ecosystem that keeps our software up to date and innovative. We create more sustainable solutions when we are able to use standardized building blocks and follow community best practices.

When facing contribution opportunities, we are often presented with various ways to solve the issue. Fix the issue in custom code (miss the chance of contribution), fix the issue in a contributed module or fix the issue in Drupal core. Depending on the layer of abstraction, we can shoot for quick solutions or spend more time working on a generic solution. Alternatives to fixing the issues ourselves also include that we sponsor other maintainers to work on a sustainable solution that includes the resolution of the current issue.

We have also encountered cases where relying on too much abstraction created a risk for the project over time. Especially when you deviate from the standard components, it might become easier to internalize the functionality into the custom project's code base so that it can be adapted without context switching, but at the cost of maintaining the functionality without community support.

Even non-perfect code or work-in-progress can be released as Open Source so that others are able to build on it, and eventually these building blocks will evolve further. Sandbox projects or alpha releases can serve well as incubators for contributed code. Over time, as the project matures, the semantic versioning approach with alpha and beta releases makes it clear what users of the module can expect.

When discussing what was holding us back from contributing, many reasons came up. Contributing to Drupal core takes more time than writing custom code. Sometimes the folks involved simply don't understand how Open Source works or what it is good for. When we create quick-and-dirty solutions, we sometimes don't feel quite ready to Open Source them. Sometimes we just don't feel a need to contribute back because we can achieve our short-term goals without doing so. Folks with families mentioned that they can't commit private time and need to focus on getting the job done during work hours.

When discussing what held us back while making a contribution, we found that sometimes the effort invested doesn't match the outcome: we need to invest more time than we think the problem is worth. This can be driven by the fact that contributed code may imply higher quality standards enforced by peer review from the community. There is also the feeling that once a solution is Open Source, we need to maintain it and invest more time continuously. And if a custom solution is cheaper, why should the client pay for the generic one when they cannot reuse it themselves? Sometimes we are not certain whether anyone else will be willing to use our custom code.

We talked about the benefits folks saw when contribution was adopted as a practice. Getting good community feedback on their solutions, and having their solutions improved and evolved further to match new use cases, were mentioned. Giving talks at conferences was also found to be valuable. As a next step for contribution, folks mentioned that they would like to get help promoting their contributed modules so that they get adopted by a wider audience.

We also identified some USPs (Unique Selling Propositions) for contribution during the discussion. Clients do not need to pay for improvements contributed by the community. The maintenance of solutions based on contribution becomes more reliable. Contribution elevated self-esteem for clients and teams and helped increase visibility. It serves as a sales argument for agencies towards clients and also helps engineers get hired by a Drupal agency like Liip. Some folks even manage to make money on platforms like GitHub Sponsors or Open Collective.

Takeaways

We closed our meeting to collect some takeaways and what’s next for folks in contribution. Here’s a list of the key takeaways:

  • A “contrib-first approach” that incorporates the contribution mindset
  • Adding contribution checkpoints into the definition of ready/done
  • Inviting for cross-community contribution between Symfony and Drupal
  • Raising contribution in daily meetings, motivating each other to speak at conferences
  • Making sure that our contributions are used by others
  • Helping to find areas of contribution for non-developers
  • Balancing being a taker vs. a maker
  • Evolving a plan to communicate our efforts around contribution

What’s next for you in contribution? Have you experimented with the Customer Forces Canvas? Thank you for improving Open Source & let us know in the comments.

Image credit: Customer Forces Canvas (c) LEANSTACK
https://leanstack.com/customer-forces-canvas

Jan 05 2021
Jan 05

MidCamp 2021 is going to be a camp like none other. We’re tired of virtual sessions and we’re reimagining our camp to make it more community-oriented and interactive! 

In early March 2020, when we made the decision to take MidCamp 2020 virtual, we had very little idea of how the year would unfold. Our 2020 camp was a blast but, nine months into the pandemic in the US, we needed to reassess our situation for 2021. We did some digging and tried to get back to our roots. Here's where we landed.

Tl;dr

  • MidCamp is happening virtually March 24-27, 2021.
  • It’s going to be something totally different—focused on humans and providing personal growth through collaboration.
  • Tickets will be pay-what-you-wish.
  • Keep an eye out for sponsor info, coming soon.

WHY are we doing this?

  • We want to sustain the Drupal project & community. Local events are often considered the best onramp to the Drupal project, and we need to keep camps healthy to keep Drupal healthy.
  • We want to maintain our presence as a brand and a team. MidCamp is now 7 years old, we have a great team of organizers, and we want to keep the gang together.

WHAT are we doing?

MidCamp 2021 will be a four-day event, but this year we’re designing the program (and the tickets) to be much more freeform and drop-in/drop-out. None of us have the time or the energy for four full days of Zooming anymore.

  • Wednesday: Community Day. A new concept we’re piloting, Community Day is meant to onboard attendees to the Drupal Community. Attendees will be presented with a range of introductory presentations, interspersed with small-group mentoring sessions.
  • Thursday: Opening Ceremonies. After learning about Drupal and the Community, attendees will enjoy a day of lightly structured activities to decompress, have fun, and have some human time. Twitch party? Gather town? D&D quest? They’re all possible.
  • Friday: Unconference. Instead of formal sessions, we’ll do Drupally things in a one-day un-conference format. If you’re unfamiliar with the format, read about organizing and attending.
  • Saturday: Contribution Day. Our traditional day to give back to the Drupal project. All experience levels are welcome, and you might even get your first core commit!

WHO is this for?

MidCamp is for “people who use, develop, design, and support the Web’s leading content management platform, Drupal.” We wanted to elaborate on that statement this year.

MidCamp 2021 is for:

  • People who are completely new to Drupal, that could be:
    • a technology professional who has never used Drupal,
    • a recent dev bootcamp or computer science / information science graduate, 
    • a job-seeker or career-changer with an interest in being involved in a vibrant and supportive web development community.
  • Mid to Senior-level professionals who work with Drupal in any fashion.
  • Drupal contributors of any kind.

WHERE is it?

The Internet. We’ll host activities across a variety of platforms, but all will be accessible from anywhere in the world.

WHEN is it?

March 24-27, 2021. Most activities will occur during business hours, Central Time.

HOW (much) is it?

As our event will be non-traditional and much less costly to run than an in-person event, MidCamp 2021 will be pay-what-you-wish. Individual and corporate sponsorship information is coming soon.

In Conclusion

Thanks for sticking around. We’re excited for what 2021 has in store. Join the conversation on Slack, listen in on Twitter, or subscribe to the email list.

Jan 05 2021
Jan 05

Even though 2020 came to a close with an overwhelming sense of “good riddance,” the year was not all bad. It was filled with as many surprises as opportunities for growth, learning, and new developments.

The realities of remote work revealed new levels of resilience and flexibility, Drupal 9 was released right on time, and here at Promet Source, we pulled together a lot of collective brainpower to introduce new possibilities for empowering content editors while streamlining web development. 

Our weekly blog posts reflect our commitment to draw upon a depth and breadth of our team’s expertise to convey best practices, new insights, innovations, and thought leadership for the Drupal and web development communities.

Here are Promet's 10 blog posts that grabbed the most attention.  

 

1. Drupal Enabled Drag-and-Drop Content Management, by Chris O’Donnell


Leading up to the end-of-year launch of Provus, which offers a new approach to designing, developing, and managing Drupal sites with intuitive, no-code, drag-and-drop page-building tools, this post explained the foundations of component-based web design systems and the accompanying leaps forward for efficiency and content editor empowerment. Read Drupal Enabled Drag and Drop Content Management.

 

2. Provus! Drupal Content Editing Reimagined, by Mindy League


Signaling new directions and game-changing possibilities for 2021, this final post of the year sparked a surge of interest in Provus, Promet’s new platform for better content editing in Drupal, and presented insight into the kind of thinking that drove the development of this new platform. Read Provus! Drupal Content Editing Reimagined.
 

3. How to Master Entity Access in Drupal, by Bryan Manalo


The first in a two-part series on entity access, this how-to provided an in-depth tutorial on hook_entity_access(), along with a discussion of when and how to use it. Read How to Master Entity Access in Drupal.

4. How to Facilitate an Innovative Remote Meeting, by Mindy League


Early into the pandemic, as many began looking for new ways to enhance engagement, Promet offered a new approach for breathing new life into remote meetings by applying the techniques of design thinking and human-centered design. Read How to Facilitate an Innovative Remote Meeting.

5. Anticipating Post Pandemic Web Design Trends, by Mindy League 


As Covid-19 heads for the history books, “normal” stands to look a lot different than how we remembered it. Pointing to design changes that have been sparked by global upheaval in past decades, this post looked at what’s next and cited upcoming trends for web design. Read Anticipating Post Pandemic Web Design Trends.

6. Remote Work Success in a Time of Caution and Quarantine, by Pamela Ross


With a track record of attracting talent from all over the world and effectively collaborating via Zoom, Promet Source entered the pandemic with an edge over companies that were scrambling to adjust to working remotely. This post shared some of Promet’s expertise on the topic, with five key strategies for optimizing remote work opportunities. Read Remote Work Success in a Time of Caution and Quarantine.

7. Drupal 9 Has Dropped! What to Do Now, by Aaron Couch


Despite a global pandemic, Drupal 9 was released on time, as promised, on June 3, 2020. This post covers the key features of Drupal 9 and lays out a strategy for assessing migration readiness. Read Drupal 9 Has Dropped! What to Do Now.

8. Pros and Cons of Five Web Accessibility Tools, by Denise Erazmus 

scales for weighing pros and cons

There are a wide range of available tools designed to support ADA web accessibility compliance, but they vary in the number and types of errors they detect and the degree to which they can help ensure compliance. To help sort through options, this post covers the five most popular tools or extensions, along with the key pros and cons of each. Read Pros and Cons of Five Web Accessibility Tools.

9. Always Be Optimizing for SEO, by Ishmael Fusilero

Optimize for SEO

This post explains why and how organizations need to approach SEO as an ongoing activity, consistently monitoring metrics, along with a strategy to leverage the intelligence hidden within the data. Read Always Be Optimizing for SEO.

10. Drupal 8 Load Testing with Locust, by Josh Estep

Load Testing with Locust

Load-testing is an essential step in the development process. It quantifies the amount of traffic a site can sustain both during development and prior to launch. This post provides a how-to on the use of Locust as an open source load testing tool for Drupal 8. Read Drupal 8 Load Testing with Locust.

With a diverse talent base, Promet Source is well positioned to share expertise and insights that connect, engage, inform, and spark new ideas. Do you have big plans for your website in 2021? Let us know what we can do to help you achieve your goals!
 

Banner with link to subscribe to Promet Newsletter
 


 

Jan 05 2021
Jan 05

Want to give back to the Drupal Community? Here’s your chance. Drupal Global Contribution Weekend, January 29-31, is a virtual worldwide event everyone can participate in from anywhere in the world. 

Jan 05 2021
Jan 05

The following is a step by step instruction for implementing reading minutes left for a particular article, blog, or similar, just like we see on medium.com.

The JS file

  • I have used this JS library.
  • Place this code in a JS file named read-remaining-minutes.js and place it in the corresponding theme.
(function($) {
  $.fn.readingTimeLeft = function (options) {

    var s = $.extend({}, {
      stepSelector: '*',
      wordPerMinute: 100,
      eventName: 'timechange'
    }, options);

    var $this   = $(this)
      , $window = $(window)
      , $steps  = $this.find(s.stepSelector);

    // For each step element, store the quantity of words to come.
    $steps.each(function (i, el) {
      var textAhead = $steps.slice(i, $steps.length).text();
      $(el).data('words-left', textAhead.trim().split(/\s+/g).length);
    });

    // Filters elements that are in the viewport.
    $.fn.filterVisible = function () {
      var wW = $window.width(), wH = $window.height();
      return this.filter(function (i, e) {
        var rect = e.getBoundingClientRect();
        return rect.top >= 0 && rect.right <= wW &&
          rect.bottom <= wH && rect.left >= 0;
      });
    };

    // Runs fn at most once every `limit` milliseconds.
    function throttle(fn, limit) {
      var wait = false;
      return function () {
        if (wait) return;
        fn.call(); wait = true;
        setTimeout(function () { wait = false; }, limit);
      };
    }

    var triggerOn = 'scroll.' + s.eventName + ' resize.' + s.eventName;

    // Throttle updating to 50ms.
    $(window).on(triggerOn, throttle(function (e) {
      var wordsLeft = $steps.filterVisible().last().data('words-left');
      $this.trigger(s.eventName, wordsLeft / s.wordPerMinute);
    }, 50));

    // Destroy function.
    $this.on('destroy.readingTimeLeft', function (e) {
      $(window).off(triggerOn);
      $steps.removeData('words-left');
    });

    return $this;
  };
}(jQuery));
  • Next in the template.php file for your corresponding theme, you need to add the above JS file for the particular content type. You can do something like this below:
if ($vars['node']->type == 'article') {
  drupal_add_js(drupal_get_path('theme', 'my_theme') . '/js/read-remaining-minutes.js');
}
  • After you have told Drupal to add the JS file, through the above code, your JS code will be ready for the page.
  • Now you need to specify where you want to add this functionality.
  • For that, I have a custom JS named “my-custom-js-code.js” file, in this same theme itself, where I usually write all my custom JS. Here I will specify my custom JS code.
// Reading time left for a blog post.
// #calculatable-content is the id of the content on which we want to
// apply the reading time calculation.
$('#calculatable-content').readingTimeLeft()
  .on('timechange', function (e, minutesLeft) {
    if (isNaN(minutesLeft)) {
      // .time-left is the class belonging to the read remaining div.
      $('.time-left').hide();
    }
    else {
      // If less than 1 min remains, display "Content Finished";
      // otherwise show the minutes left.
      if (Math.round(minutesLeft) < 1) {
        $('.time-left').text('Content Finished');
      }
      else {
        $('.time-left').text(Math.round(minutesLeft) + ' min left');
      }
      $('.time-left').show();
    }
  });
$(window).trigger('scroll');
  • Here I am considering that when the scroll reaches the end, it will show “Content Finished”. I will explain the id and the class used below.
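As a quick sanity check of the arithmetic involved: the timechange handler receives wordsLeft divided by wordPerMinute, and the display code rounds that value. A standalone sketch in plain JavaScript:

```javascript
// With the plugin's default wordPerMinute of 100, having 250 words
// still to read works out to 2.5 minutes, which Math.round() turns
// into the "3 min left" label.
var wordsLeft = 250;
var wordPerMinute = 100;
var minutesLeft = wordsLeft / wordPerMinute;
console.log(minutesLeft);             // 2.5
console.log(Math.round(minutesLeft)); // 3
```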

Modifying .tpl.php

  • We have placed our JS codes as needed. Now we need to link it to the class in HTML so that it appears on the page.
  • I have a .tpl.php file which is responsible for rendering all the HTML content for the particular page named “custom-template.tpl.php”
  • In this .tpl.php file, at the place where you want this read remaining minutes block of text to appear,  you have to specify the HTML for it.
 
  • The time-left class is the wrapper class for the block, that is the entire block of the text itself.
  • The id calculatable-content is what we are using to calculate the time left, which will dynamically change while you scroll through the page.
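Putting those two pieces together, the markup in custom-template.tpl.php might look something like this minimal sketch (how you print the node content depends on your template; render($content) here is just a placeholder):

```
<div class="time-left"></div>
<div id="calculatable-content">
  <?php print render($content); ?>
</div>
```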

Implementing CSS

  • We need to add some decent CSS so that the block appears on the page without hurting the eyes!
  • You can use the following styles; note that the nesting and the &:before selector are SCSS syntax, so compile them (or flatten them) if your theme uses plain CSS. They place the block at the top-right section of the page.
.time-left {
  position: fixed;
  right: 0;
  top: 176px;
  padding: 10px 10px 10px 40px;
  background: #068bb8;
  color: #fff;
  font-size: 15px;
  line-height: 19px;
  cursor: default;
  border-bottom: 0px;
  z-index: 999;
  &:before {
    content: url('../../../../../sites/all/themes/my_theme/images/time-left-white.png');
    position: absolute;
    top: 12px;
    left: 15px;
    @media screen and (max-width: 767px) {
      top: 8px;
      left: 10px;
    }
  }
  @media screen and (max-width: 767px) {
    padding: 6px 6px 6px 35px;
    font-size: 12px;
  }
}

Final approach

Now you just need to clear the cache, sit back, and enjoy. Observe how the time changes as you scroll through the page!

Jan 05 2021
Jan 05

You might want to add the functionality for a magnific popup where there are multiple items, say images, videos which on clicking would open up in a popup and you would be able to scroll through those. Something like this: https://dimsemenov.com/plugins/magnific-popup/.

Worry not! You do not need to go through the entire documentation in the above link. I have done the hard work for you so that you can get it done in the wink of an eye!

Initialization and modification in custom JS

  • First you need to include the JS library in your theme.
  • The minified file is quite big, so I am not providing it here.
  • You can find the minified JS file here: https://github.com/dimsemenov/Magnific-Popup/blob/master/dist/jquery.magnific-popup.min.js.
  • Place this JS file in the theme you wish to use.
  • Next in the template.php file for your corresponding theme, you need to add the above JS file for the particular content type. You can do something like this below:
if ($vars['node']->type == 'article') {
  drupal_add_js(drupal_get_path('theme', 'my_theme') . '/js/jquery.magnific-popup.min.js');
}
  • Once done, you need to write the custom js, where you want this magnific popup to be triggered.
  • The custom JS should look something similar to this:
// Gallery section magnific popup
if ($('.gallery-section .tab-content').length) {
  // magnificPopup for tab 1
  if ($('.gallery-section .tab-content .tab1').length) {
    $('.gallery-section .tab1').magnificPopup({
      delegate: 'a',
      type: 'image',
      gallery: {
        enabled: true
      }
    });
  }
}

Some things to note:

  • I had a tabbed gallery section. Each of the tabs contained a video as the first element and then the rest were images.
  • Here first I check if the gallery section exists. If so, then I again check if the particular gallery tab exists. If so, then for that particular gallery tab I implement the magnific popup.
  • Here delegate: 'a' means that the popup functionality is applied to the “a” tags inside the container.
  • I have specified the type as image. You might wonder how it would then work for the video; I will explain that in a later section.
  • Finally, we enable the gallery option so that you can cycle through the items.

Implement the custom HTML

  • Implement the custom HTML as you like; a tabbed gallery section, in my case.
  • Let us see an example of the HTML I have used:
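A hedged sketch of the structure (the class names apart from mfp-iframe, and the file paths and URL, are illustrative; only the first item is a video, followed by four images):

```
<div class="gallery-section">
  <div class="tab-content">
    <div class="tab1">
      <!-- The first item is a video; it gets the extra mfp-iframe class. -->
      <a class="mfp-iframe" href="https://www.youtube.com/watch?v=XXXXXXXXXXX">Tab video</a>
      <!-- The remaining four items are images. -->
      <a href="image-1.jpg"><img src="thumb-1.jpg" alt="" /></a>
      <a href="image-2.jpg"><img src="thumb-2.jpg" alt="" /></a>
      <a href="image-3.jpg"><img src="thumb-3.jpg" alt="" /></a>
      <a href="image-4.jpg"><img src="thumb-4.jpg" alt="" /></a>
    </div>
  </div>
</div>
```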





  • Now comes the fun part! All of the above are images, except for the first one, which is a video. For that to work properly, you simply need to add the class “mfp-iframe” to the respective video’s “a” tag.
  • Here I have 1 video and 4 images. That is a total of 5 elements. So when you cycle through these, below you will be able to see that the total count is shown as 5.
  • For sections where you may have multiple tabs, you need to repeat the js
$('.gallery-section .tab1').magnificPopup({
  delegate: 'a',
  type: 'image',
  gallery: {
    enabled: true
  }
});

For each of the tabs, using their corresponding ids respectively. Otherwise it will take the total count of all the elements “for that particular section” and “not that particular tab”, and cycle through, say, 100 elements (the total number of elements that you may have in the entire section) instead of the 5 elements in that particular tab.

I am not providing the CSS as that is subjective to how your section looks. Enjoy!

Jan 05 2021
Jan 05

10 Years! We're kicking off our 10th year of Drupal Career Online - the longest-running long-form Drupal training program in existence. To help mark the occasion, we thought it would be fun to share some of the things we've seen over the past 10 years that our students (both DCO and private training clients) have shared with us that made us think, "yeah, you really should enroll in Drupal Career Online..."

  1. Not using Composer yet - this is more of a recent (Drupal 8+) development, but we're still surprised when we see folks not using Composer to manage their Drupal 8 codebase. The DCO teaches best practices for using Composer and the drupal/recommended-project core Composer template.
  2. Using the "Full HTML" text format for everything everywhere - it is just plain scary when we see this, as it usually indicates a lack of understanding of both Drupal core text formats and basic security practices. The DCO provides both instructor-led and independent-study lessons on text formats.
  3. Relying on a single layout tool - in Drupal 8+, there are multiple ways to layout a page. This includes block placement, custom templates, Panels, Paragraphs, and Layout Builder. Not understanding the strengths and weaknesses of each of the more widely used solutions can lead to "everything looks like a nail, so I'll use a hammer everywhere" solution, which can result in a poor implementation. The DCO covers the basics of each of these layout techniques.
  4. Fear of Drupal versions greater than 7 - "the drop is always moving” – Drupal is continually evolving (and so is the DCO!). Embracing emerging versions of Drupal, like 8+, keeps you current, makes you more employable and introduces you to modern web development techniques.
  5. Modules are enabled and you have no idea why - one of the primary skills the DCO teaches is how to find answers, mainly by helping you create and grow your Drupal network. From classmates and the active DrupalEasy learning community to community mentors and online Drupal etiquette, we show you how and where to efficiently find answers.
  6. Your site always has errors on the Status Report page - the DCO's "site maintenance" lesson begins with the Status Report page. We provide a step-by-step approach to troubleshooting Status Report (and other) issues that may appear on sites you maintain.
  7. Your available updates page has more red than green - updating modules can be scary. Git, Composer, database updates, and testing methodologies can sometimes make the seemingly simple task of updating a module arduous. Maybe you're the type that "updates all the things at once" and then crosses your fingers and hopes everything works. The DCO provides a step-by-step methodology for updating both Drupal core and contributed modules.
  8. Your site has one content type that is used for everything (aka, "I have no idea what entities, bundles, and fields are") - this is often a red flag that the site's information architecture (IA) isn't quite what it should be. Our site-building lessons include a healthy dose of IA, focusing on Drupal core entities, bundles and fields and how to efficiently map an organization's data to Drupal.
  9. Pathauto isn't installed nor enabled - maybe you're not the type to get up every morning and scour Twitter for the latest Drupal news. Luckily, we are, and much of the best-practice-y stuff we find goes directly into Drupal Career Online. We'll talk about contributed modules that most sites should absolutely be using.
  10. You have no idea what cron is (or if it is running) - when we perform site audits, this is normally one of the first things we look for on the Status Report page. The DCO covers this and other topics focused on Drupal best practices. 

If you're reading this and it is hitting close to home, consider joining us at one of our upcoming Taste of Drupal webinars, where we'll spend an hour talking about and answering questions on the next semester of Drupal Career Online.
 

Jan 05 2021
Jan 05

Marketers are constantly on the lookout for ways to boost website engagement and increase interactivity. Various third-party integrations and tracking tools are tried and tested to see what works best for the business. But over time, keeping track of all these snippets/tags/integrations that are hardcoded on the website can get tedious and messy. And that’s where Google Tag Manager comes in. In this article, you will learn how to integrate Google Tag Manager with your Drupal 8 website.

Integrating Google Tag Manager with Drupal 8

What is Google Tag Manager and how can it help?

Google Tag Manager is like a toolkit. It has all the tools you need meticulously organized in one place. Tools like Google Analytics, Adwords, personalization tools, A/B testing tools, Remarketing, Native advertising pixels, and much more. All the integration tags can be stored in Google Tag Manager for better access and management. How can it help, you ask?

  • Marketers find it greatly beneficial as they don’t have to depend on developers to add and modify integration tags on the website. They can easily do it themselves.
  • Better organization of tags can help marketers access and manage their integrations easily.
  • Updating tags doesn’t require you to change them on multiple web pages.
  • Test if your tags get triggered on any event/action in the preview mode.
  • You can even check for formatting or security issues before deploying it to live.
  • Provides an extra layer of data security.

Integrating Google Tag Manager in Drupal 8

Drupal 8 integrates seamlessly with Google Tag Manager and installing it is also a breeze. This module is also compatible with Drupal 9! Now that we know how useful Google Tag Manager is, let’s move on to integrating it with your Drupal 8 website.

STEP 1 – Install the module

You can download the Google Tag Manager module here.

Or install it with Composer:

composer require 'drupal/google_tag:^1.4'

Install the GTM module

STEP 2 – Configure the Module

In the admin toolbar, go to Configuration -> System -> Google Tag Manager.

Configure the GTM module


STEP 3 – Open Container Page

Once you click on Google Tag Manager, you will see a container page like this –

Open container page


STEP 4 – Add a Container

Click on the Add container button. This is where you will add the Container ID that you created previously when you signed up with GTM. If you haven’t done so already, go here to sign up and create a container ID for yourself (shown in the steps below).

Add a container


STEP 5 – Create a Container

Under Account setup, give the Account name and your Country name.

Under Container setup, give the Container name and select the target platform as per your requirements.

Click on the Create button.

Google Tag Manager-Create n account


Step 6 – Get the Container ID

Once done, you will see a popup screen displaying the code snippet that needs to be pasted into the head section. Look for the Container ID, an alphanumeric string that begins with “GTM”. Here it is “GTM-MXQN9XL”. Copy this code to your clipboard.

Tag Manager Code - get the container ID


Step 7 – Insert the Container ID

Head back to your Drupal setup where you had to add the container ID. Give a name for your container in the Label field and paste the previously copied container ID in the Container ID field. Save the configuration.

Insert container ID


Step 8 – And we’re all set!

To verify if the installation of Google tag manager has been successful in your Drupal site, go to the home page of your website and do an “Inspect Element”. If it has been installed properly, you should be able to see it within the <head> tag.
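If everything is wired up correctly, you should find a snippet along these lines inside the <head> tag; the container ID is the one configured above, though the exact minified code Google emits may vary over time:

```
<!-- Google Tag Manager -->
<script>(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':
new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],
j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=
'https://www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);
})(window,document,'script','dataLayer','GTM-MXQN9XL');</script>
<!-- End Google Tag Manager -->
```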

Check if GTM module is properly installed
Jan 05 2021
Jan 05

In every software development project, we have to make estimates - how long will the project take? How much will it cost?

The truth is, though, that this is not something we’re taught in college, nor in our coding bootcamps, YouTube tutorials or Stack Overflow threads. And once we have to do it, we quickly realize the key thing about it: making accurate estimates is extremely hard.

Let me illustrate the point with a non-development example. Let’s say you’re buying food and drinks for a skiing trip with friends. This is the information you have about the trip:

  • There will be 6 - 10 people.
  • The trip will last 3 - 5 days.
  • People have different food preferences - you need to be innovative and think outside the box.
  • Buy some drinks, but not too many. And don’t forget about tea.
  • Make sure to get the best possible quality for the lowest price.
  • Before you go to the store, tell me the final bill and how much time you’ll be spending in the store.

Seems quite complicated, right? Well, software development projects, depending on their size, often have significantly more moving parts and changes down the road than a skiing trip.

In this blog post, I’ll go through the main reasons why making estimates is so difficult in software development and provide you with essential tips to help you make as accurate estimates as possible.

Why is making estimates so hard?

Front-end and back-end developers have different reasons for having trouble with making accurate estimates.

Front-end developers

  • They only receive wireframes for the design.
  • The designs look easy, but there are no descriptions of animations.
  • They get no specifications for different devices and screen sizes.
  • The client doesn't actually know what they want until they see it.
  • The client assumes that something is easy to achieve if they’ve seen it on another site.

Back-end developers

  • The task description isn’t detailed enough.
  • You’re new to the project, so you’re either missing the big picture or not understanding the overall business logic.
  • The project uses 3rd-party services you aren’t familiar with.
  • The project uses a technology you aren’t that comfortable with.
  • The project’s requirements change as it progresses (holds true for front-end developers as well).

Additionally, and this also holds true for both front- and back-end developers, we often have idealistic views of our own capabilities, which can pose problems when making estimates and in particular following our estimates.

Finally, another underlying reason for making inaccurate estimates has nothing to do with the project, your level of experience or the technologies used. It’s actually an essential human trait: a strong desire to please other people by telling them what they want to hear. 

When making estimates, this desire often translates into providing very optimistic estimates in terms of both time and budget, which more often than not don’t stand up to the test of scope changes, back and forths with the clients, and any other unexpected disruptions. 

Key tips for making estimates

So, with all this in mind, let me reiterate: making estimates is hard! Your best bet is to lean on past experience, but you of course won’t have this option when you’re new to software development and making estimates. 

Here are some essential tips for getting through these initial hurdles and having the right mindset for making better estimates, together with examples of tasks to illustrate this.

1. Break down the work

It’s much easier to estimate smaller activities than larger ones. By breaking down the work into smaller tasks, you’ll get a clearer picture of all the requirements. Consequently, your estimate will probably be higher (read: more accurate), plus you’ll likely figure out that you have additional questions.

Requirement: Build the front page

What you shouldn’t do: “Looks pretty straightforward, a day should be more than enough.” 8 hours.

What you should do: “Let’s break this into smaller pieces:

  1. Create a “CTA” section with a slider and links. 4 hours.
  2. Create a “Latest news” section with a list of news and a link to each piece of content. 2 hours.
  3. Add a “Solutions” section which has an animation for each solution icon on hover, which will need an SVG animation. 8 hours.
  4. Add a “Contact us” form at the bottom of the page. 4 hours.

Total: 18 hours.

As we can see, the estimate is totally different if we break down the task into smaller tasks and thus get a better understanding of all the requirements. 

2. Ask questions, don’t assume

Task descriptions that you’ll get will often be missing vital information. You might understand something differently than the client, and even if you are on the same page, certain things can be done in multiple ways. 

Assuming without asking questions would be almost like playing a version of the telephone game, where the final result is unlikely to reflect the client’s initial need.

Requirement: Each page needs to have a breadcrumb

What you shouldn’t do: “Oh, nice, this is an out-of-the-box feature of the CMS.” 2 hours.

What you should do: ask questions to get as much extra information as possible. Do we follow the menu? Is there a pattern for the breadcrumbs? If so, are there any special pages that won’t follow this pattern? If yes, which ones are they?

By asking questions and getting deeper into the problem, you’ll very likely produce a much higher estimate than by just assuming - but the work will be that much more likely to meet the client’s needs.

3. Propose adjusting the requirement

Maybe there’s an existing solution to the problem at hand, or it could even be a core feature of the technology you’re using. You can spend (or even waste) a lot of time working on a feature that you don’t feel is vital to the project.

By proposing to the client to adjust the requirement, it may open up the possibility for a better solution that even ends up being faster to build. If your solution ends up saving time and money, the client will definitely appreciate it.

Requirement: Every image upload has a mobile image upload (fall back to original if there’s no image uploaded for mobile)

What you shouldn’t do: “I’ll create two separate image upload fields, then write some logic for falling back to the desktop image if no mobile image is present. Should I just print both and hide one with CSS?” (This last one completely defeats the purpose, by the way.)

What you should do: ask the client whether the reason for this is just the size of the image, or whether they’re planning on uploading different visuals. Tell them about the responsive images feature your CMS might have, which makes use of the <picture> tag and allows you to upload once and select different styles for each breakpoint. It may turn out that this was exactly what the client wanted.

You may have noticed that point number two, asking questions, plays a major role here as well - you cannot propose changing a requirement if you don’t have a deep enough understanding of it. 

Changing a requirement doesn’t mean that the client won’t get the feature they want. On the contrary, it may lead to a solution that’s even better aligned with what they need.

4. Take your degree of confidence into account

Referring back to the reason for making inaccurate estimates that’s common to both front-end and back-end developers, you should take care not to overestimate your abilities or experience. 

Before making any estimates, ask yourself these questions:

  • How familiar am I with the situation?
  • Have I done something similar before or is it something totally new to me?
  • Do I have all the necessary information to complete the task?
  • Am I new to the project or have I worked on it before?

Requirement: We want our users to be able to log in with their Twitter account

What you shouldn’t do: “I just made a Google account login a month ago, it’s probably more or less the same.” 8 hours.

What you should do: “Ok, I’ve made a Google login, but I’ve never worked with a Twitter API. My degree of confidence is low, so I’ll take 8h as a base, then multiply that by at least 2.” 16+ hours.

5. Use a sequence

Development teams usually agree on a particular sequence for their estimates, e.g. the Fibonacci sequence (1, 2, 3, 5, 8, 13, 21) or a Fibonacci-like sequence (0.5, 1, 2, 3, 5, 8, 13, 20, 40, 100).

In agile software development, estimates are typically done with story points rather than actual hours. The two story point sequences most frequently used are 1, 2, 4, 8, 16 and 1, 2, 5, 10, 20.

It’s key for all developers working on the same sprint or project to stick with the selected sequence for all their tasks. For example, when you’re using the Fibonacci sequence and know that a task will take longer than 5 hours to complete, choose the next number in the sequence (in this case 8) rather than the next natural number (6).
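The rounding rule above can be sketched as a small helper (the function name and the null "break it down" signal are my own illustration, not part of any standard estimation tool):

```javascript
// Round a raw estimate up to the next value in the agreed sequence.
// Returns null when the estimate exceeds the largest sequence value,
// signalling that the task should be broken down instead (see tip 1).
function roundToSequence(hours, sequence) {
  for (var i = 0; i < sequence.length; i++) {
    if (hours <= sequence[i]) {
      return sequence[i];
    }
  }
  return null;
}

var fibonacci = [1, 2, 3, 5, 8, 13, 21];
console.log(roundToSequence(6, fibonacci));  // 8, not 6
console.log(roundToSequence(30, fibonacci)); // null: break the task down
```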

6. Set a maximum number of hours per task

You (and your team) should set the highest limit for a single task. As we just saw, no task should take longer than 16 hours, or 20 in a Fibonacci-like sequence, for example. If it’s bigger, break it down into smaller parts, as we covered under point 1.

7. Don’t forget about the things we tend to forget about

Finally, we need to mention all the things besides the actual development that you’ll need to factor into your estimates - but which a lot of developers tend to forget about when making them:

  • Communication, meetings
  • Initial setup
  • Revisions
  • Testing
  • Bug-fixing
  • Deployment

These are all key and inevitable parts of any software development project you’ll be working on, so make sure to have this in mind during initial planning and estimating.
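Purely as an illustration, with made-up overhead percentages (assumptions for the sketch, not Agiledrop's actual figures), folding those activities into an estimate might look like this:

```javascript
// Pad a raw development estimate with the non-development activities
// listed above, each expressed as an assumed fraction of the dev time.
function paddedEstimate(devHours) {
  var overhead = {
    communication: 0.10, // meetings, emails
    setup: 0.05,
    revisions: 0.10,
    testing: 0.15,
    bugFixing: 0.10,
    deployment: 0.05
  };
  var factor = 1;
  for (var key in overhead) {
    factor += overhead[key];
  }
  return Math.round(devHours * factor);
}

console.log(paddedEstimate(18)); // 28: the 18-hour front page from tip 1
```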

How we approach it at Agiledrop

Most of what I’ve discussed in this article is based on the way we’re used to doing things at Agiledrop. We’ve been following these guidelines ourselves and have really improved the accuracy of our estimates over time by doing so.

One thing we do that I believe has really contributed to this is the fact that several different developers often go through the estimates for the same task, which gives us an even better understanding of the task and its potential pain points, as well as helps us be even more accurate in the future.

Conclusion

Whiteboard with multicolored post-it notes

So, to recap, these are the key things to do when making estimates in software development:

  1. Break down the work into smaller tasks
  2. Don’t assume without asking questions
  3. Propose adjusting the requirement
  4. Factor in your degree of confidence
  5. Agree on a sequence for estimating
  6. Set a maximum time limit for each task
  7. Don’t forget about the extra things

I hope these tips will help you in all your development estimates. All of this may take a bit of time getting used to for those who are just getting started with client projects, but the more projects you do, the more intuitive it will all become, and soon you will start feeling much more confident in making accurate estimates.

Jan 04 2021
Jan 04

Lynette has been part of the Drupal community since Drupalcon Brussels in 2006. She comes from a technical support background, from front-line to developer liaison, giving her a strong understanding of the user experience. She took the next step by writing the majority of Drupal's Building Blocks, focused on some of the most popular Drupal modules at the time. From there, she moved on to working as a professional technical writer, spending seven years at Acquia, working with nearly every product offering. As a writer, her mantra is "Make your documentation so good your users never need to call you."

Lynette lives in San Jose, California where she is a knitter, occasionally a brewer, a newly-minted 3D printing enthusiast, and has too many other hobbies. She also homeschools her two children, and has three house cats, two porch cats, and two rabbits.

Jan 04 2021
Jan 04

Note: This post is written with a Drupal context, but applies to any PHP project.

This is a test that I wrote recently, which uses the camel case method name that is recommended by the Drupal and PSR-2 coding standards:

public function testThatPathAliasesAreNotTransferredToTheNewLanguageWhenOneIsAdded(): void {
  // ...
}

It has a long method name that describes the test that is being run. However, it's quite hard to read. Generally, I prefer to write tests like this, using the @test annotation (so that I can remove the test prefix) and snake case method names:

/** @test */
public function path_aliases_are_not_transferred_to_the_new_language_when_one_is_added(): void {
  // ...
}

This to me is a lot easier to read, particularly for long and descriptive test method names, and is commonly used within parts of the PHP community.

This approach, however, can result in some errors from PHPCS:

  • The open comment tag must be the only content on the line
  • Public method name "DefinedLanguageNodeTest::path_aliases_are_not_transferred_to_the_new_language_when_one_is_added" is not in lowerCamel format

We can avoid the errors by excluding the files when running PHPCS, or modifying rules within phpcs.xml (or phpcs.xml.dist) file to change the severity value for the rules. These approaches would mean either ignoring all PHPCS sniffs within the test files or ignoring some checks within all files, neither of which is an ideal approach.

Ignoring whole or partial files

We can tell PHPCS to ignore whole or partial files by adding comments - there's an example of this at the top of default.settings.php file:

// @codingStandardsIgnoreFile

The @codingStandards syntax, however, is deprecated and will be removed in PHP_CodeSniffer version 4.0. The new syntax to do this is:

// phpcs:ignoreFile

As well as phpcs:ignoreFile which ignores all of the sniffs in an entire file, there are also commands to disable and re-enable PHPCS at different points within the same file:

// Stop PHPCS checking.
// phpcs:disable

// Start PHPCS checking.
// phpcs:enable

Disabling specific rules in a file

As well as excluding a section of code from checks, with phpcs:disable you can also specify a comma-separated list of sniffs to ignore. For example:

// phpcs:disable Drupal.Commenting.DocComment, Drupal.NamingConventions.ValidFunctionName

By adding this to the top of the test class, these specific sniffs will be ignored so no errors will be reported, and any other sniffs will continue to work as normal.

If you're unsure of the names of the sniffs that you want to ignore, add -s to the PHPCS command to include the sniff names in its output.

For more information on ignoring files, folders, part of files, and limiting results, see the Advanced Usage page for the PHP CodeSniffer project on GitHub.

You can also see this being used in some of the tests for this website.

Jan 04 2021

2020 has ended. (Finally!) So, is it time to rejoice? Is it time to be cheerful and optimistic? Or do we have to be realistic and concerned about the coming months? We have this proclivity to celebrate and be mirthful when the New Year begins. But the fact is that Covid-19 hasn’t gone away and has left an indelible mark. We keep hearing about new Covid strains and how the virus grows more dangerous as the days go by. Anyway, the good news is that the coronavirus vaccine is already here and multiple countries are fast-tracking their approval processes.

Beach in background and 'Drupal 2020 Year in review' written at the centre


With our strong resolve to tackle it, we have found ways to fight the gloom. 2021 has only just begun and will bring new challenges, like every year does. No matter how difficult it gets, we will dodge everything that comes our way this year too and come out on top together. The Drupal fraternity has risen to the occasion by uniting and becoming a force to reckon with. The Covid-19 pandemic has only made the Drupal Community come even closer and work together. The power of the open source community was discernible as the Drupal project, though affected by the pandemic, witnessed a great deal of growth in 2020. As the year has gone by, it’s time to look back, analyse what worked and what didn’t, and look forward to 2021 with hope.

19 years old: Almost two decades and still counting!

Drupal celebrated nineteen years of existence, having first been released on 15 January 2001. It all began in 2000, when Dries Buytaert decided to put online an internal site used by a small group of people for socialising.

Snapshot of a tweet with a person's image on top left, textual content below it about Drupal's nineteenth (19th) birthday and an image with blue background and number 19 below it


So, what began as a hobby project is now a global project with hundreds of thousands of users, active contributors and a strong ecosystem. 15 January 2021 will mark two decades since Drupal came into being. With the Drupal Community’s continued efforts to innovate, reinvent and evolve, Drupal has thrived and will continue to do so for many years to come.

#DrupalCares: Weathered the ‘Covid-19’ headwinds

While the Drupal Community was celebrating 19 years of Drupal, a new viral disease from Wuhan was rapidly becoming an international cause for concern. Any review of 2020 would therefore be incomplete without mention of the coronavirus pandemic and its impact on the Drupal world. Without a doubt, these were unprecedented times and no one saw them coming. But open source, with a reputation for being recession-proof, got itself back on track with the support of millions of supporters. Drupal was no different.

A drop shaped icon with hearts surrounding it and Drupal Cares written below it | Source: Drupal.org

The Drupal Association, which was formed as an open source non-profit to help grow and sustain the Drupal Community, was hit financially. But the massive response to the #DrupalCares fundraising campaign only showed the power of the open source community and of the open source model that makes projects like Drupal the best possible investment in these uncertain times. Over $500,000 was raised in just about 30 days, surpassing the expectations initially set (the plan had been to raise that amount in 60 days). Hundreds of businesses and organisations, along with thousands of individual donors and members, donated to reach the goal in record time.



DrupalCon: The year of virtual events

Ever since the first gathering of Drupal contributors way back in 2005, where just about 50 people made an appearance, DrupalCon has become a global event that is highly anticipated every year. While it is the ideal platform for Drupalists from around the globe to meet in one place and share ideas, DrupalCon Minneapolis 2020 was very unlikely to happen, with nations closing their borders and putting their countries under lockdown to try to curb the ever-multiplying infection rate.

Collage of Screenshots from Drupal 2020 international events like DrupalCon Global 2020 and DrupalCon Europe 2020 homepages with an image of earth on top and buildings at bottom


Eventually, virtual events started kicking off everywhere. For example, the likes of Cannes, Sundance, Sarajevo, Berlin, Rotterdam, Toronto, Tribeca, Locarno and Mumbai, among others, joined hands for the first time to screen a one-of-its-kind free, global, virtual film festival called We Are One. The Drupal Community, too, had its share of virtual outings in its itinerary of Drupal 2020 international conferences, holding its first ever virtual DrupalCon Global 2020. Going by its success, DrupalCon Europe 2020 followed months later. Unlike 2019, when large gatherings and in-person meetings were the norm, 2020 became the year of everything virtual. Check out Drupal 2019: Year in review to know more.

Drupal 9: The most ambitious release ever!



In spite of the pandemic, the Drupal 9 release roadmap stayed right on the money thanks to the continued efforts of the Drupal Community. From the beta test process to the release of the first beta version, everything went as planned. Drupal 9 was released on 3 June 2020, born with much more usability, accessibility, inclusivity, flexibility and scalability than previous versions. After all, it was in the works for almost 5 years!

And the release of Drupal 9 was not short on fanfare either. While Drupal 8’s launch saw hundreds of release parties across six continents, celebrating in the time of Covid-19 was a different game altogether. For Drupal 9, the community created CelebrateDrupal.org, where people could post virtual events for others to join, upload photos of Drupal 9-themed food items, or simply add selfies and videos.

Snapshot of a tweet with a collage consisting of images of people and food items to show Drupal 9 celebration


The rebranding of Drupal had to happen along with this release, and so it did. The new brand represents the fluidity and modularity of Drupal and, more importantly, the Drupal Community’s value of coming together to build a greater whole.



Drupal 9 is a lot easier for marketers to leverage and offers developers a streamlined way to maintain and upgrade. With intuitive solutions for empowering business users, state-of-the-art features for reaching new digital channels and further enhancing content management processes, and easy upgrades that avoid the need to replatform, Drupal 9 is just what web professionals need in this age of an ever-evolving digital landscape.

While Drupal 8’s end-of-life date has been fixed for November 2021, Drupal 7’s has been moved to November 2022 in light of the impact of the pandemic. It simply felt like the right thing to do, and it aligned with Drupal’s values and principles, which call for building software that is safe and secure for everyone to use. This goes to show the Drupal Community’s never-ending commitment to caring for software almost a decade after its release. Read this ultimate guide to Drupal 9, burning questions about Drupal 9 and the key modules to start a Drupal 9 project to know more.

While Drupal 9.1 was released on 2 December 2020 with the new experimental Olivero frontend theme and several additions to the Claro administration theme, the plan for the Drupal 10 release is already under development. Drupal 10 is targeted for release around June 2022.


DrupalCon Global 2020 had plenty of insights to share vis-à-vis Drupal 10, such as the Drupal 10 readiness initiative, enabling Media Library, Layout Builder and Claro by default, the completion of the Olivero frontend theme, offering automated updates, and adding official Drupal JavaScript components to Drupal Core.

Screenshot of a video meeting with a person's image on top right explaining Drupal 9's and Drupal 10's vision in DrupalCon Global 2020 | Source: Dries Buytaert’s blog

Drupal businesses continued to grow despite pandemic

The Drupal Business Survey 2020 had interesting data points to showcase. Drupal agencies did great business without being adversely affected by the pandemic. Financially, Drupal businesses were well placed and thrived, with many asserting that results exceeded their pre-Covid forecasts.

A piechart with blue and red coloured separations explaining the state of Drupal during Covid-19 | Drupal businesses during the Covid-19 pandemic | Source: Drupal.org

The survey also showed that almost half of the Drupal companies that participated are confident that their business and ongoing situation will further improve in 2021, while 20% of the companies aren’t optimistic about the coming months. As far as industry segments are concerned, Education continues to be the sector with the most Drupal projects, followed by Charities & Non-Profit and Government and Public Administration.

Growth in terms of Drupal project pipeline, deal size and win rate

Graphical representations with multi-coloured regions to explain Drupal usage statistics | Drupal usage statistics | Source: Drupal.org

There has been steady growth in the adoption of Drupal 9 ever since its launch. The Drupal Business Survey 2020 also indicated that the Drupal project deal size grew more than the project pipeline or win rates, which in turn shows Drupal’s growth into the enterprise market.

Three piecharts with multi-coloured regions to explain Drupal project pipeline, deal size and win rates | Source: Drupal.org

When the survey sought answers as to why Drupal is chosen for web development projects, it was observed that clients who had worked with Drupal before more often than not decided to stick with it (this constituted 60% of the answers). This proved that they considered Drupal a viable option for solving their business problems. The recommendations of Drupal agencies, Drupal being open source software, its amazing flexibility and the robust security it provides also made clients want to go with Drupal.

But the survey also revealed some downsides that the Drupal Community would want to look at and act on. Drupal development being time-consuming, and thus expensive, was one of the issues it pointed out. Drupal being intricate, and potential clients not knowing much about it, also came into the picture.

An increase in Drupal contributions was observed

From Drupal Global Contribution Weekend (where people more or less close to Drupal and its community participate) to the Google Summer of Code (where student developers across the world get exposure to open source software development, including Drupal), contribution-focused events are given their due importance by open source communities like Drupal’s. Contribution is at the core of the Drupal project, and the project thrives because of it.

Graphical representations with multi-coloured regions to explain Drupal contribution credits | The number of people contributing to Drupal has increased over the years | Source: Drupal.org

Drupal.org’s contribution data for 2019-2020 shows a 20% increase in the number of contributions to the Drupal project compared to the previous period, mainly due to the Drupal 9 release. Although contributed modules received the majority of credits, there was an increase in contribution credits across all project types.

Bar graph with multi-coloured horizontal bars to explain Drupal contribution | Source: Dries Buytaert’s blog

Even though there was a decline in individual contributors, organisational contributors increased. Moreover, 20% of the contribution credits came from the top 30 contributors (the top 0.4%). Two-thirds of all contributions were sponsored, but volunteer contributions continue to remain a pillar of Drupal’s success. It was also seen that Drupal’s maintenance and innovation currently depend largely on traditional Drupal businesses, while larger, multi-platform agencies barely contribute to Drupal. Read about the perks of being an open source contributor and the different ways to contribute to Drupal to know more.

The continued support for diversity, inclusion and equity

The US was convulsed by the nationwide protests over the death of George Floyd, an African-American man, in police custody. This incident of racially-driven police brutality once again sparked violent protests and much-needed discussions.

The Drupal Association released a statement strongly “condemning racism, racist behaviour and all abuses of power.” It reiterated its support for diversity, inclusivity and equity across all facets. Drupal thrives because of contributions from diverse contributors, and that diversity is pivotal to its health and success. Drupal’s values and principles state that treating each other with dignity and respect is of utmost importance.

While the Drupal Association values diversity, it understands there are still plenty of things to do to create meaningful change within the Drupal Community too. Drupal.org’s contribution data for 2019-2020 showed interesting data on gender and geographical diversity. Only 10-11% of contributions came from people who did not identify themselves as men.

Bar graph with multi-coloured vertical bars to explain Drupal contribution | Source: Dries Buytaert’s blog

And, while Europe and North America continued to be the biggest contributors, Asia, South America and Africa remain big opportunities for Drupal. To know more, read about the relevance of the trio of Diversity, inclusion and Drupal.

Bar graph with multi-coloured vertical bars to explain Drupal contribution | Source: Dries Buytaert’s blog

Conclusion

2020 proved to be a tough yet exciting year for Drupal. It continued to grow and weathered all the storms that came its way. It made new progress and identified where it can improve in the coming years.

2021 has begun, but Covid-19 hasn’t gone yet. It’s going to be a while before we can expect absolute normalcy to return. Until then, we shall keep moving forward, continue contributing to Drupal and grow with it.

Happy New Year to all the Drupalists!

Jan 01 2021

A Drupal Primer for Marketers

  1. Key Drupal Concepts
  2. Permissions
  3. Installing a Drupal Contrib Module
  4. Helpful Browser Tips

Drupal is the content management system of choice for sophisticated enterprise websites because it was built from the ground up with the extensibility needed to optimize every node, every view, and every snippet of code for search engines. That is, of course, if you know how to configure it.

With many new additions to ease-of-use, functionality, and robustness, Drupal is the superior method for creating and marketing your website to the world.

We’ve had customers who have tripled their traffic within weeks of upgrading from another platform. Drupal has competitive advantages from site-wide optimizations like Schema or AMP that put clients on the cutting edge of SEO. The benefits: higher rankings, sooner, and more traffic.

Unlike previous versions, Drupal 8 began scheduled feature releases at six-month intervals, which means we don’t have to wait around for years when a new technology like responsive design, HTML5, or CSS3 comes along.

Because Drupal 8 depends on Symfony 3, and Symfony 3's end of life is November 2021, Drupal 8 will reach end of life and support in November 2021. But you can still take advantage of Drupal’s SEO abilities, which should port to your Drupal 9 site when you decide it’s time to upgrade.
 

Key Drupal Concepts

Let’s discuss a few key concepts that you need to know about if you’re new to the Drupal community. (Long-time Drupalers can skip to this section.)

The Drupal Community

Drupal is more than just software. It’s a community of people. Who makes up that community? It’s made up of the people who use Drupal. That’s you! Congratulations, you are now part of the Drupal community. Welcome!

The community is a club (scores of local meetups), it’s a group of companies (Acquia is just the biggest of many businesses in the Drupal community), and it’s an organization (the Drupal Association). But you can be involved without ever touching any of those entities.

Many people first get involved in Drupal by downloading the software and then, when help is needed or confusion arises, asking for assistance on Drupal.org. That’s a common way of getting to know the community. The more involved you become, the better time you’ll have using the software. It’s nice to use tools made by people you know.

"Drupal Core" and "Drupal Contrib"

Throughout this book, we refer to Core and Contrib. It’s important to understand the difference, so you know where to go for help if something isn’t working right.

All Drupal sites run a version of the Core Drupal project—Core for short. The extra contributed modules, contributed themes, and custom code that are installed are what make each project unique. Together, these contributed modules and themes are referred to as Contrib.

drupal core and contrib module differences

On your server, Core is in the /core directory. Everything else is Contrib or custom: you’ll see /libraries, /modules, /themes, /vendor, and a few other directories.
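As a rough sketch, a typical Drupal 8/9 docroot breaks down like this (simplified; the exact contents vary by site):

```
/core       - Drupal Core itself
/libraries  - front-end libraries used by Contrib and custom code
/modules    - Contrib and custom modules
/themes     - Contrib and custom themes
/vendor     - Composer-managed PHP dependencies
```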

To make it as simple as possible:

  • Core features are built into Drupal.
  • Contrib features are added-on to Drupal.
     

The Drupal community has created tens of thousands of Contrib modules. Every once in a while, a widely-used and well-written Contrib module is added to Core. This is one of the ways that Drupal Core gets new features. In fact, with the release of Drupal 8, several modules and functions that used to be Contrib are now included in Core. This means less installation, less code for you to update, and a more stable website.

A Warning About Contrib

The Drupal community develops contrib modules and themes. That means that anybody with a problem to solve (or ax to grind) can build a module and publish it on Drupal.org. Be careful when you decide to install community-contributed code on your Drupal site.

Near the bottom of the project page for a module, you’ll see something like this:

drupal module version example box

Notice there are different versions of the same module. The “7.x” and “8.x” tells you which version of Drupal it is compatible with. You’ll want to install the version that is compatible with your version of Drupal.

NOTE: As in the image above, you’ll see that the 8.x.* version will work with both Drupal 8 and 9, so keep an eye out for those.


Beta and Dev versions mean they are not ready for prime time. However, if you still need/want to use those particular versions, make sure you:

  • Back up your site before installing.
  • Do some extensive testing after installation to make sure nothing is broken and the module works nicely with all the other modules and code on your site.
     

WARNING: Install new modules on a development server and test them thoroughly before pushing them to the live site.

Permissions

You need permission in Drupal to use the modules called for in this book. Drupal is quite secure, and one of the ways it remains secure is with a robust, multi-layered permissions system. If you’re working with a developer, you’ll need to ask them to assign a role to you that has Administrator level permission.

Here’s a helpful email that you can send to your developer:

Dear < developer first name >,

My username on the < drupalWebsite > web server is < your username here >.

Please grant my account the “Administrator permissions” access. https://< yourDrupalSite.dev >/admin/people/permissions#module-user

I will be working with some new modules for SEO and I need to give myself permissions as I go.

< OPTIONAL > It may be a good idea to create a “Marketing User” role for this, but I’m open to your suggestions on the best way to grant me the access I need.

Thanks!

< your name >
Awesome Marketer
 

Installing a Drupal Contrib Module

Verbiage associated with installing and enabling modules can be confusing, even within the Drupal documentation. You can upload and install a module to your Drupal site, but the module will not be functional until it is enabled.

Log into your Drupal admin area and go to Manage > Extend. You’ll see a complete list of modules that are installed. However, some will have check marks next to their name, while others will only have an empty checkbox.

drupal module install

The modules with check marks next to them are enabled, while the ones without are installed but not functional. We do not recommend enabling modules unless you know what they are for and that they are necessary.

Also, while we recommend installing the modules discussed in the next sections, we recommend you enable them one by one and test your site each time before enabling any others. While this may seem tedious, enabling them all at once could result in some issues (some serious) and you won’t know which one is causing the problem.

Finally, if you are not able to enable a module (the checkbox is not clickable), expand the module description to see if there are any missing dependent modules that also need to be installed:

drupal module with missing requirements

With the above in mind, you can get the latest instructions for downloading, uploading, installing, and enabling modules directly from the Drupal.org website.

Helpful Browser Tips

While fairly easy, these items should be included in your skill set to help troubleshoot problems with on-page SEO.

How to View Source of a Webpage

Sometimes, we’ll instruct you to “view source”. It’s easy: most browsers provide a way to do this. Here’s how to find it:

  • Chrome: View > Developer > View Source
  • Firefox: Tools > Web Developer > Page Source
  • Edge: Tools > Developer > View Source
  • Safari: View the instructions here

How to use an Incognito Window

An incognito window is like a new browser. It doesn’t have any of the cache, cookies, login data, browsing history, etc. It’s a fast and easy way to see what a new visitor to your website will experience.

  • Chrome: File > New Incognito Window
  • Firefox: File > New Private Window
  • Edge: File > New InPrivate window
  • Safari: File > New Private Window
     
Jan 01 2021

Imagine that you have to integrate JavaScript code into your Drupal project… Where do you start? How do you do it? You look for information but don’t find anything “holistic”, something that goes from 0 to 100 and puts in context how the relationships between Drupal and JavaScript are structured. Well, this article was made for you (or for other people on your team that you want to introduce to this topic).

In this guide you will learn basic concepts of JavaScript, the terminology used in Drupal, and common functions, methods and mechanics to enrich your projects by making them run executable code on the client side, all through a combination of theory and practice, including some exercises that I have integrated.

Picture from Unsplash, user Magnus Engø.

Index of sections

1- Introduction
2- JavaScript and Drupal: basic concepts
3- How to include JavaScript code in Drupal

4- Just a little bit more of JavaScript in Drupal

5- Drupal and the old jQuery

6- Drupal Behaviors

7- JavaScript without JavaScript: #ajax, #states

8- Troubleshooting: Problems and Solutions

9- Links and reading resources

10- :wq!

Index of Exercises

Exercise 1: Creating a basic custom module
Exercise 2: Defining our new custom library
Exercise 3: Defining our initial JavaScript file
Exercise 4: Adding libraries to our Drupal custom module
Exercise 5: Passing values to the IIFE format
Exercise 6: Transferring values through drupalSettings
Exercise 7: Custom Visit Counter with JavaScript
Exercise 8: Changes based in jQuery
Exercise 9: Dialog Window from the global object Drupal
Exercise 10: Image Board from Unsplash using Drupal Behaviors

1- Introduction

Some time ago (around December 2019, but it seems a century has passed) I started writing what I thought would be a simple guide to integration between JavaScript and Drupal. A couple of months later, in February 2020, I had a tutorial of more than eleven thousand words written in Castilian (Spanish from Spain) that I published on my Medium profile.

What was initially going to be brief has become a kind of reference guide on JavaScript and Drupal and (as far as I know) is now part of the training resources shared in many companies in Spain and other Latin American countries. Here you can reach the original publication in Medium, the so-called JavaScript & Drupal 101 TUTORIAL HANDBOOK TOTAL MAX POWER 2000 (I swear I had a lot of fun thinking up the title).

Well, the fact is that since the publication, I received three basic types of feedback:

  • “Hey, this is wrong, you have to check it”
  • “We have people in the company from other countries, do you have it translated into English?”
  • “Thank you for not putting it behind the Medium payment wall”

So although my first intention was to move all this content to an open book format like git-book or something like that, I’ve actually grouped the first two together and I’m going to publish a review of the original post translated into English. As always, I hope it can be useful to someone.

In a complementary way, you can download all the code from the exercises grouped as a single Drupal custom module, available here: gitlab.com/davidjguru/javascript_custom_module. This works in Drupal 8 and Drupal 9.

DISCLAIMER: This guide is actually a manual for the integration of JavaScript code in Drupal-based projects, but only in the context of implementing Drupal modules. This is basically a backend issue. This guide does not contain information related to JavaScript frameworks (React, Angular, Vue) or about using Drupal as a headless, decoupled backend. Neither does it deal with Drupal theming issues, which it approaches only tangentially. This tutorial is only for people working with the Drupal backend.

There we go!

2- JavaScript and Drupal: basic concepts

If this is your first approach to the intersection between Drupal and JavaScript (it may even be your first approach to Drupal and its world), it’s worth reviewing this section beforehand, in which we share some terms and names that we will use throughout the tutorial.

This way you will know what we are talking about at any point in the manual and will be able to follow the cases, examples and exercises more easily.

  • Drupal: Our technological platform of reference in this context. Something halfway between a framework and a CMS, free software downloadable and installable from https://www.drupal.org. In this tutorial we’ll be travelling on Drupal’s shoulders, so it is good to know it.

  • Render Array: A key piece of Drupal used to “paint” things on screen. Render arrays are multidimensional arrays that must meet certain rules, using different properties to model the elements to be rendered. The elements we usually draw are described here: drupal.org/api/drupal/elements/9.2.x. Most of the connections between Drupal and JavaScript are made from Drupal’s render arrays, so it is highly recommended to know them and learn their declarative format.

  • JavaScript: A programming language so widespread that it is the basis of many fashionable frameworks, libraries and tools. Today it is executable on both the client and the server. In this context we will use so-called “Vanilla JavaScript”, that is, handcrafted code written outside JS frameworks. See this guide from Mozilla: mozilla.org/JavaScript/Guide. Although this is not an advanced JavaScript manual, we will use the language in several sections, so it’s great if you know it a little.

  • Immediately-Invoked Function Expressions (IIFE): Also called “self-executing” functions, this is a specific format for declaring JavaScript functions so they are executed as soon as they are defined. See flaviocopes.com/javascript-iife to understand this important concept better. In this article we integrate JavaScript into Drupal through this format, so it would be optimal if you at least understand the concept.

  • AJAX: Short for Asynchronous JavaScript + XML, a combination of technologies for making partial requests (lighter than complete requests) from the client to the server, resulting in speed and performance improvements. See more: developer.mozilla.org/Guide/AJAX. Although it is a complex and extensive topic, we will focus on the possibilities of implementing AJAX in Drupal.

  • DOM: The Document Object Model is the tree structure that represents all the HTML code used in the representation of the web page we are visiting. See: developer.mozilla.org/Glossary/DOM. In this guide we are going to perform modifications and operations on HTML elements, so we will learn how to make changes to the DOM from Drupal.

  • jQuery: A mythical JavaScript-based library meant to facilitate (theoretically) manipulation of the DOM. It still maintains a very extensive presence in Drupal, so we had better get along with it. See: developer.mozilla.org/Glossary/jQuery. We’re going to execute jQuery code in the Drupal context.
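To preview where we are heading, here is a minimal, self-contained sketch of the IIFE pattern used throughout this guide. Note that the `Drupal` and `drupalSettings` objects below are simplified stand-ins for the real globals Drupal exposes on a page, and `my_module` is a hypothetical settings key:

```javascript
// Simplified stand-ins for the globals a real Drupal page exposes.
const Drupal = { t: (str) => str }; // Drupal.t() handles translation.
const drupalSettings = { my_module: { greeting: 'Hello' } };

// The IIFE: defined and invoked in a single expression. The globals are
// passed in as arguments, so the code inside works with local references.
const message = (function (Drupal, settings) {
  'use strict';
  return Drupal.t(settings.my_module.greeting + ', Drupal!');
})(Drupal, drupalSettings);

console.log(message); // → Hello, Drupal!
```

In a real module this file would be attached via a library and would not define those globals itself; we build exactly that in the exercises below.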

3- How to include JavaScript code in Drupal

We will practice including JavaScript code in our project. To do this, we will create a new custom module and iterate on it, adding JavaScript-based functionality while we discuss the most important concepts in the following sections. For this, I recommend quickly creating a containerised test environment, using DDEV to deploy a Drupal installation on the fly. If you don’t know DDEV, you can follow my own guide published in Digital Ocean: How To Develop a Drupal 9 Website on Your Local Machine Using Docker and DDEV.

You can also deploy a lightweight Drupal installation using just your local PHP configuration and its built-in web server.

3.1- Setting up the scenario: creating a custom module

To begin with, let’s define the new custom module we will work with. I don’t know how much context you have with respect to Drupal, so I’ll note down a sequence of links you can read up on. You will need a Drupal deployment: perhaps a XAMPP environment with a web server and database and a Drupal site deployed and ready to use, or DDEV (as I recommended in the previous section).

Explaining how to create a custom module for Drupal is beyond the scope of this guide; the official documentation at drupal.org is a good place to read up on it.

Snippets

Exercise 1: Creating a basic custom module for testing

In case you already have a Drupal site available for testing (including use of Drupal Console), just type this from the console while being inside your project and Drupal Console will take care of creating the new module:

// Using Drupal Console with params.
drupal generate:module \
--module="Custom Module for JavaScript" \
--machine-name="javascript_custom_module" \
--module-path="modules/custom" \
--description="This is a custom generated module for JavaScript." \
--package="Custom" \
--module-file \
--no-interaction

If Drupal Console is not an option for you, you can use Drush, launching the command:

$ drush generate
# Or, if you're using DDEV:
$ ddev drush generate

And you’ll get a list of options, including:

 module:                                                                                    
   module-configuration-entity                       Generates configuration entity module  
   module-content-entity (content-entity)            Generates content entity module        
   module-file                                       Generates a module file                
   module-standard (module)                          Generates standard Drupal 8 module     

Then you can request the creation of a custom module with parameters, avoiding the interactive dialogue entirely:

$ drush gen module-standard --directory modules/custom --answers '{"name": "Custom Module for JavaScript", "machine_name": "javascript_custom_module", "description": "Custom Generated Module for JavaScript.", "package": "Custom", "dependencies": "", "install_file": "no", "libraries": "no", "permissions": "no", "event_subscriber": "no", "block_plugin": "no", "controller": "no", "settings_form": "no"}'

See an example here: Drupal 8 || 9 : Creating custom module using Drush generate.

You can also download this basic custom module, created for the examples, from my gitlab repository: gitlab.com/davidjguru/basic_custom_module, or clone the whole repository of custom modules: gitlab.com/davidjguru/drupal-custom-modules-examples.

This module is quite simple and basic, intended only for first steps in Drupal: when enabled, it just creates a new path /basic/custom with a Controller that builds its response as a Drupal render array, with a very simple HTML markup message. With this, we can start to test.

Basic Custom First Route in Drupal

We will now generate some content automatically for our exercises / test scenario. We can rename the custom module if we want, to particularize it a bit more (I’ll use the name javascript_custom_module to avoid confusion with other test modules). We will install and activate the module, and then generate a random set of comments in our platform.
To do this we’ll use the Drupal Devel module and its Devel Generate sub-module to create test content, which adds new commands and sub-commands to Drush. We’ll use Composer and Drush from inside the project folder, just by typing:

$ composer require drupal/devel
$ drush en devel devel_generate
$ drush genc 10 5 --types=article

With the instructions above we asked devel-generate to create ten nodes of type article, each with a set of between 0 and 5 comments. We now have ten initial nodes to build our initial exercise scenario:

Creating a new set of nodes with type article

Next, we will rework what this example Controller originally returned. Until now it was simply a text message; now we are going to add a table with the comments associated with the current user. To do this we will perform a database query using the database service, extract the returned values and process them through the table rendering system, filtering the query by the current user’s data obtained through the current_user service.

The controller class now builds that table (you can check the full source in the example repository linked earlier). Once the test module is enabled (using Drush, or Drupal Console if it works in your Drupal installation):

$ drush en -y javascript_custom_module
$ drupal moi javascript_custom_module 

This will generate the /javascript/custom path through the Controller and it will render on screen the following table:

Showing list of Comments in a table

With this step, we have already prepared the initial scenario and can move on to perform exercises directly with JavaScript.

Next!

3.2- The “library” concept

Working with both CSS and JS from Drupal 8 onwards has become standardised. In previous versions of Drupal you had to use specific functions to add CSS or JS resources. As I explained in this snippet: Drupal 8 || 9 : Altering HTML in headers from hooks, you had to use things like drupal_add_html_head() to add new HTML tags, drupal_add_js() to incorporate JavaScript or the drupal_add_css() function to add more style sheets.

3.2.1- Sequence for creating libraries

From Drupal 8, the sequence of inserting libraries has been standardised, and consists of fulfilling these three steps:

  • Create the CSS/JS files.
  • Define a library that includes these files.
  • Add this library to a typical Drupal render array.

But in this case, we are going to reverse steps 1 and 2: first we will see how to create the library and then we will talk about the JavaScript file itself, which could be a little more complex.

Exercise 2: Defining our new custom library

Let’s see… in our custom module, we’ll include a new file module_name.libraries.yml in order to describe the new dependencies; in our case study, we’ll create a new file called javascript_custom_module.libraries.yml filled with the following lines:

# Case 1: Basic library file with only JavaScript dependencies.
module_name.library_name:
  js:
    js/javascript_file_name.js: {}

# Example
js_hello_world_console:
  js:
    js/hello_world_console.js: {}

All the libraries will be declared, as a rule of style, in the same .libraries.yml file, where we will describe all the libraries we need in our project, grouped by function or use.

Here you can see several examples of definition of libraries for Drupal with some example models:

As we can see in the examples listed in the previous gist, there are different ways to declare libraries and even to add them externally. About the declaration of libraries, we can add a couple of curiosities that are nice to know:

3.2.2- Loading libraries in head

By default, all libraries tend to be loaded into the footer: in order to avoid operations on elements of the DOM (Document Object Model) that have not yet been loaded, JS files are included at the end of the DOM. If for some reason you need to load a library at the beginning, you can declare it explicitly using the parameter/value pair “header: true”:

js_library_for_header:
  header: true
  js:
    header.js: {}
    
js_library_for_footer:
  js:
    footer.js: {}

3.2.3- Libraries as external resources

We are looking at examples of creating our own custom libraries, but it’s also possible to declare, in the .libraries.yml file of our custom module, the use of an external library available via CDN or from an external repository.

It is possible to ask Drupal to use an external library in our project, as we can see in the example of backbone.js in the Drupal core: a library created by third parties, incorporated into Drupal and declared coherently with its external origin:

Libraries as external resources

By the way, in the same file core.libraries.yml you can see all the JavaScript resources declared by the Drupal core. Some of these resources will be used later in this guide. ;-)

In the former example about backbone.js in core, we saw that the library is ultimately served from a local environment, right? So… is it possible to load a library directly from a remote source? The official Drupal documentation says something like this:

“You might want to use JavaScript that is externally on a CDN (Content Delivery Network) to improve page loading speed. This can be done by declaring the library to be “external”. It is also a good idea to include some information about the external library in the definition.”

So we can do something like this:

angular.angularjs:
  remote: https://github.com/angular/angular.js
  version: 1.4.4
  license:
    name: MIT
    url: https://github.com/angular/angular.js/blob/master/LICENSE
    gpl-compatible: true
  js:
    https://ajax.googleapis.com/ajax/libs/angularjs/1.4.4/angular.min.js: { type: external, minified: true }

Quite interesting, right?

3.2.4- Libraries and dependencies

It is possible that within our JavaScript code, in our own .js file, we may need to use another third-party library for our functionality. In that case, we can declare libraries with dependencies following a basic vendor/resource or vendor/library scheme.

Let’s see an example in which we intend to use a hide/show effect. As such animations are available in the jQuery library and it’s integrated in Drupal (we will see it later), then instead of creating those functions we’ll declare the dependency and we will be able to use them:

js_library_hide_show:
  js:
    js/my_custom_javascript_library.js: {}
  dependencies:
    - core/jquery

In addition, there is a set of options that you can use as attributes to customize the use of your new CSS / JavaScript libraries. See: Drupal org Docs: Libraries options and details.
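As a quick sketch of a few of those options in use (the library and file names below are hypothetical; the option keys minified, preprocess, weight, type: external and media are taken from the core library options documentation):

```yaml
# Hypothetical library exercising several per-file options.
fancy_widget:
  js:
    # Skip aggregation/minification passes and load earlier than default.
    js/fancy_widget.js: { preprocess: false, weight: -5 }
    # An already-minified file served from a CDN.
    https://cdn.example.com/widget.min.js: { type: external, minified: true }
  css:
    theme:
      # CSS restricted to a media type.
      css/fancy_widget.css: { media: print }
  dependencies:
    - core/jquery
```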

3.3- The JavaScript file

The next step will be to define the JavaScript file that we declared as a resource within the new library above.

Exercise 3: Defining our initial JavaScript file

For that, we’ll create a /js folder and put inside it our new file hello_world_console.js, which contains our new library with a little action: just saying hello through the console:

(function () {
  'use strict';

  // Put here your custom JavaScript code.
  console.log ("Hello World");
})();

So the internal structure of our custom module for testing should look like this:

/javascript_custom_module
    /js
        javascript_file_name.js
    /src
        /Controller
            YourCustomExampleController.php
    javascript_custom_module.info.yml
    javascript_custom_module.routing.yml
    javascript_custom_module.libraries.yml

3.4- Adding JavaScript libraries

Now our goal is to link the new library, with its associated JavaScript .js file, to the context in which it should work. For that we will start with a base case and then add further likely cases, given that in Drupal it is possible to attach JavaScript libraries in various ways, depending on how we need to use them in our code.

But let’s see first the base case for our case: #attached.

3.4.1- Using the #attached property in Render Arrays

On one hand, we have the eternal Drupal render arrays, that is, arrays loaded with properties, values and parameters that we send to the Drupal rendering system, which transforms everything and ends up producing HTML renderable in a browser.

On the other hand, we have a property called “#attached” that offers us a set of already defined sub-properties that allow us to attach resources of different nature to any render array we are using (a controller response, a form build, etc):

  • Library -> $render_array['#attached']['library']
  • drupalSettings (from PHP to JavaScript) -> $render_array['#attached']['drupalSettings']
  • Http_Header -> $render_array['#attached']['http_header']
  • HTML Link in Head -> $render_array['#attached']['html_head_link']
  • HTML Head -> $render_array['#attached']['html_head']
  • Feed -> $render_array['#attached']['feed']
  • Placeholders -> $render_array['#attached']['placeholders']
  • HTML Response Placeholders -> $render_array['#attached']['html_response_attachment_placeholders']

We will come back to some of these cases in the following sections. For more info about the processing of attached resources, you can visit the official documentation in Drupal.org: public function HtmlResponseAttachmentsProcessor.

See some examples at:

Exercise 4: Adding libraries to our Drupal custom module

For now, we just need to go to the PHP class file (the Controller) and modify the render array that is returned at the end, including the #attached property with our new library:

// Path: javascript_custom_module/src/Controller/
// File: CommentsListController.php
// Function: gettingList()

// Before (line 42):
$final_array['welcome_message'] = [
  '#type' => 'item',
  '#markup' => $this->t('Hello World, I am just a text.'),
];

// Now (line 42): 
$final_array['welcome_message'] = [
  '#type' => 'item',
  '#markup' => $this->t('Hello World, I am just a text.'),
  '#attached' => [
    'library' => [
      'javascript_custom_module/js_hello_world_console',
    ],
  ],
];


Just after changing it, we will reinstall our custom module, clearing caches:

$ drush pmu javascript_custom_module
$ drush en -y javascript_custom_module
$ drush cr

// Drupal Console (include clearing cache)
$ drupal mou javascript_custom_module
$ drupal moi javascript_custom_module

Now, going to the declared route, we can see from the browser console the result of the execution of our first JavaScript code:

Loading JavaScript file in the custom module

We’ve made our first interaction with JavaScript in Drupal!
Well, now we are going to continue adding new JS cases, and then we will come back to this same initial case to continue iterating and looking at more and more available functionality.

Following this simple initial exercise, we can check the operation of basic JavaScript methods such as an alert window or a confirmation window through the integration of libraries using the #attached property:

Adding basic JavaScript functions to our custom code

3.4.2- Libraries in a TWIG template

To add libraries to a Twig template within our project, whether a custom template within our own module or a specific Twig template of the theme we are using, we will load them through the Twig attach_library() function, which lets us attach the library directly from the template:

{% block salute %}
  {% if salute_list is not empty %}
    {{ attach_library('custom_module_name/library_name') }}
    {{ parent() }}
  {% endif %}
{% endblock salute %}

But the truth is that this can cause rendering problems (the library may not arrive in time for the render cycle that is set in motion when “painting” a page) if it is added to the global template html.html.twig. This is a debate that has been going on for some time: https://www.drupal.org/node/2398331#comment-9745117, and it is also a subject of discussion with a view to changing the way libraries are loaded in future versions of Drupal: https://www.drupal.org/project/drupal/issues/3050386. So beware of the template you use it on, since it might not work, and pay attention to changes that may come in new versions of Drupal.

3.4.3- Global libraries for a Theme

To declare your library as a global dependency for your Theme or your custom module, just include it in the declarative file of the *.info.yml resource using the libraries property:

# resource.info.yml

libraries:
  - module/library

In any case and as in the previous section, there are discussions about the evolution of this and some measures that are supposed to be taken for future versions: https://www.drupal.org/node/1542344. The advice remains the same: Pay attention to possible changes.

3.4.4- Adding libraries from Hooks

It is also possible to add new custom libraries in our Drupal context, specifically before the time of rendering existing pages, through pre-processing hooks, such as hook_page_attachments(), which still maintains the already seen way of adding resources:

// Form: 
$attachments['#attached']['library'][] = 'module/library';

Using a basic scheme for use:

/**
 * Implements hook_page_attachments().
 */
function custom_page_attachments(array &$attachments) {

  $attachments['#attached']['library'][] = 'module/library';
}

Another option in hooks is the hook_preprocess_HOOK() function that according to its documentation, makes it easier for modules to preprocess theming variables for various elements. Let’s see a couple of examples:

/**
 * Implements hook_preprocess_HOOK() for menu.
 */
function theme_name_preprocess_menu(&$variables) {

  $variables['#attached']['library'][] = 'theme/library';
}

The execution of this previous hook will make Drupal go to menu.html.twig and perform the addition of the differentiated library. Furthermore, this resource can be used in a generic way (for example, for all pages):

/**
 * Implements hook_preprocess_HOOK() for page.
 */
function custom_theming_preprocess_page(&$variables) {
  
  $variables['#attached']['library'][] = 'module/library';
}

In this case it is recommended to specify cache metadata to facilitate the caching of the new change, specifically if the addition of the new library depends on conditions, for example:

/**
 * Implements hook_preprocess_HOOK() for page with conditions.
 */
function custom_theming_preprocess_page(&$variables) {

  $variables['page']['#cache']['contexts'][] = 'route';
  $route = "entity.node.preview";  
  
  if (\Drupal::routeMatch()->getRouteName() === $route) {
    $variables['#attached']['library'][] = 'module/library';
  }
}

And for more specific resources:

/**
 * Implements hook_preprocess_HOOK() for maintenance_page.
 */
function seven_preprocess_maintenance_page(&$variables) {

  $variables['#attached']['library'][] = 'theme/library';
}

4- Just a little bit more of JavaScript in Drupal

Let’s take a closer look at the rules of use and integration of JavaScript code in a Drupal project.

4.1- Structure and guidelines for IIFE

The first thing that should call our attention is the fact that the structure of the .js extension file that we have introduced in our project through the /js folder has the following structure:

(function () {
  'use strict';

  // Put here your custom JavaScript code.
  console.log ("Hello World");
})();

In Drupal, all our JavaScript code will be integrated within a closure function, a wrapper of the code based on the IIFE pattern, that is, the “Immediately Invoked Function Expression (IIFE)” model, a useful structure for three key reasons:

  1. First, it allows immediate execution (or self-execution).
  2. Second, it limits the scope of internal variables: they do not alter other JavaScript code present in the project.
  3. Third, the execution context of the IIFE is created and then automatically destroyed: it frees up memory space, and does so quickly.
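A minimal standalone sketch of the second point, scope isolation (plain JavaScript, no Drupal required):

```javascript
// Standalone sketch: the IIFE keeps its variables private.
var message = 'global';

(function () {
  'use strict';

  // This "message" only exists inside the IIFE.
  var message = 'local';
  console.log(message); // prints "local"
})();

// The outer variable was never touched by the IIFE.
console.log(message); // prints "global"
```

If the page is reloaded, the whole IIFE is requested and executed again from scratch, which is exactly the behaviour described in the third point.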

How is this achieved? Well I think we can understand the IIFE model in an intuitive way in four steps. Let’s see:

  1. We can create a function in JavaScript as normal:
function myFunction() {

  // Here your JavaScript code.
}
  2. This function may or may not have a name (being an anonymous function), but in the latter case it must be assigned to a variable:
// Function with name:
function myFunction(){ 

  // Here your JavaScript code. 

} -> Right

// Anonymous function assigned to a variable:
var myFunction = function() { 
  
  // Here your JavaScript code. 

} -> Right

// Anonymous function being not assigned to a variable:  
function() { 
  
  // Here your JavaScript code. 
  
} -> JavaScript error

So JavaScript does not allow us to execute the function, because after the keyword “function” it expects a name that it cannot find.

  3. This can be avoided by wrapping the anonymous function in parentheses (actually, just putting a unary operator in front of it would also work, but we adopt the parentheses consensus as a style guideline). This makes the JavaScript engine consider it an expression, or Function Expression (instead of a Function Statement, with a name):
(function() {
  
  // Here your JavaScript code.

})
  4. The function remains in memory but nobody is using it. How do we execute it? Well, we can use a final pair of parentheses to invoke it:
(function() { 

  // Here your JavaScript code. 

})()   -> It's only a guideline, since this also works:

(function() { 
  
  // Here your JavaScript code. 

}())  -> We've moved the invoking parentheses inside the expression.

In fact, if we pass parameters through the invoking parentheses, the function will handle them with complete normality. We will see an example later on through a small exercise (Ex. 5: Passing values to the IIFE format).

Besides, as it is an anonymous function, it can also be written as an “arrow function”:

(() => {

  // Here your JavaScript code.

})()

The latter are the forms that our JavaScript code can take in Drupal. Remember that, whatever style guideline we choose, we always need to comply with two fundamental rules:

  1. They are built in a compartmentalized way, without “contaminating” any global object, that is, the global execution space (that the variables only live inside their function, like a private code block).
  2. They are executed immediately, destroyed and cannot be executed again (if a page is reloaded, they are requested again).

4.2- Passing parameters in IIFE

We are going to make changes to the rendered HTML of our Drupal site through our custom module, for which we must first assign a custom selector to the element we want to modify.

Exercise 5: Passing values to the IIFE format

We start by going back to the controller class file and adding two new Drupal element rendering system properties: #prefix and #suffix which allow an HTML element to be framed within other HTML tags. In this case we want to add our own id to the element.

// Line 42.
$final_array['welcome_message'] = [
  '#type' => 'item',
  '#markup' => $this->t('Hello World, I am just a text.'),
  '#prefix' => '<div id="salute">',
  '#suffix' => '</div>',
  '#attached' => [
    'library' => [
      'javascript_custom_module/js_hello_world_console',
    ],
  ],
];

Next we create a new .js file (‘iife_salute_example.js’) with a function in IIFE format. We will pass this function a text string as a greeting for our users (‘Dear User’), declaring the input parameter in its definition (‘parameter’).

(function (parameter) {
  'use strict';

  // Get the HTML element by its ID.
  let element = document.getElementById("salute");
  console.log(element);

  // Add to the HTML the new string using the parameter.
  element.innerHTML += "Salute, " + parameter;

  // Creating and adding a line for the HTML element.
  var hr = document.createElement("hr");
  console.log(hr);
  element.prepend(hr);

})('Dear User');

We’ll introduce some changes with pure JavaScript, like appending text to the message of the HTML element, taking the value of the text string passed as a parameter. Then we also place a dividing line above the element, as a separator.

We added the new file to the library resources that we had already defined previously:

js_hello_world_console:
  js:
    js/hello_world_console.js: {}
    js/iife_salute_example.js: {}

And so, if we clear caches with drush cr and reload the /javascript/custom path in the browser, we will be able to see the new changes made using JavaScript:

Rendering custom changes in HTML using JavaScript

4.3- Passing values from PHP to JavaScript: drupalSettings

We have seen in the previous section how to pass values to that IIFE within the revision of the structure and operation of this JavaScript code format and now we are going to stop at a very particular construction that is available for us to make connections between our server executable code (PHP) and our client executable code (JavaScript) within Drupal: let’s talk about drupalSettings.

Let’s think about implementing a slightly more particular greeting for the user who visits our URL /javascript/custom. We want to extract data about the visitor’s identity in order to give them a more personal greeting. We can extract this information inside our Controller through the current_user service: api.drupal.org/core.services.yml/current_user/9.0.x, which offers methods to obtain it. We want to take this information into the code that runs on the client, so we will transfer it to JavaScript.

We were already including the current_user service in the Controller, between lines 24 and 29 of the source code:

 public static function create(ContainerInterface $container) {
    return new static(
      $container->get('current_user'),
      $container->get('database')
    );
  }

So you can use the service from the Controller through a class property, the so-called $this->current_user.

We can transfer it all through drupalSettings, a sub-property available for the property #attached, which is received on the JavaScript side through the drupalSettings object, which exposes the values as new properties. Let’s see it in the next exercise.

Exercise 6: Transferring values through drupalSettings

We will create a new JavaScript file for a more particular greeting, called hello_world_advanced.js. On the one hand, we’re extracting the information and adding the new library from the PHP side:

// We're adding the new resources to the same welcome element.
$final_array['welcome_message']['#attached']['library'][] = 'javascript_custom_module/js_hello_world_advanced';

$final_array['welcome_message']['#attached']['drupalSettings']['data']['name'] = $this->current_user->getDisplayName();

$final_array['welcome_message']['#attached']['drupalSettings']['data']['mail'] = $this->current_user->getEmail();

On the other hand, we’re getting the values from the JavaScript side:

(function () {
  'use strict';

  // Recovering the user name and mail from drupalSettings.
  let element = document.getElementById("salute");
  let user_name = drupalSettings.data.name;
  let user_mail = drupalSettings.data.mail;

  // Add to the HTML the new strings.
  element.innerHTML += "Update-> You are the user: " + user_name +
                       " with mail: " + user_mail;

})();

Now, adding the drupalSettings library (from the Drupal core) as a new dependency, we can start connecting variables between PHP and JavaScript. We will change our library definition file to define a new custom resource that uses this new dependency:

js_hello_world_advanced:
  js:
    js/hello_world_advanced.js: {}
  dependencies:
    - core/drupalSettings

So we can see the new values loaded, both in the rendered page and in the drupalSettings object itself, through the console (drupalSettings.data, remember):

Getting values from PHP to JavaScript using drupalSettings

Ready!

4.4- Changes in rendered HTML

We will use this section to functionally extend our custom module, implementing some simple and interesting features, in order to continue practicing with JavaScript in the context of Drupal and to standardize its use in our projects.

4.4.1- Counting visits using Web Storage

Let’s see… Do you know the concept of “Web Storage”? In short, it’s a small HTML API available in modern browsers to store information on the client side through two mechanisms: Session Storage (information kept only for the duration of the open page session) and Local Storage (information persisted until we explicitly remove it).

Read more about the web storage API at: developer.mozilla.org/Web_Storage_API

Here, for example, you can check the availability and capacity (usually around 5MB) of your web browser for web storage (Local and Session): http://dev-test.nemikor.com/web-storage/support-test/.

Exercise 7: Custom visit counter with JavaScript

In this step we will create a small and persistent visit counter to inform the user of the number of times he or she has loaded our custom /javascript/custom/ route.

First, we ask for the current values:

// Asking for the localStorage parameter value if exists.
let visit_value = localStorage.getItem('visit_number');
console.log("LocalStorage - current value: " + visit_value);

// Same but for the sessionStorage.
let session_value = sessionStorage.getItem('session_number');
console.log("SessionStorage - current value: " + session_value);

Then we check whether they are already created and initialized. Just a little intuitive game: if they are null, we create them with an initial value of one; if they already exist, we increment them and store the updated value. We take advantage of this to display them through the console:

// Testing the localStorage visit value.
if (visit_value === null) {

  // If null we'll create the initial value.
  localStorage.setItem('visit_number', 1);
  console.log("LocalStorage: " + localStorage.getItem('visit_number'));

} else {

  // If not null we'll increment the current value.
  localStorage.setItem('visit_number', ++visit_value);
  console.log("LocalStorage: " + localStorage.getItem('visit_number'));
}

// Same for sessionStorage.
if(session_value === null) {

  // If null we'll create the initial value.
  sessionStorage.setItem('session_number', 1);
  console.log("Session: " + sessionStorage.getItem('session_number'));

}else {
  // If not null we'll increment the current value.
  sessionStorage.setItem('session_number', ++session_value);
  console.log("Session: " + sessionStorage.getItem('session_number'));
}

At the end, we take the opportunity to display the counter values in the HTML of the page:

// Add to the HTML the counter values.
element.innerHTML += "<br>" + "Total visits: " +
                     localStorage.getItem('visit_number');
element.innerHTML += "<br>" + "Total visits during this session: " +
                     sessionStorage.getItem('session_number');

And when the address is reloaded, it shows the registered values via the Web Storage API:

Showing values from WebStorage

Did you know about this little storage API? and what other ideas do you have that could be implemented using it?

5- Drupal and the old jQuery

According to its own mission:

“The purpose of jQuery is to make it much easier to use JavaScript on your website.”

And so it has been for many years. It is, in short, a JavaScript library created to offer a standardized way (or something like that) to interact with the elements of the Document Object Model (DOM) in the simplest and most direct way possible.

jQuery is -at the time of writing- fourteen years old since its first published version, and in extensive use throughout the websites published on the Internet. Without falling into technological holy wars, we will just assume that it is still present (for now) in Drupal development and that several versions and formats of jQuery are offered within the platform. We will see how to use it and how to relate to it in a (relatively) efficient way.

5.1- Fast review of the jQuery keys

As this article is not in itself a jQuery tutorial, and I’m afraid its length would otherwise exceed twelve thousand words, you will excuse me for not stopping too long here: jQuery deserves another manual of the same (or greater) length. So let’s give some context through some basic keys and move on. Pay attention.

Remember:

  1. In jQuery, $ is an alias for jQuery.

  2. Usually, jQuery code starts running when the document is fully loaded, through the instruction: $(document).ready(function () { ... });.

  3. jQuery offers thousands of ways to interact with HTML elements, through selectors based on the element id (#id), its CSS class (.class), HTML tag names (“div”), or attribute values (name = value). The list of options is endless, and it is convenient to have it somewhat practiced: https://api.jquery.com/category/selectors.

  4. With the management of its selectors, you will be able to make changes at several levels in your HTML: CSS styles, add/alter/remove elements, add visual effects, make callbacks and Ajax requests. For all this you will use jQuery (perhaps).

And don’t forget to consider jQuery’s recommendations for good use. See this set of guidelines, quite old but interesting: http://lab.abhinayrathore.com/jquery-standards.

5.2- Availability of jQuery in our Drupal version

From Drupal 8 onwards, the system for loading libraries and resources was changed, causing nothing (or almost nothing) to be loaded by default. This, among other things, implies that jQuery is not included on every page unless you request it as a dependency for your resource (a library dependency for your module or theme, declared as we have already seen).

At this moment, all the libraries related to jQuery are declared in advance, but they will only be loaded if you request them. These libraries can be located in the /core/core.libraries.yml file:

The jQuery dependencies marked in core

There, from line 350 of the file, you can see the list of jQuery libraries associated with Drupal's core. As you can see, many jQuery libraries are declared: some of them are meant to be explicitly requested as dependencies in custom resources (modules or themes), and others are for internal consumption, since Drupal sometimes uses jQuery plugins underneath to build elements like buttons, navigation tabs and other resources.

Here is a graph prepared in 2015 by Théodore Biadala, @nod_, about the extensive use Drupal makes of jQuery (a little outdated by now): http://read.theodoreb.net/viz-drupal-use-of-jquery.

5.3- Using a different version of jQuery

Let's suppose that, for some specific needs of the project, we want to use a different version of jQuery than the ones supported in our version of Drupal. What to do? (asked the wise man). Well, we can add it as a resource to our project without problems through the guidelines we already know:

jquery-custom:
  remote: https://github.com/jquery/jquery
  version: "2.2.4"
  license:
    name: MIT
    url: https://github.com/jquery/jquery/blob/2.2.4/LICENSE.txt
    gpl-compatible: true
  js:
    js/jquery-2.2.4.min.js: { minified: true, weight: -20 }

And then we can overwrite the dependency from its declaration in the file my_custom_resource.info.yml:

libraries-override:
  # Replace the entire library.
  core/jquery: my_custom_resource/jquery-custom
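Note that libraries-override also admits finer-grained replacements. As a hedged sketch (the core asset path shown below is the usual one in Drupal 8/9, but check it against your own /core/core.libraries.yml before relying on it), you can swap a single file instead of the whole library:

```yaml
libraries-override:
  # Replace only the JavaScript asset, keeping the rest of the definition.
  core/jquery:
    js:
      assets/vendor/jquery/jquery.min.js: js/jquery-2.2.4.min.js
```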

Exercise 8: Changes based in jQuery

We will perform a couple of exercises using jQuery in our custom module.

  1. Loading text Lorem Ipsum via AJAX

After the previous exercises with JavaScript, if we close all the windows we have open now, we will be left on our /javascript/custom route alone with our table of results showing comments associated with the current user, which was:

Showing the original list of comments

We will provide an introductory text to the page through the consumption of an external API that will provide us with Lorem Ipsum paragraphs. We will declare the new dependency in the usual *.libraries.yml file:

js_playing_with_jquery:
  js:
    js/playing_with_jquery.js: {}
  dependencies:
    - core/jquery

In this case we will try to load the new library through a hook of type hook_page_attachments() inside the file javascript_custom_module.module:

/**
 * Implements hook_page_attachments().
 */
function javascript_custom_module_page_attachments(array &$attachments) {

  // Getting the current route name.
  $route_name = \Drupal::routeMatch()->getRouteName();
  
  // Load the library only if match with the selected page by route.
  if (strcmp($route_name, 'javascript_custom.hello') == 0) {
    $attachments['#attached']['library'][] = 'javascript_custom_module/js_playing_with_jquery';
  }
}

And in the js/ folder we will create the new file playing_with_jquery.js, in which we will dump all our stuff.

Let's start by adding some introductory text to the page. In order to do this we'll make a request to the Bacon Ipsum web service through its API, for which we will use the jQuery function $.getJSON(), which handles three parameters: a URL address, some data to build the request, and a callback function executed if the request succeeds. It is a wrapper provided by jQuery to perform an HTTP GET request in JSON format: api.jquery/getJSON.

Let's see what we can do: first we will add a new HTML container for the texts (a <div> element), then we will make the request, getting the results and loading a new paragraph (a <p> tag) into the newly created container.

(function ($) {
  'use strict';

  $(document).ready(function(){

    console.log("The Playing with jQuery script has been loaded");

    $('#block-bartik-page-title').append("<div id='bacon-text'></div>");

    // Calling AJAX.
    $.getJSON('https://baconipsum.com/api/?callback=?',
      { 'type':'meat-and-filler', 'start-with-lorem':'1', 'paras':'4' },
      function(baconTexts) {
        if (baconTexts && baconTexts.length > 0) {
          $("#bacon-text").html('');
          for (var i = 0; i < baconTexts.length; i++) {
            $("#bacon-text").append('<p>' + baconTexts[i] + '</p>');
          }
          $("#bacon-text").show();
        }
      });
  });
})(jQuery);

But let's give it some movement thanks to the bizarr errrr…dynamic functions provided by jQuery. We are going to rethink this initial script a little to make a progressive loading of the Bacon Ipsum welcome paragraphs.

First of all, we will put a button. We’ve already stained the rendered page too much and we’re going to leave the view clean before playing with bacon:

// Creating the new elements: just a div and a button.
$('#block-bartik-page-title').append("<div id='bacon-text'></div>");

$('#block-bartik-page-title').append("<button type='button' id='getting-bacon'>Load more bacon</button>");

Next, we will add a click event to that button so that when it is pressed, it will start processing bacon:

// Adding a click event to the former button.
$('#getting-bacon').click(function () {
  
  // Processing bacon. 
  
});

In case we already have bacon loaded, we take care of cleaning the div:

// Hiding the block for the next load.
  $("#bacon-text").hide();

And we go ahead to process our bacon requests:

// Getting values in JSON format.
  $.getJSON('https://baconipsum.com/api/?callback=?',
    {'type': 'meat-and-filler', 'start-with-lorem': '1', 'paras': '4'},
    function (baconTexts) {

    // We're in the callback function for success in JSON request.
      if (baconTexts && baconTexts.length > 0) {

        $("#bacon-text").html('');

        // Loop into the received items.
        for (var i = 0; i < baconTexts.length; i++) {

          // Creating the naming keys.
          var bacon_id = "bacontext_" + i;
          var new_bacon = "<p id='" + bacon_id + "'>" + baconTexts[i] + "</p>";

          // Add the new element to the div mother.
          $("#bacon-text").append(new_bacon);
        }
      }
    });
To make the subject a bit more dynamic, we added one of jQuery's less poisono…emm…more discreet animations with a confirmation message and the .slideDown() function from jQuery, which vertically unfolds the content from top to bottom:

// Show the div mother show in a progressive way.
$("#bacon-text").slideDown(7000, function(){
  console.log("New bacon has been loaded");
});

And when you reload everything, you see the complete execution of all the JavaScript on the page:

Execution of the whole JavaScript code

Here you have the code formatted as a gist:

6- Drupal Behaviors

At this point in the guide, we already know how to integrate JavaScript in our modules and projects, how to create interactions, how to pass parameters between PHP (server) and JavaScript (client), and how to integrate jQuery in our dependencies. As a final step that should integrate all the above, we must talk about the concept of “Drupal Behaviors”.

What is a Behavior? It's the organized way that Drupal offers us to add and index JavaScript-based behaviors, through the extension of the Drupal.behaviors object, which is part of the global “Drupal” JavaScript object.

6.1- Anatomy of a Behavior

We will review the basic functional structure of the Behavior itself, as this format becomes the essential form of Drupal’s JavaScript integration and it is in our interest to know its parts first. Let’s see.

Anatomy of a Drupal behavior

Let’s have a look.

  1. namespace: A Drupal behavior has to have a specific and unique name in order to be located, identified, executed and removed. It will become part of the Behaviors object and will be indexed there. In this case it is simply named “namespace”.

  2. attach: This is the function to be executed as soon as the Behavior is loaded. When Behaviors are executed, Drupal iterates over all the indexed behaviors and calls the “attach” function of each one, each doing what it has to do.

  3. detach: As a counterpart to attach, a function is provided to be executed when the behavior is removed from the behavior registry.

  4. context: It's a variable where the piece of the page that is being transformed is loaded. On an initial page load it will be the complete DOM; in AJAX operations it will be the corresponding piece of HTML. This variable helps us fine-tune our operations, so we must be clear about how to handle it.

  5. settings: This variable we’re seeing in the screenshot is used to transfer values from the PHP code to JavaScript and make them available in the form we saw earlier from our code. To do this we must declare the core/drupalSettings as a dependency of our JavaScript library.

  6. trigger: The trigger variable that is passed to the function associated to detach represents the condition for the deactivation of the behavior, where some causes are admitted:

  • unload: This is the default reason, it means that the context element has been removed from the DOM.
  • serialize: For forms with AJAX, where this variable is used to send the form itself as context.
  • move: The element has been moved from its position in the DOM from its initial location.
  7. jQuery: In this case, this point just represents the passing of parameters to the IIFE, usually (jQuery, Drupal), as integrated dependencies available for our code.
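Putting the pieces above together, the skeleton of a behavior looks like this. The snippet simulates the registration with a minimal mock Drupal object (an assumption so it can run outside a Drupal page; in a real site the global Drupal object already exists and must not be redefined):

```javascript
// Mock of the global Drupal object (assumption: in a real page this
// object is provided by core's drupal.js and must not be redefined).
const Drupal = { behaviors: {} };
const log = [];

(function (Drupal) {
  'use strict';

  // 1. "namespace": a unique key inside Drupal.behaviors.
  Drupal.behaviors.myNamespace = {
    // 2. attach: runs on every (full or partial) DOM load.
    attach: function (context, settings) {
      log.push('attach on ' + context);
    },
    // 3. detach: runs when the behavior is removed; the trigger says why.
    detach: function (context, settings, trigger) {
      log.push('detach because ' + trigger);
    }
  };
})(Drupal);

// Simulating what Drupal does internally on page load and teardown.
Drupal.behaviors.myNamespace.attach('document', {});
Drupal.behaviors.myNamespace.detach('document', {}, 'unload');
console.log(log);
```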

6.2- The global object “Drupal”

As stated in the official Drupal documentation, Drupal.behaviors is an object that is by itself part of the global JavaScript object “Drupal”, created for the entire running Drupal instance. This was a concept already used and exploited in previous versions of Drupal, with some aspects remaining over time.

The main one: that the modules that want to implement JavaScript must do so by adding logic to the Drupal.behaviors Object. Let’s see how, and let’s know the basis of Behaviors: the global object “Drupal”.

If you know the concept of “Object” in JavaScript, you will know that it's an advanced way of handling data, and basically it consists of an unordered collection of related information: primitive data types, values in properties, methods… everything designed under a basic structure of key: value pairs.

// Basic example for a JavaScript Object.
    let drupal_event= { 
    name: 'Drupal Camp Spain 2020',
    location: 'Málaga',
    original_location: 'Barcelona', 
    established: '2010',
    displayInfo: function(){
         console.log(`${drupal_event.name} was established in
                      ${drupal_event.established} at
                      ${drupal_event.original_location}`); 
     }                                                                            
 }
 
 // Shows feedback by Console.
 drupal_event.displayInfo();

This object is perfectly executable in the JavaScript console of your browser, and will work as expected:

JavaScript Object example from Console

Read more about objects and properties in JavaScript: geeksforgeeks.org/objects-in-javascript/.

Objects in JavaScript can be browsed, modified, deleted and above all (for the reasons we are dealing with now), extended. This is exactly what will happen with our new friend, the global object “Drupal”, an existing resource -always- in any Drupal site installed from the drupal.js library present in the /core/misc/ path:

Main file drupal.js in core

In the previous image we see the file (a fundamental script in Drupal), which serves to centrally provide various JavaScript APIs in Drupal and a common namespace to group all the extensions that will be added to the global object. In fact, if you inspect the global Drupal object, you will be able to see the base content it brings:

Watching the content of the global object Drupal

Of all the previous list, it is perhaps Drupal.behaviors and its related methods (attachBehaviors, detachBehaviors) that are most important to us now, although the object exposes some other interesting utilities as well.

Well, we've already seen a little piece of theory to gain context…it's time to practice a little. Let's extend what we already know how to do with a new exercise:

Exercise 9: Dialog window from the global object “Drupal”

We will take the Drupal dialog API as a reference to build a window into our project through our custom module. To begin with, we are going to register a new library in our custom javascript_custom_module module, inside the javascript_custom_module_libraries.yml file, which will now look like this:

js_hello_world_console:
  js:
    js/iife_execution_example.js: {}
    js/hello_world_console.js: {}
    js/iife_salute_example.js: {}

js_hello_world_advanced:
  js:
    js/hello_world_advanced.js: {}
  dependencies:
    - core/drupalSettings

js_custom_dialog_window:
  js:
    js/custom_dialog_window.js: {}
  dependencies:
    - core/drupal
    - core/jquery
    - core/drupalSettings

Next we load the new library as #attached in our render array returned by the Controller, from line 55 in the file CommentsListController.php :

$final_array['welcome_message']['#attached']['library'][] = 'javascript_custom_module/js_custom_dialog_window';

And we’ll build a very basic modal window, based on pure JavaScript. This dialogue will only have a simple message and a button to interact, in which we will include a style change on the element containing the message.

Let’s see the new file custom_dialog_window.js :

(function () {
  'use strict';

  // Put here your custom JavaScript code.

  // First creating and initialising the new element.
  let new_tag = document.createElement("P");
  new_tag.setAttribute("id", "my_p");
  new_tag.innerHTML = "Hello World from a custom Dialog Window.";

  // Then we'll create a new modal window.
  Drupal.dialog(new_tag, {
    title: 'Custom Dialog Window',
    buttons: [{
      text: 'Change colour',
      click: function() {
        let change_colour = document.getElementById("my_p");
        change_colour.style.backgroundColor = "red";
      }
    }]
  }).showModal();

})();

You can review all the JavaScript associated with the global object “Drupal” thanks to the great documentation Théodore Biadala (@nod_) published years ago about the Drupal JavaScript API:

http://read.theodoreb.net/drupal-jsapi/index.html

6.3- Behaviors in Drupal

In a previous section, we already saw how to run jQuery in our code. We also know that it is important to check if the document (DOM) has already been fully loaded before starting to perform actions. Basically:

(function ($) {
  'use strict';
  $(document).ready(function() {
   // Put here your jQuery code. 
  });
})(jQuery)

But let’s think carefully about this execution: it will be performed when the DOM has been loaded completely (at an initial moment), but it will not make adjustments after a partial loading of the DOM (for example, after an AJAX execution that modifies only a portion of the DOM). We need another idea. See the next example:

(function ($, Drupal ) {
  'use strict';

  // Put here your custom JavaScript code.
  Drupal.behaviors.unsplash_connector = {
    attach: function (context, settings) {
          console.log("Loaded Unsplash Behavior");

    },

    detach: function (context, settings, trigger) {
    // JavaScript code.
    }
  }


})(jQuery, Drupal);

This code, when executed, will make several print calls in Console (in this case, up to three times):

Executing code from a Drupal behavior

Why is this? Well, as we can see using breakpoints in the JavaScript debugging console of our favorite browser, the loading of behaviors by the global Drupal object is done several times during the loading of a single page: in this case there is one “full” load of the DOM and several “partial” reloads through AJAX. In each case, behaviors are processed through the method:

Drupal.attachBehaviors (line 17, library drupal.js)

Which loads a function that runs through all the behaviours and executes them according to their context and parameters:

Looping over Drupal.behaviors
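That loop can be sketched in a simplified, self-contained way. This is a didactic mock, not the literal core implementation (which also wraps each call in error handling and once-tracking):

```javascript
// Didactic mock: a reduced version of the Drupal.attachBehaviors loop.
const Drupal = { behaviors: {} };
const visited = [];

// Two registered behaviors, as two different modules would add them.
Drupal.behaviors.first = { attach: (context) => visited.push('first:' + context) };
Drupal.behaviors.second = { attach: (context) => visited.push('second:' + context) };

// Simplified core loop: iterate every registered behavior and call attach.
Drupal.attachBehaviors = function (context, settings) {
  Object.keys(Drupal.behaviors).forEach(function (name) {
    const behavior = Drupal.behaviors[name];
    if (typeof behavior.attach === 'function') {
      behavior.attach(context, settings);
    }
  });
};

// One full DOM load plus one partial AJAX load -> each attach runs twice.
Drupal.attachBehaviors('document', {});
Drupal.attachBehaviors('ajax-fragment', {});
console.log(visited);
```

This is why the console message from the previous example appeared several times: every full or partial load triggers the whole loop again.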

The next step is to put some control on the execution of the instruction, moving it from an active mode (which writes to the console just on loading) to a reactive mode (which writes only when an interaction takes place):

(function ($, Drupal ) {
  'use strict';

  // Put here your custom JavaScript code.
  Drupal.behaviors.unsplash_connector = {
    attach: function (context, settings) {
      $('#unsplash_section', context).click(function() {
        console.log("Loaded Unsplash Behavior");
      });

    },
  }


})(jQuery, Drupal);

So now we have placed over the ID selector of our welcome message a click control event, which when clicked loads a message into the console:

Loading messages in Console by click event

With this small example above, we have seen how to add a small event-based (click) functionality. Let’s go on to do something more interesting.

Exercise 10: Image Board from Unsplash using Drupal.behavior

We will implement a functionality that operates by consuming an external API through Drupal Behavior.

We are going to practice with a slightly more advanced (and more beautiful) idea: we will connect to the public API for applications of an online image stock service from a new Drupal Behavior and from there we will make image requests that we’ll show then from a custom image board in our Drupal.

What do we need? Well for this recipe we will need the following ingredients:

Getting the Unsplash API key

  • A new JavaScript library within our custom module with its own .js file to store this Behavior:

    Creating the new JavaScript library

  • A new route set declared in the routing file, a new controller class and a method that generates a render array as response:

    More resources for the new unsplash functionality

To facilitate the following integrations, we are going to add to the render array a couple of properties (#prefix, #suffix) to wrap it in a new <div> with its own id = "unsplash" (see the image above).

Now with these ingredients, we’ll start. First we create the skeleton of our Behavior and define what we only want to be loaded once (and not reloaded with AJAX):

(function ($, Drupal) {
  'use strict';

  Drupal.behaviors.getting_unsplash_items = {
    attach: function(context, settings) {
      $(context).find("#unsplash").once('unsplashtest').each(function() {

        // All our new functionality.
      });
    }
  };
})(jQuery, Drupal);

Remember: the term we provide to jQuery.once() is totally arbitrary and non-repeatable, used only to trace internally that the action has already happened.
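The idea behind jQuery.once() can be simulated with plain JavaScript: keep a record per key of the elements already processed and skip them on later passes. This is a conceptual sketch of the semantics, not the plugin's real code:

```javascript
// Conceptual sketch of jQuery.once() semantics: each (element, key) pair
// is processed at most one time, no matter how often the behavior re-runs.
const processed = new Set();
const work = [];

function onceEach(elements, key, callback) {
  elements.forEach(function (el) {
    const mark = key + ':' + el;
    if (!processed.has(mark)) {
      processed.add(mark);
      callback(el);
    }
  });
}

// First pass (full DOM load): the element is processed.
onceEach(['#unsplash'], 'unsplashtest', (el) => work.push(el));
// Second pass (AJAX partial load): same element + same key -> skipped.
onceEach(['#unsplash'], 'unsplashtest', (el) => work.push(el));

console.log(work);
```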

First part: We create a welcome message and two buttons: one to start an image search process and another one to clean the image board generated from the search and the results.

// Adding the buttons through jQuery.
$("#unsplash", context).append("<button type='button' id='load_button'>Load Images</button>");
$("#unsplash", context).append("<button type='button' id='clean_button'>Clean Board</button>");

// Adding an event listener to the new buttons using jQuery.
$('#load_button').click(function() {

  // In case of click we will clean the former message.
  $("#message", context).remove();

  // In case of click we will call to the prompt window.
  processingKeywords();
});

// Adding a second event listener to the clean button. 
$('#clean_button').click(function() {

  // In case of click we will clean the written former message.
  $("#message", context).remove();  
  
  // And we will remove the entire image board too.
  $("#image-board").remove();
});

As we can see in one of the previous calls, the image search process triggered by the introduction of a keyword is delegated to functions, starting with processingKeywords(), where we launch a prompt to capture the keyword and make sure that empty terms are not accepted:

function processingKeywords(){
  let message = '';
  let option = prompt("Please write a keyword for search: ", "boat");

  if(option == null || option == ""){

    // Null option without response.
    message = "Sorry but the keyword is empty.";

    // Render in screen the message from the prompt.
    $("#unsplash", context).append("<p id='message'>" + message + "</p>");
  }else {

    // Valid answer launches request.
    message = "Ok, we're searching..." + option;

    // Render in screen the message from the prompt.
    $("#unsplash", context).append("<p id='message'>" + message + "</p>");

    // Launching external request with some delay with arrow format.
    setTimeout(() => {
      gettingImages(option);
    }, 4000);
  }
}

And we call the function responsible for managing the requests, gettingImages(), with the keyword as a parameter. We will use async / await to avoid problems with uninitialized variables in case the service responds slowly. We also give a little delay to the call of the next function.

async function gettingImages(keyword){

  // Loading basic values Access Key and End Point.
  const unsplash_key = 'YOUR APP KEY';
  const end_point = 'https://api.unsplash.com/search/photos';

  // Building the petition.
  let response = await fetch(end_point + '?query=' + keyword + '&client_id=' + unsplash_key);

  // Processing the results.
  let json_response = await response.json();

  // Getting an array with URLs to images.
  let images_list = await json_response.results;

  // Calling the createImages method.
  creatingImages(images_list);
}

At last we’ll invoke the function that will take the image address list and we will build the corresponding HTML tags:

function creatingImages(images_list) {

 // If a former image board exists we will delete it.
  $("#image-board", context).remove();

 // Creating a new image board as frame for the images.
 $("#unsplash", context).append("");

  // We will add some CSS classes for styling the image board.
  $("#image-board").addClass("images-frame");

  // Now we will set the received images inside the new board.
  for(let i = 0; i < images_list.length; i++){
    const image = document.createElement('img');
    image.src = images_list[i].urls.thumb;
    document.getElementById('image-board').appendChild(image);
  }

  // When finished we will put a border for the image board.
  $(".images-frame").css({'background-color': '#babb8f', 'border': '5px solid #1E1E1E'});
}

Note: If you are looking for information about the use of jQuery.once(), remember that its usage changed from Drupal 7 to Drupal 8 and 9 regarding the passing of functions as a parameter:

// Example of use in Drupal 7 
$(context).find(".element").once("random-key", function () {});

// Example of use in Drupal 8 || 9
$(context).find(".element").once("random-key").each(function () {});

Read more about jQuery.once():

And so, if we go, in our test Drupal, to the path:

http://drupal.localhost/unsplash/service

We will already have available the new image board obtained from the Unsplash API and built from a Drupal Behavior:

Loading images from Unsplash

Here you have available the complete code of the Behavior that we have just implemented:

7- JavaScript without JavaScript: #ajax, #states

It was necessary, at least, to review these knowledge areas where JavaScript is handled and executed indirectly: it is there, but it is not seen. The subject is so extensive that it would require more articles on the topic, so I will limit myself to reviewing some keys and leave the “to be continued…” for later (otherwise this article would never see the light).

7.1- (Brief) Introduction to AJAX in Drupal

The Ajax API in Drupal contains such an extensive set of classes, events, resources and possibilities that you could write several articles of this size just about using Ajax. Due to the limitations regarding the extension of this tutorial, we will focus on some basic keys, leaving for later the possibility of preparing an article on more advanced issues.

Here you can check out the AJAX API in Drupal.

We can use, at a basic level, Ajax for three well known formulas:

  • In links: using the class ‘use-ajax’ in a link, we can give it Ajax treatment.

  • In form elements: We can add Ajax events to our form elements by using the #ajax property within a render array definition.

  • In form buttons: adding the class ‘use-ajax-submit’ in the element declaration, we will make a call with Ajax.

Let's see one of its main uses in form elements. This is an example of AJAX actions performed when the selected option changes in a drop-down list, specifically one that allows selecting a region, so we are using AJAX as a trigger. In this case we're adding the #ajax property to an element so that, on change, we can load some related values and, after that, run a callback function:

// Offers a select for Regions.
$form['main_region'] = [
  '#type' => 'select',
  '#title' => $this->t('Select Region'),
  '#description' => $this->t('This will be your selected region for contact.'),
  '#options' => $terms_options_2,
  '#weight' => '8',
  '#prefix' => '<div id="contact_form_office">',
  '#suffix' => '</div>',
  '#ajax' => [
    'event' => 'change',
    'method' => 'html',
    'callback' => '::loadRelatedOfficesCallback',
    'wrapper' => 'contact_form_office',
    'progress' => [
      'type' => 'throbber',
      'message' => $this->t('loading related offices'),
    ],
  ],
];

In this case I'm building a form using the Drupal Form API and I need some operations over a select element. For that, I'm adding a very specific block focused on AJAX:

'#ajax' => [
  'event' => 'change',
  'method' => 'html',
  'callback' => '::loadRelatedOfficesCallback',
  'wrapper' => 'contact_form_office',
  'progress' => [
    'type' => 'throbber',
    'message' => $this->t('loading related offices'),
  ],
],

Here I’m specifying a event (change), a method for the event (html), a callback, marking a wrapper (the div for the element that will be changed from this one) and at last some indicators for the AJAX processing: an icon of “loading” and a message for the user.

What is happening in the callback? Well, first we ask for the triggered element by using $form_state->getTriggeringElement(), so we can get the item. Another important step is getting the CSS selector marked in the triggered element, by using $triggeringElement["#ajax"]["wrapper"].

/**
 * Callback function ready to get offices renewing the HTML component.
 */
public function loadRelatedOfficesCallback(array &$element, FormStateInterface $form_state){

  // Gets the initial values from the triggered element.
  $triggeringElement = $form_state->getTriggeringElement();

  // Gets more info or computed values.
  [...]

  // Gets the css selector to modify when callback ends.
  $wrapper_id = $triggeringElement["#ajax"]["wrapper"];

  // Executes changes and alterations.
  [...]

  // Creates a new AjaxResponse and adds a jQuery command to replace the item.
  $response = new AjaxResponse();
  $response->addCommand(new ReplaceCommand('#' . $wrapper_id, $changed_value));

  return $response;
}

From the former callback, only two lines are interesting: the creation of a new AjaxResponse, using the related class: api.drupal.org/class/AjaxResponse and the load of a new command for AJAX, using the action commands defined in the AJAX API of Drupal: drupal.org/ajax-api/core-ajax-callback-commands.

These AJAX commands will add the required jQuery internally and will prepare the action without us having to add the necessary JavaScript code directly.
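On the wire, an AjaxResponse serializes its commands as a JSON array that the client-side ajax.js dispatches one by one. As a hedged illustration (the exact keys shown come from ReplaceCommand's render() method in core, worth double-checking against your Drupal version), a toy dispatcher could look like this:

```javascript
// What a ReplaceCommand roughly serializes to (check your core version).
const response = [
  {
    command: 'insert',
    method: 'replaceWith',
    selector: '#contact_form_office',
    data: '<select id="contact_form_office">...</select>'
  }
];

// Toy dispatcher mimicking how ajax.js maps 'command' to a handler.
const applied = [];
const commands = {
  insert: function (cmd) {
    applied.push(cmd.method + ' on ' + cmd.selector);
  }
};

response.forEach(function (cmd) {
  if (commands[cmd.command]) {
    commands[cmd.command](cmd);
  }
});
console.log(applied);
```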

7.2- Rendering elements with #states property

The #states property is available for use within Drupal’s render arrays and assigned to a form element, it allows you to add certain conditions to the behavior of that element, enabling changes dynamically.

Actually, the #states property ends up being managed by the JavaScript library drupal.states, available for loading as the dependency core/drupal.states, which points to /core/misc/states.js inside Drupal. It is not necessary to load it explicitly, though: the rendering system that manages the render arrays checks for the existence of the property and, if it is present, attaches the JavaScript library directly.

The use of this property allows the creation of elements within a form that can alter their status -show, hide, disable, enable, etc.- based on conditions both of the element itself and of another element different from the form (that when one is clicked another is hidden, for example) and using jQuery syntax when declaring the selectors.

The mechanics are that we declare actions on our side and Drupal, on its side, provides all the JavaScript/jQuery needed to make those declared actions happen on the fly. Everything starts with the use of #states as a property when declaring the element of the form, and from there Drupal is in charge of adding the necessary JavaScript to change elements through the drupal_process_states function, which is deprecated from Drupal 8.8 onwards and becomes part of the FormHelper class (although it maintains the same functionality).

// States that you can apply with remote conditions (origin):
empty, filled, checked, unchecked, expanded, collapsed, value.

// States that you can apply over an element (target):
enabled, disabled, required, optional, visible, invisible, checked, unchecked, expanded, collapsed.

The basic structure of a state is that of a multidimensional array with the following form:

[
  STATE1 => CONDITIONS_ARRAY1,
  STATE2 => CONDITIONS_ARRAY2,
  ...
]

Where an array of conditions, in turn, is another array that stores the conditions foreseen for the change of state of that element, through the scheme of use of conditions in #states:

'#states' => [
  'STATE' => [
    JQUERY_SELECTOR => REMOTE_CONDITIONS,
    JQUERY_SELECTOR => REMOTE_CONDITIONS,
    JQUERY_SELECTOR => REMOTE_CONDITIONS,
    JQUERY_SELECTOR => REMOTE_CONDITIONS,    
    ...
  ],
],

In the next code block we will see an example of using #states. In the context of a form created with the Drupal Form API, we make a textfield called “Name” react to the state change of a previous checkbox option. If the checkbox is checked, then we make our field invisible:

$form['name'] = [
  '#type' => 'textfield',
  '#title' => t('Name:'),
  '#weight' => 1,
  '#states' => [
    'invisible' => [
      ':input[name="newcheck"]' => ['checked' => TRUE],
    ],
  ],
];

8- Troubleshooting: Problems and solutions

In this section we are going to compile some frequent errors related to the use of JavaScript in its different modalities (vanilla, Behaviors, AJAX) and their solutions.

8.1- Slow execution due to wrong use of ‘context’

In our behaviors we must always pass on the execution context. It is very important in terms of performance, since it narrows down the search for HTML selectors.

This can be seen with another simple example that shows the importance of handling the “context” variable: as we saw in previous sections, this value always stores the object (or the part of it) that has just changed: at the beginning, on the first load, the complete DOM; then, in successive AJAX calls, each modified piece of HTML. Not controlling this can make each execution of a behavior search for a selector through the whole document instead of its concrete zone, which can slow down the website.

Thus, a defined behavior such as this:

(function ($) {
  'use strict';

  Drupal.behaviors.usingcontext = {
    attach: function(context, settings) {
      $("#unsplash").append('<p>Hello world</p>');
    }
  };
}(jQuery));

This code will generate the next response:

Loading JavaScript without context value

Three executions (one for each load: 1 total DOM + 2 partial AJAX). In fact, it will have a similar behavior to this one (since it will go looking for the selector throughout the document):

(function ($) {
  'use strict';

  Drupal.behaviors.usingcontext = {
    attach: function (context, settings) {
      $(document).find("#unsplash").append('<p>Hello world</p>');
    }
  };
}(jQuery));

However, if we facilitate jQuery’s work in the best possible way, we will achieve a more efficient behavior:

(function ($) {
  'use strict';

  Drupal.behaviors.usingcontext = {
    attach: function (context, settings) {
      $(context).find('#unsplash').append('<p>Hello world</p>');
    }
  };
}(jQuery));

This version only runs the .append() once, because:

  1. On the first load, context = the full DOM, so the selector is located.
  2. On the subsequent AJAX loads, context = only the returned piece of HTML, so the selector is not located again.

And among our options we also have jQuery.once(), as we saw in previous sections, which works in a similar way through an arbitrary identifier that we supply so that it can track internally which elements have already been processed:

(function ($) {
  'use strict';

  Drupal.behaviors.usingcontext = {
    attach: function (context, settings) {
      $('#unsplash').once('cacafuti').append('<p>Hello world</p>');
    }
  };
}(jQuery));

If we also combine the use of jQuery.once() with our own segmentation through the “context” variable, then we obtain a more optimized execution:

(function ($) {
  'use strict';

  Drupal.behaviors.usingcontext = {
    attach: function (context, settings) {
      $(context).once().find('#unsplash').append('<p>Hello</p>');
    }
  };
}(jQuery));

Or we can use:

$('#unsplash', context).append('<p>Hello world</p>');

I think the important thing is that we learn to manage the context variable to ease the JavaScript workload ;-). Here you have a set of rendering tests about Drupal Behaviors so you can see how it works on screen.

8.2- Loading JavaScript out of context

Another case that we have seen with some frequency when inheriting a legacy project (or a new project that did not respect the proper guidelines) is that of JavaScript libraries intended for one specific page being loaded throughout the entire website (this happens more often than we think).

Someone went through the project, received the task, googled it, solved it as well as they could, and then the next person arrived… So, when you open the browser console, everything is a sea of warnings and red errors alerting you to JavaScript loads that cannot be performed, dependencies that cannot be resolved, or selectors that do not locate the elements they should. It’s time to locate the imports of our resources: which custom JavaScript libraries the project uses, where they are being registered, and how they are being added.

This is where your ability with your IDE’s search tools and the browser console comes into play, locating behaviors and checking:

  1. Which ones are being created.
  2. Which ones are being executed at that moment.

Looking for JavaScript Resources in a Drupal Project

You will discover libraries that have been added to the theme globally when they should really be attached via #attached to one specific page only, for example.

8.3- Error: illegal choice in dynamic select

This is a typical error in custom forms created with the Drupal Form API when using AJAX, very common in scenarios where we want to create dynamic selects: we have an initial select and, based on the choice made in it, we modify the options of a second select through a callback.

It’s a classic and very specific error when these fields are marked as required in your form. The form validation function (even if you are overriding your own) re-checks the state of the form values and detects inconsistencies. Just when we think everything is ok, we load the page, start testing, and receive the following message in the browser:

Illegal choice error in Drupal when using AJAX

Ok, what’s going on? Basically, and put very briefly: Drupal protects your installation by preventing a form element from being completely replaced by a new one, or added to the form definition outside the main buildForm(array $form, FormStateInterface $form_state) function, in order to avoid attacks and injections. Because of this, you have to change the implementation. Let’s see.

Think about what I’m doing in this piece of code from a callback function:

 /**
  * Callback function ready to get offices renewing the HTML component.
  */
public function loadRelatedOfficesCallback(array &$element, FormStateInterface $form_state) {

  // Gets the initial values from the triggering element.
  $triggeringElement = $form_state->getTriggeringElement();

  // Gets the value from the triggering element.
  $value = $triggeringElement['#value'];

  // Gets taxonomy term children by their parent using tid.
  $offices = $this->getOfficesbyRegion($value);

  // Gets the CSS selector to modify when the callback ends.
  $wrapper_id = $triggeringElement["#ajax"]["wrapper"];

  // Builds a new updated version of the form component.
  $element = [
    '#type' => 'select',
    '#name' => 'main_office',
    '#title' => $this->t('Select Office'),
    '#description' => $this->t('This will be your selected office.'),
    '#options' => $offices,
    '#weight' => '3',
    '#prefix' => '<div id="' . $wrapper_id . '">',
    '#suffix' => '</div>',
  ];

  // Asks the renderer service to convert the component into pure HTML.
  $renderer = \Drupal::service('renderer');
  $renderedField = $renderer->render($element);

  // Creates a new AjaxResponse and adds a jQuery command to replace the item.
  $response = new AjaxResponse();
  $response->addCommand(new ReplaceCommand('#' . $wrapper_id, $renderedField));

  return $response;
}

Ok, but Drupal doesn’t like the former block. It’s considered illegal because we are 1) creating a brand-new element, 2) asking the render service to transform the element into HTML, and 3) loading the new element into an existing wrapper using AJAX commands. This is problematic, and it’s an approach that we should avoid.

What can we do? We can think of two options, one more secure than the other. Let’s see.

  1. Mark the element to be replaced as validated using the property '#validated' => TRUE, preventing Drupal from reviewing it and letting your change pass. Less secure.

  2. Change the focus: do not replace the entire element’s HTML; instead, dynamically modify the $options array through the callback. More secure and recommended.

See this related proposal: Suppress validation of required fields on AJAX calls in Drupal 9.x

9- Links and reading resources

In this section you will find links to guides, relevant information and related reading resources.

9.1- JavaScript fundamentals

9.2- Functions in JavaScript and the IIFE format

9.3- JavaScript and Drupal

9.4- jQuery

9.5- Snippets

9.6- Others

10- :wq!

If you have managed to reach the end of this guide linearly, congratulations! Thanks for your patience and I really hope it has been useful to you.

This guide has been published without direct profit, but my personal interest is that it spreads and helps others. If it has been useful to you, share it using the “share” options on this site or with a simple tweet. It will be important to me. Thank you.


Dec 31 2020

Donata Stroink-Skillrud, president of Termageddon LLC, licensed attorney, certified information privacy professional, and vice-chair of the American Bar Association ePrivacy Committee joins Mike Anello to talk about privacy (and other web site) policies (it's much more interesting than you think!) and why it is so important for modern web sites to have one.

URLs mentioned

DrupalEasy News

Audio transcript

We're using the machine-driven Amazon Transcribe service to provide an audio transcript of this episode.

Subscribe

Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher and YouTube.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Note: the Termageddon links above are affiliate referral links.

Dec 31 2020

Wouldn’t you say that websites are meant for everyone? If a children’s clothing site is not able to serve the needs of all the parents in its target audience, what would be the point, right?

We know every person is different, from the way they see things to the way they analyse them. Our differences do not make us less than the next person; web accessibility has ensured that, at least in the web domain. There are people with certain physical conditions that limit some of their abilities; they are simply differently-abled from the rest of us. 

  • Colour blindness can affect the visual perception of a website.
  • Wheelchair user-concerns can affect mobility. 
  • Hearing problems can affect the auditory elements. 
  • Photosensitive epilepsy can induce seizures in a person through certain elements on the web. 
  • Dyslexia can affect the cognitive awareness of a user. 
  • Sleep deprivation, an incidental issue, can affect your accessibility as well.

All of these and more can impair the user experience for people who are suffering from them. That is where web accessibility comes into the equation. This blog will talk about the ABCs of implementing the best practices when designing for web accessibility. 

What is Web Accessibility?

Where websites are concerned, accessibility becomes an important consideration that can actually become their breaking point if not done right. Before I start on the long tirade about how to make the web accessible, it is equally important to understand what it actually is. 

Web accessibility can plainly be understood by its purpose, which is to build websites and their numerous tools and technologies in such a way that people with disabilities can easily use them. It is actually as simple as that: building websites that cater to differently-abled people. 

A more thorough definition would point towards making websites that: 

  • Can be perceived by people with disabilities;
  • Can be understood by them; 
  • Can be easily navigated by them; 
  • Can be interacted with; 
  • And they can also contribute to the web through them. 

This concept, or rather this principle, takes into account all the disabilities that have an effect on the web user experience, be it auditory, cognitive, visual, speech, neurological or physical. 

You might be thinking that that is all web accessibility is responsible for, to make the web more accessible for people with disabilities, but there is more. 

  • From different input modes to bright sunlight affecting the UX;
  • From transient disabilities like fractured hand to ageing hampering your abilities; 
  • From a slow internet connection to an expensive one; 
  • From people in rural areas to the people in developing and under-developed nations;

Web accessibility is meant for all; it takes into account every aspect that can impede a person’s web experience, eliminates those barriers, and makes the web a place that is all-inclusive to the core.

It is a concept meant to highlight accessibility, which is a given because of its name, but at the same time it also works towards usability and inclusion. All three are closely related; perhaps that is why they are considered the fundamentals of an accessible web. You know what accessibility means by now; with usability, the purpose is to build designs that can be used by everyone, while inclusion focuses on diversity, aiming for the participation of everyone in the experiences the web can offer. 

Do you not think that web accessibility is crucial in the way we design our websites? I am certain you do.

What is the standard for Web Accessibility?

In 2008, new web accessibility guidelines were published and, even in 2020, they still prevail. These guidelines set the accessibility standards that all web experience providers must adhere to. These are the WCAG 2.0 guidelines, and they can be summed up in four principles. 

Make websites perceivable 

It is after perception that you actually start becoming involved in something. That is why the first principle of the WCAG 2.0 guidelines is making sites perceivable. How do you perceive something? Using your senses, right? Sight, sound and touch being the players here. So, the elements on your site need to be focused on these senses to be truly accessible. To exemplify, it would help a blind person to hear the description of a video to perceive what is going on in it. 

Make websites operable 

Making a site operable means ensuring that all users can use it with ease, and by all, you know I mean including people with disabilities. Operating a website has to do with navigating it and interacting with its various components. As per this principle, your site should work well with keyboard-only navigation, without time constraints, and help the user out if he were to make some errors. 

Make websites understandable 

Next is the understanding of a website; your users should not have to spend a lot of time understanding simple instructions. Therefore, the third principle focuses on using clear terms that make even the most complex issues easy to understand. 

Make websites robust

This last principle is a bit on the technical side. Using clean HTML and CSS code that meets the overall standards makes it very easy for other technology, third-party included, to depend on your site. It makes your site more robust and thus easy to process.

How to design for web accessibility?

Now comes the important part: knowing the semantics of web accessibility won’t do you any good if you do not know how to implement them in web design. It is not going to be a drastic change in the design palette of your existing website; rather, some minor yet thoughtful changes can go a long way towards achieving the accessibility standards. So here they are. 

Aptness in contrast 

I’m going to start with contrast because I feel it is one of the major problems seen in accessibility, despite being a basic one. The text and the background need to be in contrast to make the text pop, whether in images, buttons or plain CTAs. For this reason, WCAG has set certain parameters for contrast ratios that need to be followed to achieve the basic requirements of accessibility. There are three types of text generally seen on websites, and each needs to maintain a separate ratio. 

For body text, the ratio is 4.5:1;
For large text, it needs to be 3:1;
And pure black on white has a ratio of 21:1, the maximum possible.

You may have noticed that the larger text has a smaller contrast ratio; the reason is simple, larger text is easier to recognise. A size of 24 pixels, or 19 pixels in bold, would really be hard to miss. 

The homepage of NBC is shown.


Aptness in colour 

Did you know that one in every twelve men is colour blind? It may seem like a lot, but it is true. The inclusion of people suffering from colour blindness, low vision or total blindness thus becomes very important, because they account for a massive proportion of web users. 

So, just using colour to highlight a component is a colossal mistake in terms of accessibility. You have to use other ways of highlighting the same component. 

Different coloured shapes can be seen in two boxes; one has numbers written alongside the shapes and the other doesn't.

While the first image is a perfect example of what not to do, the second one can be considered an exemplar of using colour while keeping accessibility in the picture. 

Aptness in forms 

Today, you would hardly find a website that does not include a form to fill in; after all, that is their way of connecting with the audience. So, making forms accessible to all audiences is key, and sufficient labelling is the way to do it. 

Every field in your form needs a corresponding label that isn’t too far away from it. Adjacent labels are much better than placeholder labels inside the field that disappear once it has been filled with content. 

You may know that forms are often filled in the wrong way, so letting the user know he has made an error is also crucial. This needs to be done using colours along with an instruction or some sort of sign, because someone with colour blindness might not be able to see the red highlighting an erroneous field. 

An incorrectly filled online form can be seen.


Aptness in focus elements 

There are certain elements in a web design that require more focus than others, these are essentially the interactive elements of a site, such as the BUY NOW button. You would have read these two words before even reading the sentence because they are in capitals, meaning they are focused. 

You have to follow the same ideology in web design. You should have a different highlight for when a button is hovered over, when it is reached by the keyboard, and when it is touch- or click-ready. 

A webpage can be seen with one highlighted section to showcase the importance of keyboard focus in web accessibility.


If your focus elements are not highlighted properly for, say, keyboard navigation, can the keyboard user actually get the full experience of your site? I think not. 

Aptness in media

Media is another integral part of your website, some might even say it gives life to an otherwise dull webpage and they are not entirely wrong. To keep the liveliness alive, you have to make the media truly accessible. 

Starting with images, it is imperative that you provide alternative text for them. Adding a caption and description is equally pivotal for audiences using a screen reader. Almost all CMSs offer a fairly prominent alt-text option when you upload an image; ensure that you use it. 

The addition of alternative text in pictures is shown during the publishing process of a blog.An example taken from one of our blogs.

What alt text is for images, transcripts are for the audio elements of a website. For users with auditory difficulties, transcripts are the sole medium of inclusion. With videos, it becomes necessary to include an audio description of what is happening in them. Considering explaining complicated graphs and tables is another step closer towards accessibility. 

Lastly, I want to mention autoplaying audio and video. Nobody wants to hear or see it, and it only makes the user rush to find the button to stop it. So, as a general rule of thumb, add audio and video in such a way that the user plays them willingly. 

Aptness in navigation

A user won’t just access a single page on your site; if he does, you won’t ever achieve your targets. To go through all the pages and potential of a website, a user needs to navigate through it, and you have to provide an obvious way to do that. You can do this in multiple ways through orientation cues; a site map is one of the most common. 

All of this depends on the layout and the structure of your website, so ensuring that it is meaningful and logical will go a long way in achieving accessibility. A well-structured layout should have a couple of things in order: 

  • It should be flexible and resizable;
  • It should support a minimum width of 320 pixels; 
  • And it should be able to zoom up to 400% while keeping related content in close proximity. 

All three help people with disabilities navigate easily. Imagine you are using a magnifying glass to read through all the content and options on a website, would you not prefer that the things you are looking at are close to each other? You don’t have to tell me, I already know your answer. 

Then come the keyboard users; you have to be mindful of them as well. Without getting into the reasons people use a keyboard instead of a mouse, I will tell you that for these people the tab key is the most crucial. You have to keep the tab order in mind when creating your design, since you do not want your user to get lost while navigating. 

Aptness in content 

The content on your website is its heart and soul; it is what lets the user know of your intentions and your work. Therefore, being extra diligent with the content and its placement is prudent to avoid clutter. 

Let's start with readability: you cannot write your content like it is The Da Vinci Code. Yes, some people might still understand it, but that is not the aim, right? Use simple language with short sentences, short paragraphs and common vocabulary, because if the audience does not understand it, well then, it’s a moot point. 

Then there is the spacing issue; you need adequate spacing between paragraphs, lines and individual letters as well. 

It is recommended that the paragraph spacing should be twice the font size;
the line height should be 1.5 times the font size; 
And the letter and word spacing should be 0.12 and 0.16 times the font size, respectively. 

Similar content is shown written in two different styles to highlight how important spacing can be in web accessibility. The difference between the two is pretty stark. 

After doing all that, also try using a font that is easy to read on multiple devices; Arial and other sans-serif fonts are options that are just that. 

Aptness in controls 

Then comes the bit about controls, and what are we controlling? That would be ‘the interactive elements.’ Every website has them, be it in the form of buttons or links. Accessibility is achieved when these controls are designed so that every person, regardless of their disability, can use them. 

For instance, a person suffering from tremors might not be able to interact with a really small icon. He would find it very difficult to select or deselect a checkbox; the same is also true for the elderly, who suffer from reduced dexterity due to age. 

Therefore, the size of a control needs to be 44 by 44 pixels, as per the WCAG recommendation. However, even a size of 34 by 34 pixels could be deemed acceptable; anything below that would be detrimental. 

Aptness in feedback 

As a last point, I want to mention your part in the user engagement process. When a user does something, whether it is right or wrong, your website should give him an indication of the same. 

For instance, if he has filled in a form, there has to be a confirmation message; or if there has been a modification to the page he was using, there has to be a notification alerting him of it. This is your feedback to the user, and it has to be displayed prominently for the user to really get it. 

Aptness in effects 

Photosensitivity is a disorder that can cause headaches, nausea and make the sufferer dizzy. Photosensitive epilepsy is a disorder that can induce seizures in the patient from just fast flashing lights. Since many websites use effects and hyperlinks that blink rapidly, these people would suffer a great deal if they go on and use the said sites. 

Therefore, accessibility guidelines mandate that you use animations and effects that do not make people suffering from epilepsy and photosensitivity suffer more. This could be achieved by trying to use: 

  • Animations that are small in size in comparison with the screen;
  • Animations that go in the direction of the scroll and follow its speed;
  • And animations that are not constantly moving or blinking. 

To be on the safer side, try to provide an option to pause or hide them, if the user wants to. Some sites even give the option of slowing them down, now that is a thought. 

Access this ultimate guide to plan for web accessibility. And, for a complete guide on web design, read this.

Conclusion

I want to conclude by saying that designing for web accessibility may mandate that you follow a certain set of principles, but that does not mean that it is inhibiting your innovative streak. Think of it as a chance to transcend your abilities and make your web experiences all the more valuable. 

For that to happen, you would have to constantly evaluate your site, especially in the design and development phase. An accessibility audit would help a great deal in that. It would help you in knowing the areas that need improvement and guide you through the accessibility transformation. 

Dec 30 2020

Oftentimes you may have a Github account where you upload all your code for practice or learning purposes, and another Github account pertaining to your company profile. When you push your changes, be it to your own Github account or a project-specific one, how do you make sure that the respective account is always used? Well, I am here to help!

 

Set up SSH Keys

Let’s assume your two Github accounts are named githubPersonal and githubWork, respectively.

Create two SSH keys, saving each to a separate file:

mgh 1 img

Save it as id_rsa_personal when prompted.

mgh 2 img

Save it as id_rsa_work when prompted.
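The screenshots above presumably show ssh-keygen being run twice; a minimal sketch, assuming RSA keys and illustrative email comments (use your own addresses, and consider a real passphrase):

```shell
# Make sure the key directory exists.
mkdir -p "$HOME/.ssh"

# Generate one key pair per account; the -C comment emails are examples only.
# -N "" skips the passphrase here just to keep the sketch non-interactive.
ssh-keygen -t rsa -N "" -C "personal@example.com" -f "$HOME/.ssh/id_rsa_personal"
ssh-keygen -t rsa -N "" -C "work@example.com" -f "$HOME/.ssh/id_rsa_work"
```

Passing -f saves each key to its own file instead of the default id_rsa, which is what keeps the two accounts separate.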

The above commands set up the following files:

- id_rsa_personal

- id_rsa_personal.pub

- id_rsa_work

- id_rsa_work.pub

Add the keys to your Github accounts

Copy the key to your clipboard:

mgh 3 img

In case you do not have pbcopy installed. To install and use pbcopy:

mgh 4 img

Edit your bash file

mgh 5 img

Create alias

mgh 6 img

Refresh your bash

mgh 7 img

Add the key to your account:

  • Go to your Account Settings.
  • Click “SSH Keys” then “Add SSH key”.
  • Paste your key into the “Key” field and add a relevant title.
  • Click “Add key” then enter your Github password to confirm.

Repeat the process for your githubWork account.

Create a configuration file to manage the separate keys

Create a config file in ~/.ssh/

mgh 8 img

Edit the file using the text editor of your choice, here I am using nano, which is readily available in Linux.

mgh 9 img

Paste the following in the config file:

mgh 10 img
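The configuration shown in the screenshot is most likely two Host stanzas along these lines (the alias names github.com-personal and github.com-work are illustrative):

```
# ~/.ssh/config

# Personal GitHub account
Host github.com-personal
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_personal

# Work GitHub account
Host github.com-work
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_work
```

With this in place, cloning through the matching alias (e.g. git clone git@github.com-work:org/repo.git) makes SSH pick the right key automatically.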

Update stored identities

Clear currently stored identities:

mgh 11 img

It will show that all identities were removed.

Add new keys:

mgh 12 img

Test to make sure new keys are stored:

mgh 13 img

Test to make sure Github recognizes the keys:

mgh 14 img

You should see something like this:

mgh 15 img

mgh 16 img

You should see something like this:

mgh 17 img

One active SSH key in the ssh-agent at a time

To keep things safe, we need to manually ensure that the ssh-agent has only the relevant key attached at the time of any Git operation, so that the right key is active at the time of a push.

ssh-add -l will list all the SSH keys attached to the ssh-agent.

Clear currently stored identities:

ssh-add -D

Add the required ssh key
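The cycle above can be sketched as follows, using a throwaway key so the commands are self-contained (in real use you would ssh-add ~/.ssh/id_rsa_personal or id_rsa_work instead):

```shell
# Start an agent for this shell session.
eval "$(ssh-agent -s)" > /dev/null

# Create a disposable demo key (a stand-in for id_rsa_personal / id_rsa_work).
demo_key="$(mktemp -d)/id_rsa_demo"
ssh-keygen -q -t rsa -N "" -f "$demo_key"

ssh-add -D              # clear currently stored identities
ssh-add "$demo_key"     # attach only the key needed for the next push
ssh-add -l              # list the keys currently attached to the agent
```

After ssh-add -l, only the single key you just attached should be listed.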

Change config name and email for a project

GitHub identifies the author of any commit from the email id attached with the commit description.

First, check the name the project is using:

mgh 18 img

Check the email the project is using:

mgh 19 img

If you need to change then do the following:

mgh 20 img
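The commands behind those screenshots are presumably of this shape, demonstrated here in a throwaway repository (the name and email values are placeholders):

```shell
# Demo inside a disposable repository.
cd "$(mktemp -d)" && git init -q .

# Check which identity this project will use for commits (may be empty or global).
git config user.name || true
git config user.email || true

# Override the identity for this repository only (values are illustrative).
git config user.name "Personal Name"
git config user.email "personal@example.com"
```

Because plain git config (without --global) writes to .git/config, the override applies only to that project.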

Dec 30 2020

With teams growing in size, it’s more important than ever to define a paradigm for deploying code, establish consistent rules about the code, and keep contributors on the same page. This can be achieved with “Git Hooks”.

Git hooks are scripts which trigger specific automated actions based on an event performed in a git repository. The git hook name usually indicates the hook’s trigger (e.g. pre-commit, post-commit, etc.). These hooks can be useful for automating tasks in your git workflow. For instance, based on defined rules, they can help us validate code or run a specific set of tests before code is committed. 

Setting the Git Hooks

Git hooks, thankfully, are a built-in feature, which means we can access and modify them as long as we have a git repository initialized. For better understanding, let us set one up.

Create a directory and cd into it:

gh 1 img

Initialize the repository with git and check the contents:

gh 2 img

On doing so, you would notice a hidden directory has just been created. This .git directory is used by git and is responsible for storing all the information related to the repository we created, such as commit info, remote repository addresses, version control hashes, etc. It is also in this folder that the hooks actually reside. If we cd into it, we can see certain sample scripts that are already present in the repository.

gh 3 img

One thing to notice here is that these hooks have the extension “.sample”. This means they are not executable yet and not ready to be used as they are.

Let’s Cook inside the Hook!

Now that we know what a git hook is, let us take a look at how we can put one to use, through an example. While inside the .git/hooks directory, we can see that many hooks are present but not yet ready to be used. Let us use the pre-commit hook, but let us create one from scratch instead of using the one provided.

Our objective here is to validate the git config’s global user email and name. Once everything looks fine, it will show the user the message “Changes are ready to be committed” when a file is committed to git, and another message, “Changes have been committed”, once the commit completes.

Step 1

cd into the .git/hooks directory and create a new file called pre-commit.

gh 4

Step 2

The next step is to make the pre-commit file that we created, executable. Once inside the directory .git/hooks, make the file executable by typing in the command chmod +x pre-commit.

gh 5 img

Step 3

Now we need to create our script. In order for the script to run, we first need to specify our shell. Do this by putting #!/bin/bash at the beginning of your script for bash, or #!/bin/zsh if you use the zsh shell like I do. Specifying an incorrect shell will make the script fail, and a hook that exits with a non-zero code aborts the commit.

This pre-commit hook will watch for incorrect commit authors using the script below.

gh 6 img
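Since the script itself sits behind a screenshot, here is a hedged sketch of what such a pre-commit check could look like, demonstrated in a throwaway repository (the expected email is an assumption; adapt it to your own):

```shell
# Set up a disposable repo so the sketch is self-contained.
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.name "Alice" && git config user.email "personal@example.com"

# A minimal pre-commit hook that validates the commit author's email.
cat > .git/hooks/pre-commit <<'HOOK'
#!/bin/bash
expected="personal@example.com"   # assumed address for this sketch
actual="$(git config user.email)"
if [ "$actual" != "$expected" ]; then
  echo "Aborting commit: author email is '$actual', expected '$expected'." >&2
  exit 1
fi
echo "Changes are ready to be committed"
HOOK
chmod +x .git/hooks/pre-commit

# With the matching email configured, the commit goes through.
echo "hello" > file.txt && git add file.txt
git commit -q -m "first commit"
```

A post-commit hook follows the same recipe, printing “Changes have been committed” after the commit completes.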

We will repeat the same steps for our post-commit hook.

  1. Create a post-commit hook.
  2. Make it executable.
  3. Insert your code in the post-commit hook.

If everything works fine then you should be able to see something like this below, when the correct credentials exist.

gh 7 img

In the above example, we have written our scripts in zsh but we can do the same with any other scripting language as well like Python, Javascript, Node.js, etc. The only thing to keep in mind is that we use the correct format at the first line of our executable script.

Dec 30 2020

After teaching folks how to use Composer effectively over the past couple of years, I figured it was time for me to (finally) update DrupalEasy.com to use Composer 2. I figured it would be a pretty easy process, and I was correct.

I've had a few people ask me about the process, so I thought I'd write up what it took to update this site to Composer 2. First a few facts about this codebase:

  • Dependencies are committed to the project Git repository.
  • The site is hosted on Pantheon
  • I use DDEV for local development.

So, the first step, prior to updating to Composer 2, is to ensure all plugins are ready for Composer 2. In the case of the DrupalEasy.com codebase, there were three Composer plugins to review:

  1. composer/installers - compatible with Composer 2 starting with 1.9.0
  2. cweagans/composer-patches - compatible with Composer 2 starting with 1.7.0
  3. topfloor/composer-cleanup-vcs-dirs - the "dev-composer-2" branch is compatible with Composer 2

So, while still using Composer 1, I did the following (on my local, of course):

composer update composer/installers
composer update cweagans/composer-patches

Then I manually edited the composer.json file and changed the version constraint for topfloor/composer-cleanup-vcs-dirs to: "topfloor/composer-cleanup-vcs-dirs": "dev-composer-2" 

Then,

composer update topfloor/composer-cleanup-vcs-dirs

Once that was done, I updated my .ddev/config.yaml file to use:

composer_version: "2"

After restarting DDEV, I verified that all was well by updating a couple of Drupal modules (non-security related maintenance releases) before committing all changes and pushing up to Pantheon.

Obviously, the most important step is to ensure all of your Composer plugins are compatible with Composer 2 - just remember to do this prior to updating to Composer 2!
