Oct 03 2021

Personalization has become a common requirement for most websites. The content of a webpage often needs to be personalized based on multiple criteria, such as location, user preferences, personal user information, and cookies.

In this document, we will cover the type of personalization where a single page caters to multiple audiences or user types. Typical examples include pages based on a user's personal information or preferences where:

  • Lists are sorted differently and sort criteria are evaluated from the personalization criteria
  • Additional content blocks are shown
  • Content blocks are replaced
  • The visibility of links and menus is controlled

Whenever we need to implement personalization on a Drupal site, there is a general tendency to choose a JavaScript-based client-side solution. While there is nothing wrong with that, the caveat I see most of the time is the unavailability of contributed modules or frameworks that do most of the plumbing and let us configure or implement only the business logic, achieving personalization with minimum effort.

Specific tools like Acquia Lift cater to enterprise-grade personalization, but few open-source options exist.

The rationale behind using client-side/javascript implementation to achieve personalization most of the time is to move the actual changing of the document to the client-side, thereby making pages cacheable.

Handling personalization with server-side code and caching it effectively has often proved to be a painful experience. The most critical problem here is maintaining or storing multiple varied cached copies of the personalized page, then choosing and serving the right cached content based on the varying condition. Most modern websites use an external caching system, and some even combine it with CDN/ESI tools to get geographical performance advantages. Common examples of external caching systems include Varnish, Akamai, Fastly, and Cloudflare, and these add to our problem of personalizing pages on the server. This is where JavaScript/client-side solutions often come to the rescue. They solve the caching issues either by:

  • Loading all the content in one go and then showing/hiding personalized bits based on the relevant conditions.
  • Loading the common, non-personalized content with the page load from the server/cache and loading the personalized bits via AJAX calls.

We had a similar requirement in one of our current projects, and to add complexity to the existing problem, we were using Site Studio in that project. Initially, it looked a lot more complex to implement personalization on a Site Studio based site, be it with a client-side or a server-side solution. At the time of writing, Acquia Lift was also not compatible with Site Studio.

Luckily, we found an impressive integration between Site Studio and the Context module, which Site Studio provides out of the box. Site Studio elements can be made conditionally visible based on contexts. A single context can be composed of multiple, highly configurable conditions. Multiple contexts can be applied to a single Site Studio element to evaluate its visibility, requiring either all of them or only one of them to be true. Drupal core provides some conditions out of the box, like user role, language, etc. It also provides a plugin-based API that allows us to create custom conditions. This opened up many opportunities for us to personalize pages driven via Site Studio elements, components, and templates.

The Drupal condition plugin API allows us to define cache contexts and tags for the conditions.
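As an illustration, a custom condition plugin along these lines could drive such visibility rules. This sketch is hypothetical (the plugin ID, the "persona" cookie, and the configuration key are not from the project); the important part is getCacheContexts(), which tells Drupal's render cache to vary on the condition's input:

```php
<?php

namespace Drupal\my_personalization\Plugin\Condition;

use Drupal\Core\Cache\Cache;
use Drupal\Core\Condition\ConditionPluginBase;

/**
 * Hypothetical condition that checks a visitor "persona" cookie.
 *
 * @Condition(
 *   id = "persona",
 *   label = @Translation("Persona")
 * )
 */
class Persona extends ConditionPluginBase {

  public function evaluate() {
    $persona = \Drupal::request()->cookies->get('persona');
    return $persona === ($this->configuration['persona'] ?? NULL);
  }

  public function summary() {
    return $this->t('Visitor persona matches the configured value');
  }

  public function getCacheContexts() {
    // Vary the render cache by the cookie so that each persona gets
    // its own cached copy of anything this condition controls.
    return Cache::mergeContexts(parent::getCacheContexts(), ['cookies:persona']);
  }

}
```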

For every visibility context applied to a Site Studio element, the cache contexts from its composing conditions contribute to the cache contexts of the host (layout canvas) field, and effectively to the host entity's node page via cache context bubbling. This solved our problem of maintaining varied cache copies of the same page, based on varying conditions, within Drupal.

Now, the only problem we were left with was handling cache variations in our external caching system, Varnish. This was also quickly solved, since Varnish allows us to store and retrieve varied cached responses with the help of the Vary header. The Varnish VCL can be configured to identify and serve personalized page requests based on conditions that can come in the form of request headers, cookies, etc. Acquia Cloud's Varnish VCL is preconfigured to listen for and vary page caches based on certain named cookies and the usage of the Vary header in the Drupal response.
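As a sketch of how this can work with a cookie-based condition, a VCL fragment along these lines surfaces the personalization cookie as a request header (the "persona" cookie and X-Persona header names are hypothetical). Varnish then stores a separate cache copy per header value whenever Drupal responds with `Vary: X-Persona`:

```vcl
sub vcl_recv {
  # Copy the hypothetical "persona" cookie into a request header so the
  # backend can vary on it via "Vary: X-Persona" in its response.
  if (req.http.Cookie ~ "persona=") {
    set req.http.X-Persona = regsub(req.http.Cookie, ".*persona=([^;]+).*", "\1");
  } else {
    unset req.http.X-Persona;
  }
}
```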

The same principle holds for almost all the major external caching systems, but an additional effort of identifying the right means to do that and configuring them is required.

Conditions created with the Drupal condition API can also be used to control the visibility of blocks. Any condition added via a custom module is exposed as a configuration in the block visibility settings form, and the block responds to these settings. This extends the idea of personalization from nodes to blocks, covering the two most essential and foundational building blocks of a Drupal page.

The integration between the context and Site Studio module and how it helped us personalize Site Studio pages demonstrates the power of the context module.

It gives us enough details to think about its use in other major contributed modules and also possibly think about an alternative way of solving personalization problems in general (outside of Site Studio), which works end-to-end. Do try this with your projects and let us know your experience! 

For more on personalization, see how you can take advantage of the Decision API and the “Relevancy sorting” option of Solr search to implement personalized search results in Drupal with Acquia Lift.

Aug 11 2021

A small leak can sink a great ship. ~ Benjamin Franklin

In the previous blog post, we covered the basic setup and configuration for Mautic plugins that leverage the IntegrationBundle. The key part of any IntegrationBundle is handling the authentication mechanism.

So in this blog post, we will cover the various types of authentication and use one authentication type in the plugin that we built last time. We will continue developing the same plugin.

The IntegrationBundle from Mautic Core supports multiple authentication methods, like API-based authentication, Basic Auth, OAuth1a, OAuth2, OAuth2 Two-Legged, OAuth2 Three-Legged, etc. The IntegrationBundle exposes all these authentication protocols for use with the Guzzle HTTP client.

In this blog post, we will implement Basic Auth authentication with a third-party service.

The following steps enable our plugin to have the Basic Auth authentication:

  • Have a form with fields for storing the Basic Auth credentials (Form/Type/AuthType.php).
  • Prepare a “Credentials” class to be used by the Client class.
  • Prepare a “Client” service class to be used by a dedicated APIConsumer class.
  • Use Client service and implement API call-related methods in APIConsumer service class.

Step 1

The plugin depends on third-party APIs for the data it manipulates, and these APIs are gated behind authentication and authorization mechanisms. For this post, we have chosen Basic Auth as the authentication method.

Basic Auth needs a username and password to communicate with the API. So we need a form that accepts the username and password, which are required when connecting to the API endpoints.

Let's create a form and name it “ConfigAuthType.php” under the “MauticPlugin\HelloWorldBundle\Form\Type” namespace. This class extends Symfony's AbstractType class. We need to implement the "buildForm()" method to add the required fields. Example code should look like this:
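A minimal sketch of such a form class might look like this (field names are illustrative; the linked full version has the actual code):

```php
<?php

namespace MauticPlugin\HelloWorldBundle\Form\Type;

use Symfony\Component\Form\AbstractType;
use Symfony\Component\Form\Extension\Core\Type\PasswordType;
use Symfony\Component\Form\Extension\Core\Type\TextType;
use Symfony\Component\Form\FormBuilderInterface;

class ConfigAuthType extends AbstractType
{
    public function buildForm(FormBuilderInterface $builder, array $options): void
    {
        // Fields holding the Basic Auth credentials.
        $builder->add('username', TextType::class, [
            'label'    => 'Username',
            'required' => true,
        ]);
        $builder->add('password', PasswordType::class, [
            'label'    => 'Password',
            'required' => true,
        ]);
    }
}
```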

You can see the full version here.

It's now time to tell Mautic to pick up this form during configuration. To do so, we have to define an Integration service class implementing ConfigFormInterface and ConfigFormAuthInterface. ConfigFormAuthInterface is the interface that lets you specify the configuration form using the getAuthConfigFormName method.

So we name this class "ConfigSupport" and place it under the "MauticPlugin\HelloWorldBundle\Integration\Support" namespace. Here are some snippets from the ConfigSupport class:

You can find the complete ConfigSupport class here.

Time to let the IntegrationBundle know about our "ConfigSupport" class. To do so, add a service as an integration, or create a service listing with the mautic.config_integration tag. The following is a code snippet from Config.php (the plugin configuration file).
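The registration might look roughly like this (service and class names follow the post's HelloWorld example and are assumptions; the real Config.php returns this array):

```php
<?php
// Excerpt from the plugin's Config.php: register the integration
// support class under the "integrations" services section with the
// mautic.config_integration tag.
$config = [
    'services' => [
        'integrations' => [
            'mautic.integration.helloworld' => [
                'class' => \MauticPlugin\HelloWorldBundle\Integration\HelloWorldIntegration::class,
                'tags'  => [
                    'mautic.integration',
                    'mautic.config_integration',
                ],
            ],
        ],
    ],
];
```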

Now, at this point, we have the following things ready:

  • A service class to register the configuration support class.
  • A class to provide the configuration.
  • You can view all the code changes for step 1 in this commit.

Step 2

For Basic Auth, the IntegrationBundle uses the “HttpFactory” class to build the HTTP client. This class needs an object called “Credentials,” consisting of all the required keys for authentication.

If you look at the “getClient()” method of the HttpFactory class under the “Mautic\IntegrationsBundle\Auth\Provider\BasicAuth” namespace, it needs an object implementing “AuthCredentialsInterface.”

So our next step will be to create a separate class for credentials and create a new custom client to use those credentials.

For that, create a new class called “Credentials” under the MauticPlugin\HelloWorldBundle\Connection namespace.

The class should look like the one given below:
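A sketch of the class (in the real plugin it implements the bundle's credentials interface so HttpFactory can consume it; shown standalone here):

```php
<?php

namespace MauticPlugin\HelloWorldBundle\Connection;

// In the real plugin, this class implements the IntegrationBundle's
// credentials interface so HttpFactory::getClient() can consume it.
class Credentials
{
    private $username;
    private $password;

    public function __construct(string $username, string $password)
    {
        $this->username = $username;
        $this->password = $password;
    }

    public function getUsername(): string
    {
        return $this->username;
    }

    public function getPassword(): string
    {
        return $this->password;
    }
}
```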

This is a trimmed version of the class, and you can find the full version here.

Now that we have completed the Credentials class, we need to create a client that will make HTTP requests. Typically, we don't need to create a separate client class if we don't have additional logic to handle. In such cases, we can just call the HttpFactory class and get the client like this:
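A hypothetical sketch of that direct call ($httpFactory being the bundle's Basic Auth HttpFactory service, and the endpoint URL a placeholder):

```php
// Build a Guzzle client straight from the credentials.
$credentials = new Credentials($username, $password);
$client = $httpFactory->getClient($credentials);

$response = $client->get('https://api.example.com/v1/contacts');
```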

In our case, apart from fetching data, we need to cache it and polish it to be easily used for Mautic’s Lead entity.

So we will create a new class called “Client” under the namespace MauticPlugin\HelloWorldBundle\Connection.

The job of the “Client” class is to get the object of ClientInterface (\GuzzleHttp\ClientInterface).

If you need the full class details, you can follow this link. Because we are kind and want to share more, we will quickly review a few methods that interest us and work with the Credentials class we wrote previously.
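The two methods discussed next might be shaped roughly like this (a sketch under assumptions: getApiKeys() is a hypothetical helper that returns the credentials saved by the configuration form):

```php
// Inside MauticPlugin\HelloWorldBundle\Connection\Client.

public function getClient(): ClientInterface
{
    // Hand our Credentials object to the IntegrationBundle's
    // Basic Auth HttpFactory, which returns a Guzzle client.
    return $this->httpFactory->getClient($this->getCredentials());
}

private function getCredentials(): Credentials
{
    // Hypothetical helper reading the stored username/password.
    $apiKeys = $this->getApiKeys();

    return new Credentials($apiKeys['username'], $apiKeys['password']);
}
```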

In the “getClient()” method, we call the “getCredentials()” method, which creates a “Credentials” object using the API keys.

Using the credentials object, we get the client via the HttpFactory service call.

So at the end of this phase, we have the following things:

  • Credentials object to pass into getClient() method.
  • A new Client class that wraps the get() method and fetches other configuration.
  • New Config.php file inside the “HelloWorldBundle/Integrations” folder to bring configuration and different integration settings.
  • Commit.


Step 3

We are now ready with the entire setup to store credentials and send requests. Now it is time to use them wherever we want.

In our current plugin, we have created a separate class called “ApiConsumer.” The reason is that we have several other get methods and API calls, so consolidating all the API methods into a single class is easier to manage.

To use our Client service, created via Client.php, we need to register it as a service. That way, we can reuse this class without worrying about anything else.

Create a new service called “helloworld.connection.client” and add it to the Config.php in the other services section.

Similarly, we need to add additional services for the ApiConsumer class to call from other services.

You can refer to the source code to view the entire ApiConsumer class. Here is a snippet of the get() method.
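In outline, it delegates to the Client service and decodes the response body (a sketch, not the verbatim class):

```php
// Inside MauticPlugin\HelloWorldBundle\Connection\ApiConsumer.

public function get(): array
{
    // $this->client is the helloworld.connection.client service.
    $response = $this->client->get();

    return json_decode($response->getBody()->getContents(), true) ?? [];
}
```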

As you can see, we directly use the Client service's reference and call the get() method from Client.php.

So at this point, we are done with the third step, where we used our authentication mechanism to fetch the data from the API.

You can refer to the commit to see the code for this step.

Conclusion

Now that we have the plugin ready to communicate with a third-party API to churn out more leads, let us thank the IntegrationBundle's authentication support.

You can find out more about the different supported authentication methods here.

Also, we have a third blog post coming up about how to manage and sync data coming from the API. So stay tuned!

Aug 11 2021

Introduction

The concept of a Headless CMS has been all the rage for quite some time. At Axelerant, we have been using Drupal as a headless CMS in many projects. Headless Drupal provides a JSON API for accessing the published content of Drupal, including menus via the Decoupled Menus module.

Since we will be building a cross-platform menu, it becomes necessary to talk about the mobile application ecosystem, which has changed considerably since the introduction of cross-platform technologies like React Native and Flutter. These technologies have made mobile application development a lot more accessible to web developers, and both have generated strong momentum in recent years. React Native has been easier for web developers to get started with due to its React roots, while Flutter uses Dart, which draws its syntax heavily from JavaScript but still has some differences.

In this tutorial, we will use the Flutter framework to render a material design-styled menu across Android, iOS, Web, Windows, and macOS. 

You might be inclined to ask why we chose Flutter instead of React Native. The simple answer is that we feel Flutter is more polished as a framework. For a more in-depth comparison between the two frameworks, you can check this.
 

Getting Started

Head over to flutter.dev and follow the instructions to install Flutter on your machine, and install VS Code if you don't have it already. Let us create a Flutter project by running:
flutter create drupal_flutter_menu

Open the drupal_flutter_menu folder in VS Code. The moment you open it, you will be prompted to install the Flutter and Dart plugins; go ahead and install them.

On the Drupal side, we need a Drupal instance running with the Decoupled Menus module installed and enabled. Before we move further, let us first look at the JSON returned by the Drupal menu API. If you navigate to the Drupal menu endpoint (https://<your-site>/system/menu/main/linkset) and look for any menu, in this case the “main” menu, the response JSON will look something like the following:
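For illustration, a linkset response for a small main menu might look like this (paths, titles, and the drupal-menu-* attribute names are illustrative and may differ by module version):

```json
{
  "linkset": [
    {
      "anchor": "/system/menu/main/linkset",
      "item": [
        {
          "href": "/",
          "title": "Home",
          "drupal-menu-hierarchy": ["1"],
          "drupal-menu-machine-name": ["main"]
        },
        {
          "href": "/about-us",
          "title": "About",
          "drupal-menu-hierarchy": ["2"],
          "drupal-menu-machine-name": ["main"]
        }
      ]
    }
  ]
}
```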

The output will vary depending on what links are present in your specific Drupal menu.

If you look closely at this peculiar-looking JSON, it is a special media type called application/linkset+json, which has recently been introduced by the IETF. This media type is a standard way to represent a set of links and their relations in JSON format. To learn more about this representation, head here. Our next step would be to parse this JSON in our Flutter code and then create a simple abstract data type that represents the parsed Drupal menu. But wouldn't it be better to have something prebuilt that makes our lives easy? Well, we have already gone ahead and created a simple Flutter package, drupal_linkset_menu, which takes a Drupal menu API URL or a JSON string and returns a Menu object, which we can then render in Flutter.

Let's add this package by running the following from the command line:
flutter pub add drupal_linkset_menu

This command will add the package to our pubspec.yaml file. The pubspec.yaml file is just like your composer.json file, which is used to manage dependencies.

The source code for a Flutter-based app resides inside the lib folder. We will only be working with the specially named file main.dart inside this folder. Let us delete all the code in the main.dart file and replace it with the following code, which will display “Hello World” in the center of the screen.
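A minimal main.dart along these lines might look like the following (a standard Flutter “Hello World” sketch):

```dart
import 'package:flutter/material.dart';

void main() => runApp(const MyApp());

class MyApp extends StatelessWidget {
  const MyApp({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    // A Material app with a Scaffold whose body centers a Text widget.
    return const MaterialApp(
      home: Scaffold(
        body: Center(
          child: Text('Hello World'),
        ),
      ),
    );
  }
}
```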

To run the app, click on the Run and Debug button in the Run and Debug section of the VS Code side menu, choose Dart & Flutter in the next step, and then choose Chrome.

Another way is to just type the following in the terminal/cmd:
flutter run -d chrome

Using Drupal Menu API To Create A Cross Platform Menu

This will run the app inside the Chrome browser. If you want to run it on Android, you need to have the Android SDK, and for iOS you need to have Xcode installed. If both are installed, then you can use:

flutter run android && flutter run ios

to run on the corresponding platforms. For more information on this, head over to flutter.dev.


Everything in Flutter is a widget! There are stateless widgets and stateful widgets; we will be working with stateless widgets today.

The code that we have put in the main.dart files does the following:

  1. It creates a Material app. Material is a visual design language that is standard on mobile and the web. Flutter offers a rich set of Material widgets.
  2. The main method uses arrow (=>) notation. Use arrow notation for one-line functions or methods.
  3. The app extends StatelessWidget, which makes the app itself a widget.
  4. The Scaffold widget, from the Material library, provides a default app bar, and a body property that holds the widget tree for the home screen. The widget subtree can be quite complex.
  5. A widget’s main job is to provide a build() method that describes how to display the widget in terms of other, lower-level widgets.
  6. The body for this example consists of a Center widget containing a Text child widget. The Center widget aligns its widget subtree to the center of the screen.


Now update the code in main.dart with the following code:

 Also, add a package called url_launcher by typing:
flutter pub add url_launcher

This package will allow us to open a URL when any link in the menu is clicked.

Let us break down step by step what the code adds:

  1. In the MyApp widget's build method, instead of showing “Hello World” text at the center, we have introduced a new widget called HomePage that will show two menus, the “main” and “footer” menus of our Drupal site.
  2. The HomePage widget is another widget that houses the necessary build method that describes how to show the two menus and a couple of helper functions.
  3. The getMenu function is responsible for interacting with the drupal_linkset_menu package's helper method called getDrupalMenuFromURL, which takes the API URL and the menu name/id and returns a Menu object that is used to construct the UI.
  4. The two functions buildMenu and buildMenuItem are used to recursively build the UI for the menu. A built-in Flutter Material widget called ExpansionTile is used to create the menu items.
  5. The build method of HomePage contains a Column widget that lays out its children vertically; it is analogous to how flexbox works on the web. The column has two FutureBuilder widgets that call the getMenu function. Until getMenu returns a Menu object, a CircularProgressIndicator widget is shown, and when the Menu object becomes available, the menu is created.
  6. In buildMenuItem, we use a GestureDetector to listen for taps, and when a mouse click or tap is performed on a menu item, the URL is launched.

Run again, or hot reload by pressing “r” on the command line, to see the changes.

Conclusion

The aim of this tutorial was to give a sense of how quickly we can build a native cross-platform app with Flutter and the new Decoupled Menus API. You might be wondering why we didn't talk about running the project on Windows and macOS: the support for both these platforms is still in beta, but as an exercise you can still run the project on Windows and macOS by switching from the Flutter stable branch, for which more information can be found here.

All the code for this project can be found on GitHub.

Aug 11 2021

Open-source has the power to change the world, but, as we depend on it for democratic innovation, open-source also depends on us to thrive. At Axelerant, we know and own this; hence we’re constantly engaging in different open web communities, including Drupal’s.

Why are we writing this? First of all, we are always keen to shine a light on our team members because our people-first culture makes Axelerant succeed. Second, in a knowledge sharing spirit, we are willing to put out what has worked for us (and what we struggle with) regarding contributing and our community involvement.

We are celebrating Drupal's 20th Anniversary, and we are proud of being part of that history for over a decade. What better way to celebrate than recognizing and sharing the stories of the people involved, the makers who keep the ball rolling.


Hussain Abbas
Director of Drupal Services

"Celebrating our people and the community has been among our values since the beginning. Drupal’s 20th anniversary is one of those occasions where both of these values come together in demonstrating Axelerant’s commitment to be a productive part of the amazing Drupal community through its team."

Here, we want to share a few stories from team members who recently contributed and inspired us with their Drupal journey.

Lessons learned in our Monthly Contribution Meetups

We started Monthly Contribution Meetups in 2019 to foster a culture of mentoring and giving back. Our goal is to get more people contributing to Drupal consistently and to provide the tools to those who want to do it for the first time. These meetings are an excellent space to seek out support, share findings, and learn, and they bring the opportunity to know other team members, their Drupal journeys, and their motivations. From these sessions, we continue to grasp the familiar obstacles people encounter when contributing, ideas on how to overcome them, and the benefits that come with getting involved.


November’s monthly contribution meetup

Thirst for learning overcomes time constraints


Hansa Pandit
Frontend Engineer - L2

“I was first introduced to Olivero reading about it on different blogs. That caught my eye. I read the documentation, got my set up ready, jumped right into a coding sprint, and assigned myself an issue. I wanted to work on a feature, so when the theme went into the core, I would be able to say: that is the part I built.”

Hansa has been on Drupal.org for over two years, and besides other contributions, she's been actively involved with the Olivero theme initiative.

Time management was a big challenge for Hansa, especially since she gave Olivero the same priority level as other work-related projects. But the logic was clear; she knew that if she was investing her time towards contribution, she needed to benefit from it by learning.

And she declares the experience made her technically stronger, “I learned a lot of new skills. Other projects I worked on supported specific client's needs. Still, for Olivero, we had to make sure we were theming every single module supported by Drupal while making sure we met all the accessibility standards.”

Olivero is now in core. We are proud of Hansa, and we celebrate her and everyone involved in this achievement.

Find the right initiative, and don’t do it for the credit


Mohit Aghera 
PHP/Drupal Architect - L1

It is important to focus on learning and exploring instead of doing it for the credits: “I decided to focus on this initiative because I was interested in learning about writing test cases for Drupal Core. It was a quick way to get introduced to this, and also a great opportunity to explore almost every feature of the core, instead of focusing on a specific module.”

Mohit is one of our most experienced Drupal engineers and contributors; hence he’s continuously mentoring the team. In our last meetup, he explained his motivations and experience with the Bug Smash Initiative; “it’s a great initiative to devote energy to, because it is well managed. Maintainers do an excellent job triaging issues,” he argued. We often hear that not knowing where to start or feeling overwhelmed by the issue queue translates into demotivation within weeks. Counting on careful planning and mentoring makes life easier for everyone, which is why finding the right initiative becomes essential.  

A second factor to consider while contributing is the right motivation. We always remind ourselves of the opportunities that come with contributing: personal branding, sharing your work, showcasing a visible portfolio, and “ultimately, if you want to learn Drupal Core, contributing is the best way to do it,” he insists.

Clear expectations help first-time contributors


Abhay Saraf
PHP/Drupal Engineer - L2

When asked what could be done differently to motivate others to join these sprints, he told us, “being clear about expectations and providing resources that display a step by step before the event would make the experience less intimidating.”

As founding members of the Drupal India Association, we also look to align our mentoring and contribution efforts with India’s larger Drupal community. Organizing and hosting monthly contribution weekends is one way to boost a sustainable contribution culture, and Abhay recently joined this initiative for the first time. From his experience, we confirmed that meeting folks, running into smiling faces, and having the space to give back without the pressure of getting lost or making a mistake is fundamental to onboard newcomers. “I had a good experience because I already had a list of prioritized issues. I could work with a free mind since I knew that I'd get the guidance needed if I had any doubts. Also, I liked the flexibility of this event, it goes on for a day, you can dedicate any amount of time you can, even if it is just an hour, it would still be worthwhile,” he shared.

Contribution = Recognition = More contribution


Gaurav Kapoor 
PHP/Drupal Engineer - L2

Gaurav's efforts were rewarded with a scholarship to attend DrupalCon Amsterdam 2019. Through this contribution journey, he gained vast Drupal knowledge, “now I focus on mentoring and sharing knowledge, so others can also leverage all you can gain from contributing,” he says. 

Gaurav's Drupal journey started right after college, when he decided to leverage his spare time by joining a two-person startup. After learning Drupal, he soon realized that contributing to the community would build the company's reputation as trusted experts, and that was the initial driver. Eventually, what sparked a community spirit was getting noticed and recognized. He's been ranked among the top 30 contributors and recognized in Dries' post “Who sponsors Drupal development?” for the past three years.

Events and the power of networking


Kunal Kursija
PHP/Drupal Engineer - L3

Kunal has the habit of surfing through different channels that list upcoming events (Drupical, Drupal.org, Drupal Slack), so when he found out about BADCamp 2020's call for papers, he decided to go for it. A two-way process started: “I began to review everything I had learned recently, or topics I wanted to learn about.” From there, Kunal came up with a list of topics and submitted them.

 

Speaking at events has many benefits, especially to those interested in being seen as an authority in their fields. Presenting sessions nourishes the community with knowledge and best practices and builds the speaker’s reputation and network. That was certainly the case for Kunal. “I first heard about BADCamp while attending DrupalCamp London. Someone I met there told me BADCamp is one of the best Drupal events. That image struck me and has stayed with me since then.” 

 “Of course, it was exciting to learn my session had been selected. I was disappointed I couldn’t attend the event in person. However, I enjoyed getting introduced to other BADCamp speakers, and it was great to participate in such a big and important event.”

To many more years of Drupal

We recognize that our monthly meetups serve the purpose of keeping an ongoing conversation around contributions, inspiring and supporting team members, and promoting those who actively get involved. Our team works with a contribution-first approach, and this practice grants us a place at the top of the ranking of organizations supporting Drupal. And yet, there's more we need to do to build a sustainable contributing culture. We still find that most people who haven't contributed before judge the onboarding process as too arduous, with time constraints following soon after. Even with mentorship support available, the steep learning curve poses a hurdle to conquer.

We are continually discussing and exploring initiatives to encourage contribution, from creating a role for a full-time contributor to gamification aspects around tracking contributions or mentoring team members on the bench between projects. 

Today we introduced a select few stories, evidence that confirms again and again that the key ingredient and the strength of this 20-year-old open-source project is people.

We are excited to be part of this celebration and would love to hear about your contribution strategies and ideas. What’s your preferred way to give back to Drupal?

Don’t forget to join the celebration on social media!

P.S. See you at the Global Contribution Weekend happening 29-31 January 2021.

Aug 11 2021

In a recent project, we had a requirement from one of our clients to validate data in CSV files based on custom requirements. The validated CSV would then need to be imported into various content types in Drupal 8.

In this article, we will look at the requirement, the library, the architecture of the custom module, and the different components of the module with some code samples, and finally add some ideas on how this module could be made more reusable and even contributed.

Introduction

Our client is a well known international NGO with offices worldwide, each with different types of data management systems and frameworks. They wanted a centralized system to manage the data from each of these offices. Having concluded that Drupal 8 was the ideal solution to implement that centralized system, the challenge was to set up a migration pipeline to bring in data from all of the offices and their varying frameworks. Consequently, the files generated by these systems needed to be validated for specific constraints before being imported into our Drupal system.

Challenges and Goals

Following are the goals that the system should meet:  

  1. The CSV files were in a custom format, and there were multiple files with different structures that needed to be handled accordingly. Each column needed its own validator. 
  2. The files needed to be validated for errors before they could be imported, and the errors needed to be logged with line numbers and relevant error messages. 
  3. The validation had to be triggered automatically when the files were downloaded from a central location. 
  4. Notification emails had to be sent on successful and failed validation to the IT admins. 
  5. After successfully validating the files, the validator needed to trigger the next step of the process, which is importing the files.

The main challenges

  1. The validation had to cross-reference the incoming data with existing data and also with data in different files (referential integrity checks). 
  2. We also had to check the uniqueness of certain columns in the CSV files. Doing this in a database is pretty easy and straightforward, but here it had to be done before inserting the data into the database.

Step 1: Choosing a CSV reader library

The first step was to figure out a PHP-based CSV reader library. League CSV was found to be the best option for the following reasons:

  1. It is managed via Composer and is already used by the Migrate module in Drupal core, so no additional code needed to be added for the library to work.
  2. The library covered many common scenarios like iterating through rows of the CSV, getting the field values and headers, and streaming large CSV files.
  3. And finally, it was implemented in an object-oriented way.
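As a minimal sketch of how the library reads a file (assuming the league/csv package is installed via Composer and the file has a header row; the file path and column names are illustrative):

```php
<?php

use League\Csv\Reader;

// Load the CSV and treat the first row as the header.
$csv = Reader::createFromPath('/path/to/offices.csv', 'r');
$csv->setHeaderOffset(0);

// Iterate over the remaining rows; each record is an
// associative array keyed by the header names.
foreach ($csv->getRecords() as $offset => $record) {
  // $offset is the row number, useful for error logging,
  // e.g. $record['office_id'], $record['country'], etc.
}
```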

Step 2: Architectural requirements

Below are the requirements we had concerning the architecture of the code:

  1. The code needs to work as an independent service so that it can be called from different places in the codebase, invoking validation wherever required.
  2. The validations need to be as generic as possible so that the same validation rule can be reused for different fields in the same CSV or in others.
  3. We need to have an extensible way to specify the validation to be done for each field. For example, whether a specific field can be allowed to be blank.
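In Drupal, such an independent service is declared in the module's *.services.yml file. A minimal sketch (the module name, class, and injected services are hypothetical):

```yaml
# csv_validator.services.yml
services:
  csv_validator.validator:
    class: Drupal\csv_validator\CsvValidatorService
    arguments: ['@database', '@event_dispatcher', '@logger.factory']
```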

Step 3: Designing the components of the validator

To satisfy the above architectural requirements, we designed the validator module into the following sub-components:

The main service class

Below are the main responsibilities of this class:

  1. Load the CSV library and loop through each of the files in a particular folder.
  2. Use the methods supplied by League CSV to read the file into our variables. For example, each row of the file is stored in an array with an index for each column's data.
  3. During processing, the filename is taken in and checked to see if the validator method in the Validator class matching the filename exists.  
  4. If the method exists, then validation is done for the file and errors are logged into the error log table.
  5. If there are no errors, the class triggers the next event, which is migration using a predefined custom event via the Event API of Drupal. 
  6. This also passes the status of the import to the calling class so that emails can be triggered to the site admins.
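Steps 3 and 4 above can be sketched in plain PHP (the class, method, and column names are hypothetical):

```php
<?php

// Hypothetical validators class: one method per CSV file type.
class Validators {
  public function validateOffices(array $row): array {
    $errors = [];
    if ($row['office_id'] === '') {
      $errors[] = 'office_id must not be blank';
    }
    return $errors;
  }
}

// Derive a validator method name from the file name,
// e.g. "offices.csv" -> "validateOffices".
$filename = 'offices.csv';
$method = 'validate' . ucfirst(pathinfo($filename, PATHINFO_FILENAME));

$validators = new Validators();
if (method_exists($validators, $method)) {
  // Validate one row; errors are logged with the line number.
  $errors = $validators->$method(['office_id' => '']);
}
```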

The Validators class

Here, we basically assign constraints for each file type in a method. The input to the validator class would be a complete row.  

The Constraints class

This class contains the individual constraints that check if a particular type column meets the required criteria. These constraints are methods that take in the column value as a parameter and return an error message if it does not meet the criteria for that column type. This class will be invoked from the validators class for each column in every row.
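A minimal sketch of such constraint methods (the names and rules are illustrative): each takes a column value and returns an error message, or NULL when the value is valid.

```php
<?php

class Constraints {

  public static function notBlank(string $value): ?string {
    return trim($value) === '' ? 'Value must not be blank.' : NULL;
  }

  public static function isDate(string $value): ?string {
    // Round-trip the value to reject dates like 2020-02-30.
    $d = \DateTime::createFromFormat('Y-m-d', $value);
    return ($d && $d->format('Y-m-d') === $value)
      ? NULL
      : 'Value must be a date in Y-m-d format.';
  }

}
```

The validators class invokes the appropriate constraint for each column in every row and collects the returned messages.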

The Error log

As its name suggests, the validator needed to capture the errors and log them somewhere. We defined a custom table using the database hooks provided by Drupal. A custom view was defined in code to read the data from this table. The errors captured by the constraint class were logged into the database using this logger.

Eventsubscriber and mail notification

We needed the validation to be auto-triggered when the files were downloaded. To achieve this, we tapped into Drupal’s EventSubscriber and Response APIs. 
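A minimal sketch of such a subscriber (the class name and the routing of the download detection are hypothetical; only the subscription to the kernel response event reflects the actual approach):

```php
<?php

namespace Drupal\csv_validator\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\KernelEvents;

// Hypothetical subscriber: reacts once the download response has
// been generated and triggers validation of the fetched CSV files.
class CsvDownloadSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents() {
    return [KernelEvents::RESPONSE => 'onResponse'];
  }

  public function onResponse($event) {
    // Detect the download route, then call the validator service
    // and mail the site admins with the result.
  }

}
```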

Referential Integrity checks

Most of the columns did not have any relation with existing data and could be validated on the fly. However, some of the data had to be checked for corresponding references, either in the database or in another CSV file. We did this as follows.

  1. For those values which act as a parent, dump them into a temporary table, which will be cleared after validation is completed.
  2. When we arrive at another CSV with a column that references values dumped above, then we query the above table to check if the value is present. If yes, return TRUE.
  3. If the value is not present in the temporary table, then we search the Drupal database as the value might have been imported as part of the previous import. If not, then we throw a referential error for that row in the CSV.
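The lookups in steps 2 and 3 can be sketched with Drupal's database API (the table and column names are hypothetical):

```php
<?php

// Check the temporary table of parent values first.
$connection = \Drupal::database();
$exists = (bool) $connection->select('csv_validator_parents', 'p')
  ->condition('p.parent_id', $value)
  ->countQuery()
  ->execute()
  ->fetchField();

// Fall back to previously imported data, e.g. a field table.
if (!$exists) {
  $exists = (bool) $connection->select('node__field_office_id', 'f')
    ->condition('f.field_office_id_value', $value)
    ->countQuery()
    ->execute()
    ->fetchField();
}

// If still not found, log a referential error for the row.
```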

The code snippets are available here.

We used the migrated data as a source for a headless backend using REST. For more details on the specifications, refer to our blog on how to validate API response using OpenAPI3.

Future scope and ideas to extend this as a module by itself

We have written the module with an architecture where the validators can be reused, though with some coding effort. Below are changes that would make this module ready to be contributed.

  1. Add configurations to have a list of files that need to be validated.
  2. Each file will have an option to add the fields that need to be validated and the type of data (similar to what you have when creating content type).
  3. Based on the above list of files and field types, we can validate any number of CSVs with any number of columns. 
  4. We would need to modify the above classes to fetch the columns' data types and call the respective constraints for each CSV.

As a result of doing the above changes, anyone will be able to use this module to validate CSV files with their own columns.

We hope this blog helped you understand this module, how it can be made more reusable, and even contributed. Share your experience in the comments below! 

Aug 11 2021

As expected, Drupal 9.1 was released on schedule at the closure of 2020. We have already talked about the Drupal 9 release and how it’s a testament to the predictable and reliable nature of the Drupal release cycle. Drupal 9.1 takes a step forward by adding more features and releasing them as predicted.

In this blog, we will be discussing the new improvements and more that will follow. 

Is it worth upgrading?

The Drupal 9.1 stable release was out as expected on Dec 2nd, 2020. We previously advocated that if you are on Drupal 8.9, you needn’t hurry to upgrade to Drupal 9.0 as you would not see many new features. But that’s changed.

Drupal 9.1 adds exciting features and updates along with support for PHP 8 (we have previously written about making Drupal 9 compatible with PHP 8).

It’s also worth upgrading as Drupal 9.1 brings significant changes in the user interface for both sighted users and assistive technology.

New features

Olivero theme

The much-awaited experimental frontend theme Olivero has been added to Drupal core in beta. As a replacement for Bartik, this is a modern, clean theme planned to become the new default Drupal theme later.

This particular theme is named after Rachel Olivero (1982-2019), the head of the organizational technology group at the National Federation of the Blind. She was a well-known accessibility expert and a Drupal community contributor.

Additions to the Claro theme

Claro was added as an experimental theme in Drupal 8.8. And now, Drupal 9.1 has added designs for various key pages like the extensions administration page, views administration, and status report. Also, the media library received Claro styled designs too.

Composer 2 and PHP 8 support

Drupal 9 fully works with Composer 2, and it is strongly recommended to update. Many of the popular plugins have also been updated. If the one you use doesn’t have updates, please help the plugin author with a PR to add support (it’s quite easy). The new release comes with a significant improvement in performance and also reduces memory usage.

Drupal 9.1 has added support for PHP 8. PHP 8 brings in a lot of new language features, and even though Drupal core isn’t using any of them (it still supports PHP 7.3), you could use features like union types and more in your custom code. Further, it’s likely that PHP 8 will be a requirement for Drupal 10, due for release in 2022.

Additionally, the user experience has been improved by making the content load faster as the images rendered by Drupal with known dimensions will now be set to lazy-load automatically. 

How to update from a previous version of Drupal

Now, this begs an important question: how will the current users of Drupal 7 or 8 migrate to Drupal 9.1? And also, if users have already migrated to Drupal 9, is there anything for them to execute with this release?

Every version of Drupal demands a different approach to migration. The idea is to pick the right Drupal migration strategy. Let’s look at how to migrate from different versions in this section. 

Upgrade from Drupal 7

Drupal 7 users can continue to migrate to Drupal 8.9, or migrate to 9.0 or 9.1 directly. Migrating directly to Drupal 9/9.1 will help them skip a step. The upgrade path for multilingual sites is stable in Drupal 8.9, 9.0, and 9.1!

For more on how to upgrade from Drupal 7, check out the ultimate guide to Drupal migration

Upgrade from Drupal 8

For Drupal 8 users, there’s still time to step up to the latest 8.9 version until the end of Drupal 8, i.e., in November 2021. The bug fixes will continue, and the next one is scheduled for January 6, 2021. 

Sites on Drupal 8.8 will no longer receive security coverage. This means moving to Drupal 8.9/9 becomes crucial from this update onwards. 

According to Drupal.org, of the top 1000 most used drupal.org projects, 85 percent are updated for Drupal 9, so there is a high likelihood that most of the modules and themes you rely on are compatible.

Upgrade from Drupal 9

Drupal 9.1 is a minor release of Drupal 9. Sites can update from earlier Drupal 9 versions to utilize these new features without breaking backward compatibility (BC) for public APIs. While Drupal 9 will keep requiring Symfony 4, Drupal 9.1 already includes the adjustments required to support Symfony 5. 

All these updates are underway to make Drupal 9 forward-compatible with Symfony 5 and 6 (not yet released). And also, as Drupal 10 is planned for mid-2022, these new upgrades target an excellent growth curve.

Running the update

We will only talk about updating from Drupal 8.9 or Drupal 9 in this section. Updating across multiple versions is possible but needs additional care and consideration, which we won’t cover here.

  • First of all, if you are already using the Olivero theme in your project, remove it by running the command below. We need to do this because Drupal 9.1 includes Olivero in core.

$ composer remove drupal/olivero

  • To begin an upgrade from Drupal 8.9 or Drupal 9, run the following command:

$ composer require drupal/core:^9.1 drupal/core-composer-scaffold:^9.1 --update-with-dependencies

  • If your project is using drupal/core-recommended, use that instead of drupal/core in the command above. Also, for the above, your project must be using the recommended Drupal Composer template. It is quite likely that the command might throw some dependency-related errors. 

Since there are a wide variety of possible dependency issues, we won’t cover everything here. But to get started, try replacing the --update-with-dependencies flag with the --update-with-all-dependencies flag in the command above and try again.
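For instance, a fallback attempt for a project on the drupal/core-recommended template might look like this (version constraints as above):

```shell
composer require drupal/core-recommended:^9.1 \
  drupal/core-composer-scaffold:^9.1 \
  --update-with-all-dependencies
```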

Drupal 9.1 seems to be a promising update for users ready to take the plunge. If you are still not sure, give us a chance to convince you why upgrading to Drupal 9 is crucial now.

Share your Drupal 9 experience with us and watch this space for more insights!

Aug 11 2021

Traditionally, Drupal web applications are built using various entities like Content types, blocks, components using Layout Builder, and then the product is made available to the end-user on the front-end using HTML, CSS and JavaScript. The team usually starts with backend stories related to building various content types, related roles, and permissions, and then the frontend team picks it up to make the site more usable and accessible as per the design requirements. 

Of course, with component libraries like Storybook, Fractal, PatternLab, and with designs in place, the frontend team can start implementing them as component libraries in parallel, which are later integrated with Drupal. 

In this blog, we will be talking about testing the application and the following topics:
 

01. Automated Testing in Drupal

02. Applitools and Drupal

03. Interpreting the automated test execution results

04. Other tools in the market

05. The Applitools Advantage

06. What next?

Automated Testing in Drupal

Behat, PHPUnit, Drupal Test Traits (DTT), and NightwatchJS are the most widely used tools for automating tests with Drupal. There are several reasons why these tools are popular within the Drupal community: they are PHP-based frameworks (apart from NightwatchJS), offer ready-to-use Drupal extensions/plugins, and have huge community support. With these tools, one can automate unit, integration, and acceptance-level tests.

But what about automating the visual tests? That’s the missing tip of the pyramid, which we will address through this blog post. 

We have used Cypress to automate the browser and Applitools for AI-based visual validation. Our reasons for using Cypress over other tools are many, including the following:

  1. One can quickly get started with writing actual tests with Cypress as compared to Selenium.
  2. Cypress enables fast-paced test execution.
  3. Our POC with Cypress + Drupal proved that testing the Drupal side of things can also be achieved with Cypress. 
  4. Cypress offers harmonious integration with Applitools. Having said that, please note that Applitools does have SDKs for Selenium PHP and NightwatchJS and many more just in case you have your existing automation functional suites written using any of the other testing frameworks.
  5. Since Cypress is a JS-based framework, developers can also contribute to writing automated tests.

The site used to demonstrate the concept is Drupal Umami, the major advantage being that the site is already built, so we can focus directly on writing automated visual tests without having to worry about creating these pages.

NOTE: If you are completely new to the concept of automated visual validation testing, then please refer to the course “Automated Visual Testing: A Fast Path To Test Automation Success" on Test Automation University from Angie Jones.
 

Applitools and Drupal

Applitools provides an SDK for Cypress, which makes it very easy for us to integrate automated visual validation tests in the same functional test suite created using Cypress. The steps to configure Applitools with Cypress are straightforward and you can refer to their official documentation for more details. Let’s take a look at the test written for the Homepage. The gist is shown below:
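In case the embedded gist does not render here, a minimal sketch of such a test using the Applitools Eyes SDK for Cypress looks like this (the app and batch names are illustrative):

```javascript
describe('Umami Homepage', () => {
  it('visually validates the Homepage', () => {
    // Open an Eyes session for this test.
    cy.eyesOpen({
      appName: 'Drupal Umami',          // illustrative name
      batchName: 'Homepage visual tests',
    });

    cy.visit('/');

    // Capture the full page, not just the current viewport.
    cy.eyesCheckWindow({
      tag: 'Homepage',
      target: 'window',
      fully: true,
    });

    // Close the session and send results to the dashboard.
    cy.eyesClose();
  });
});
```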

The test in the above example launches the Homepage and verifies the UI using the “checkWindow()” command. The checkWindow() command takes in the following parameters:

  1. tag: The name of the test.
  2. target: Whether the whole window or a particular element should be checked.
  3. fully: The scope of the capture, either the current viewport or the entire page. 

That’s it! And you are ready to execute the first automated visual test. So, let’s execute it using the command `npx cypress run`, assuming that the baseline image was captured correctly on the first run.

Here’s a small screencast for the same.


Interpreting the automated test execution results

Now that the tests have been executed let’s look at the execution results, which brings us to the Applitools dashboard. Here’s the passed test result. 

test results passed

The tests passing is a good thing. However, that’s not the primary reason for having automated tests. You want the tests to correctly detect discrepancies as close as possible to the point of introduction. For the purposes of this demo, we intentionally introduced a couple of CSS bugs on the Homepage through Cypress’s invoke command. Once the script launches the Homepage, CSS changes are made at run-time in the browser and then the screenshots are captured as below:

Let’s re-execute our test to see how the tool catches these bugs. The tool correctly highlighted (in pink) the three errors we introduced on purpose:

  1. The “Search bar” in the header has shifted from its original position.
  2. The font color of the “View recipe” button on the banner has changed.
  3. The “Find out more” link in the footer has changed.
      reference image
test run image

We confirm that these are indeed bugs, reject the image (marking the test as failed), and then report the bugs in Jira directly from Applitools. Additionally, the root cause analysis feature from Applitools helps us quickly identify the visual (UI) issues, in this case caused by CSS changes, as shown in the images below: 

RGB bug margin bug

Until now, it was only about one browser. However, if we really want to leverage the automated tests written for validating the UI, then the true benefit lies in being able to execute these tests across several browsers, operating systems, and devices. Verifying that the UI looks correct on one browser/device doesn’t guarantee that it will look exactly the same on all other browsers/devices, because the rendering may differ. 
 

Cross-browser/device/OS testing using the Ultrafast Test Cloud

Now that we have seen and understood how automated visual testing is done with one browser, let’s discuss some points that need to be accounted for to scale up your automated testing:

  1. Configure your suite to execute automated tests across several browsers and devices. However, this not only increases test authoring time but also creates a dependency on technical staff, as the logic to run all tests across several browsers and devices needs to be coded.
  2. Linear automated test execution increases execution time, thereby resulting in larger build times and delaying the feedback of the application to the entire team. 
  3. Maintain an in-house grid for parallel execution or purchase additional subscriptions provided by Cloud-based solutions for parallel execution of automated tests.

This brings us to discussing the Applitools Ultrafast Test Cloud, which inherently takes care of the above points.

By using Applitools Ultrafast Test Cloud, you can execute the automated visual validation tests across several browsers, operating systems, and devices of your choice at lightning speed: although the test runs only once on, say, Chrome (assuming ChromeDriver is configured in the tests), the pages are captured in parallel, in the background, for all the configured browsers and viewports.

So, let’s write some more tests for Articles and Recipes landing and listing pages on the site. Let us also execute these tests in parallel on several browsers/devices as configured below using Applitools Ultrafast grid solution:


Here are the Ultrafast Grid Test Results across several browsers and devices. 

test results

To be precise, here are the details:

  1. Number of browsers and devices = 7
  2. Total number of functional tests = 6
  3. Total number of visual tests = 7*6 = 42
  4. Time taken to complete execution on Ultrafast grid - 5 minutes 2 seconds

Execute once and see the results on so many browsers and devices. Now, that’s what I call truly automating your visual tests.

Also, notice that using the Applitools Batch feature, we have logically grouped the Test Results to make it extremely readable. 
 

Other tools in the market

There are many other Open Source tools like BackstopJS, Shoov, Gemini, and the visual regression service for WebdriverIO, to name only a few, but none of these tools has the Applitools advantage; we will look at a few of the many reasons in the coming section.
 

The Applitools Advantage

  1. The AI-driven image comparison algorithm is incredibly accurate and avoids the false positives that occur in pixel-to-pixel comparison. The time it would take to troubleshoot false positives, especially on full-page screenshots, would be cost-prohibitive. Pixel-based comparison is OK for verifying small components over a short period of time; otherwise, it breaks down.
  2. Seamless integration with your existing functional automation suite through several web, mobile, screenshot, desktop, and codeless SDKs, available for testing and automation frameworks like Cypress, Selenium, Nightwatch, WebdriverIO, and Appium, and for languages like PHP, Java, JavaScript, C#, and Python, to name only a few.
  3. With the help of Ultrafast Test Cloud, your entire web application can be tested for visual accuracy at a fast speed (as the tests run only once whereas the visual rendering validations happen in parallel on several browsers and devices in the background) with guaranteed reliability and security.
  4. Applitools also provides out of the box integration with the Storybook component library for React, Vue and Angular.
  5. Learn more about the following Applitools Eyes integrations on their site:
    1. Integration with GitHub
    2. Integration with Microsoft Azure
    3. Integration with GitLab
    4. Integration with BitBucket
    5. Integration with Jira
    6. Integration with Email and Slack for notifications
       

What next?

Sign up for a free account with Applitools and feel free to clone this repository to try it out on your own. Integrate automated visual validation tests in your project to help you build and release visually perfect web applications or websites confidently and at a faster rate.

Want to know more about automated testing? Learn how early Automated Testing design helped us in upgrading a Higher Education platform. The OpenScholar product, built as a complete layer above Drupal, needed its customizations to be thoroughly tested. The objective was not just to verify functionality accurately but also to achieve faster feedback at all levels of the implementation phase.

Aug 11 2021

In the last article, we discussed the changes required to get Drupal 9.1 running on PHP 8. At that time, we got the Drupal 9.1 dev release working on PHP 8.0.0 RC4 with a few patches. Since then, a lot has changed, with many of those patches committed and Drupal 9.2 dev open for development. But we’ll talk about all of that at a later date. Today, let’s look at installing some of the common PHP extensions and configuring them to run with Drupal.

We left off at a point where we have plain Drupal 9.1 running on a plain PHP 8 RC4 setup. Drupal doesn’t require any extensions that are not in PHP core, which means we only had to enable extensions like gd, MySQL, and others to have Drupal 9.1 running. With that, we were able to install Umami and use the site without any problems at all. To enable those extensions, we only needed the docker-php-ext-enable script, which is part of the PHP base Docker image. See the Dockerfile in the reference repository for the source code (lines 41-52). Installing other extensions that are not part of PHP core is not quite that simple. Think of it this way: if a module is present in Drupal core, you can install it right after downloading Drupal. But if it is a contrib module, you have to download and install it separately. It’s the same with PHP extensions.

Why test with extensions?

Just as you probably wouldn’t have a Drupal site without at least one contrib module, you probably wouldn’t have a PHP installation without a few of the common extensions. Drupal core utilizes some of these extensions when they are available (such as APCu and YAML), which yields better performance. This means that even though the extensions are not technically required, you would most likely have them.

I started with the extensions I almost always install on sites I work on: APCu, YAML, and Redis. Drupal core doesn’t use Redis, but I almost always install the Redis module for caching, which requires this extension. It made sense to test whether it worked on PHP 8 (both the module and the extension). As for the other two, Drupal core uses the APCu and YAML extensions for better performance if they are available. Again, it is a good idea to test Drupal with these extensions installed.

Installing extensions

Typically, we would use PECL to install any extensions we needed. PECL is a repository for PHP extensions, very much like Composer is for PHP packages. With PECL, we would just need to run a command such as pecl install redis to install the extension. You can see this being used in lines 53-58 of the Dockerfile.

pecl install apcu redis yaml

This is not as simple with PHP 8. PHP 7.4 removed default support for PECL and the official Docker image removed the command in PHP 8 images (it applied an explicit option to keep it for PHP 7.4).

Alternative tool to build extensions

I found another tool called pickle, which was intended to replace PECL but became dormant as well. I noticed some activity on the project, including a relatively recent release, and I tried that first.

The tool worked very well for APCu and Redis extensions. However, for YAML, it failed because it could not parse YAML's beta version number (2.2.0b2). I found that this was fixed in a recent commit but that would have meant that I would need to build pickle in my Docker image rather than just downloading and using it. I was not looking to go that route.

Building the extension manually

This left me with only one option: building the extensions myself. Fortunately, this turned out to be much simpler than I thought. You can see the steps required for each extension in lines 54-67 in the reference repository’s Dockerfile. For each extension, there are essentially just two steps:

  1. Clone the source code of the extension
  2. Run phpize, make, and make install to build the extension

We need the PHP source available to use the above tools and this is easily achieved using a helper script in the Docker image. You can see it being used in line 39 in the reference repository. Once we build the extensions, we clean up the PHP source to keep our Docker image size small. This is what the complete step looks like:

docker-php-source extract;
git clone https://github.com/krakjoe/apcu.git; cd apcu;
phpize; make; make install; cd ..;
# ... Install other extensions same way
docker-php-source delete;

Enabling extensions

Now that the extensions are installed, we can use the same script (docker-php-ext-enable) as earlier to enable the extensions. In our reference repository, you can see this done on lines 69-72. Thanks to these helper scripts, we now have our extensions enabled and configured for both the PHP module (for Apache) and the CLI. This can be verified by running the following command:

php -m

The above command will list all the enabled PHP extensions (internal ones as well) and you should be able to find apcu, redis, and yaml in the list.

Bringing it together

Now, we need to make sure that Drupal works with the above extensions. Since APCu and YAML extensions are used by the core, we should see any issues immediately. We can even verify that Redis is connected and APCu is being used by looking at the status report page, as shown in the following screenshot. 

Tweet from Hussainweb

For Redis, we need to install the Drupal module as well, since Drupal core doesn’t use it directly. We will discuss installing modules in another post.

PHP 8 remains an important milestone in PHP history, not just because of cool new features but also because it established a trusted release cycle promised at the release of PHP 7. A predictable release cycle helps build trust and also consistently brings new features and innovation to the product. We saw that with Drupal 8’s regular six-monthly release cycle and we will see that with PHP as well.

Aug 11 2021

I joined Axelerant with the thrilling purpose of cultivating and fostering their participation and contribution to the Drupal community. And I did, despite a daunting obstacle: I don’t code, and until then I had only heard a few things about Drupal.

As soon as I began this journey, I verified that Drupal (the technology) is completely interlinked with the community that sustains it. To understand one you must know the other. Though you can get a kick out of reading about the advantages of headless Drupal or the improvements that will come with the release of D10, it is obvious that what holds the project together is the willingness of its members to advance, connect, and share knowledge. Hence, the motto “Come for the code, stay for the community.”

Everybody has their first time

In every community, face-to-face encounters are essential to solidify our personal and professional bonds. They are mandatory to get the true sense of a community. Therefore, as soon as I embarked on this endeavor I knew I needed to add an event experience to my Drupal immersion.

Yet, the global crisis unleashed by the COVID-19 pandemic is forbidding us all to attend large and exciting live events. Was I supposed to meet this need online, while surfing a context of widespread virtual fatigue? The truth is that, although we are all a little tired of facing our screens, technology has proven its capability of bringing us close, even eliminating borders. As a result, I decided to sign up for free for my first Drupal Camp, and I was lucky to have debuted with my attendance at the Bay Area Drupal Camp, or BADCamp, which describes itself as the world’s raddest Drupal camp.

Pierina at BAD Camp 2020

Travelling from Buenos Aires to the world’s raddest Drupal camp

The Bay Area Drupal Camp is “An annual celebration of open-source software normally based in Berkeley, California” that has already trained over 3,000 Drupalers. This year, for their very first time, they went the extra mile and went fully virtual from October 14th to 17th.

From day one, the organizers ensured the attendees were aware of the Drupal Code of Conduct, and indeed all the interactions I had and the overall environment were imbued with respect and collaboration.

The first couple of days were exclusively for summits and training. I joined for the last two days of the event to attend sessions. The schedule offered a wide variety of topics for all levels of experience, which allowed me to achieve my goal: understanding the full range of knowledge-sharing and learning that happens in these events, without feeling like an outsider. I was able to participate in meetings related to non-code contributions from which I earned valuable resources.

Thank you for making it happen!

Thanks to the organizers and volunteers who made it happen. Surely it would have been easier to suspend the event until next year, but you took the time and effort to carry it through and the result was impeccable.

  • 609 registrations
  • 515 attendees
  • 32 countries

Congratulations!

What I experienced at BADCamp 2020

Two experienced Drupal contributors from Axelerant participated at BADCamp as speakers: Mohit Aghera and Kunal Kursija. Obviously, I wanted to watch them in action. Their sessions were tagged as “Drupal for beginners”, and they both had over 20 attendees to their meetings. It was very compelling to see how they interacted with the audience, covering concepts as well as practical tips and showcasing live-demos. They answered all questions and provided further examples when needed, and of course, in an open-source collaborative spirit, shared their slides and links to the sample code repositories.

Go ahead, relive the sessions, and check out the resources.

This will be helpful both for site builders and developers.

Learn about the very tiny, mighty, and hidden gems of Drupal, "filters", that process and display the user-submitted content.
As I was planning my schedule, I literally felt this session title was talking to me. Baddy did a great job explaining specific ways you can contribute to Drupal beyond code. And she managed to make it appealing, sharing her own journey choosing non-code contributions even though she has the needed technical skills. Thanks to her inspiring talk, I realized I can offer valuable support by pitching in with these kinds of contributions. So, if you like to design, you’re good at organizing projects or events, you enjoy writing, reviewing, or translating content, find out how you can help here

This session offered valuable insights into building the marketing arm of an open-source project, highlighting the importance of storytelling, both within the community and towards external audiences. Suzanne stressed the need to pinpoint whom the Drupal community is talking to, and how to adapt marketing efforts to different personas (from developers encountering Drupal for the first time to marketers and evaluators who will never install Drupal but occupy decision-making positions in organizations). I personally engaged with the idea of telling better stories around code contributions within the community. Good stories are easier to amplify, and knowing the people behind the code makes it straightforward to relate and is always inspiring. Stories boost empathy, motivation, and a sense of belonging; all things that foster a healthy culture in any community.

The Promote Drupal Initiative already produced valuable resources in this direction: the Drupal brand book (setting design elements but also the tone of voice and the brand personality) and marketing materials such as press releases, pitch decks, and one-pagers. Visit the Promote Drupal page to download the resources and/or if you want to contribute to the Drupal brand.


Overall, I had a rich experience, I witnessed first-hand the power of innovation and knowledge-sharing behind Open Source and I now have proof that the Drupal community is guided by a culture of openness, respect, and collaboration.

P.S. 1: If you’re interested in web content accessibility, I recommend you watch this session for interesting insights, tools, and distinctions between accessibility, usability, and authentic digital inclusion. Check out the video here.

P.S 2: Thanks again to the BADCamp organizers for including the farm tour and the pet meet & greet, it was a creative and conscious way to mitigate virtual fatigue.

Check out all the recorded sessions of BADCamp 2020 here.

Aug 11 2021

With the launch of Drupal 9 in June 2020, the topic of Drupal migration is fresh on everyone’s mind. We will be delving deeper into the nitty-gritty around the topic in this blog. 

Migration is the process by which content from the old site is converted into the desired format and saved in the new site. Sometimes, migration is a simple activity of mapping the source content to the destination content types, and sometimes, it is a bit more complicated.

Let's take a comprehensive look at the Drupal migration process in the context of the recently launched Drupal 9, and what’s involved in migrating from different versions.

Drupal 9 is here, and regardless of being on Drupal 7 or 8, you can start preparing to upgrade. Find out how the migration process will affect your site and how to prepare!

01. Drupal 7, 8, and 9

02. Migrating Then and Now

03. Drupal to Drupal Migration

04. Migration from external sources

05. What’s More?

 

Drupal 7, 8, and 9

Drupal has earned a reputation as a great choice for enterprise-level websites and web applications. Drupal 7 was launched in 2011, and its improved JavaScript and CSS optimization made it so popular that a large number of businesses are still on it. 
 

Introduction of Drupal 8

Drupal 8 was introduced with numerous benefits, boasting extensive support for accessibility standards, with Semantic HTML5 that creates more usable interactions and adopts more WAI-ARIA practices, and much more.

Find out what to look for in Drupal 9 based on what we saw during the Drupal 8 journey and why migrate to the latest release.

Drupal 8 also said goodbye to PHPTemplate and moved to readable theming code with Twig. However, these technical advancements came with a bit of a migration challenge.
 

The Transition

It’s no secret that the migration process from Drupal 7 to 8 involved a number of challenges. Many questioned the need to migrate from Drupal 6/7 as they were hesitant about the migration process itself. The community has taken a huge leap in the migration process from Drupal 7 to Drupal 9. 

Read about the importance of migrating to Drupal 9 for Drupal 6 & 7 users and the consequences of not migrating to Drupal 8 before its end-of-life.

The infamous Drupal 7 to 8 migration challenged teams' technical capabilities and depended heavily on business priorities too. However, the Drupal 8 to 9 migration has no such hassles. If you are on the latest version of Drupal 8, then transitioning to Drupal 9 is very similar to a minor upgrade.

Migrating Then and Now

The most important part of migration is planning and preparing. As a business, you should be able to assess the feasibility of performing a successful migration. To give an overview, it starts with analyzing your source data, managing important files and content, lots of testing, and so on.

The community recommends updating the core to the latest version available to ensure the site’s security and uninterrupted functioning. Using the dependency manager for PHP, Composer, you can update and install packages (modules) in Drupal.

Drupal to Drupal Migration

While moving your Drupal 6/7 site to Drupal 8, most of your time would be spent working in the Migration or Migration Group config. Also, you’d want to declare the different Process Plugins to be used and the Destination Plugins for all of the core entity types included in Drupal core. 

Drupal has reached the highest level of scalability and efficiency with object-oriented code, the latest PHP standards, and more unified APIs to add power to your site.

The entire Drupal 8/9 content migration process is also a brief series of easy steps and offers greater flexibility to tweak the content architecture.

Learn the importance of data and how to execute the content migration with a plan and available modules.

Migration from external sources

If you are on a non-Drupal CMS, like WordPress, then the process starts with setting up a migrate demo site and preparing the source data for migration. The well-structured Drupal 8 makes the execution easier and more manageable. An average Drupal developer will have no issue understanding the database structures of both sites and can write a series of SQL queries that copy the content from any non-Drupal database to the Drupal database.

Also, there are contributed modules to migrate your non-Drupal website to Drupal. For example, the Migrate API provides services for migrating data from a source system to Drupal 8. Even the Drupal theming is a breeze now with the new Twig template engine. 

What’s More?

Drupal 8 and Drupal 7 are reaching end-of-life in November 2021 and November 2022, respectively. This will leave Drupal 9 as the only supported version in the community. It won’t be too long before Drupal 10 makes an entry after that, so do make sure you complete your migration well in time.

Read about how we migrated millions of content items to Drupal for Lonely Planet. Axelerant team members helped familiarize Lonely Planet’s team with Drupal 8 migration best practices and streamline the migration process.

Jun 09 2021

A small leak can sink a great ship. ~ Benjamin Franklin

In the previous blog post, we saw the basic setup and configuration for Mautic plugins that leverage the IntegrationsBundle. A key part of any integration is handling the authentication mechanism.

So in this blog post, we will be covering various types of authentication and using one authentication type in the plugin that we built in the last blog post. We will continue developing the same plugin.

The IntegrationsBundle from Mautic Core supports multiple authentication provider methods, like API-based authentication, Basic Auth, OAuth1a, OAuth2, OAuth2 Two-Legged, OAuth2 Three-Legged, etc. The IntegrationsBundle exposes all of these authentication protocols as Guzzle HTTP clients.

In this blog post, we will implement Basic Auth authentication with a third-party service.

The following steps enable our plugin to have the Basic Auth authentication:

  • Have a form with fields for storing the Basic Auth credentials (Form/Type/AuthType.php).
  • Prepare a “Credentials” class to be used by the Client class.
  • Prepare a “Client” service class to be used by a dedicated APIConsumer class.
  • Use Client service and implement API call-related methods in APIConsumer service class.

Step 1

The plugin depends on third-party APIs to have data to manipulate. And these APIs are gated behind authentication and authorization mechanisms. For the course of this post, we have chosen Basic Auth as the authentication method.

Basic Auth needs a username and password to communicate with the API. So we need a form that accepts the username and password, which are required when connecting to the API endpoints.

Let's create a form and name it “ConfigAuthType.php” under the “MauticPlugin\HelloWorldBundle\Form\Type” namespace. This class extends Symfony's AbstractType class. We need to implement the "buildForm()" method to add the required fields. Example code should look like this:
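A minimal sketch of such a form class (the field names, labels, and options here are illustrative, not taken from the final plugin):

```php
<?php

namespace MauticPlugin\HelloWorldBundle\Form\Type;

use Symfony\Component\Form\AbstractType;
use Symfony\Component\Form\Extension\Core\Type\PasswordType;
use Symfony\Component\Form\Extension\Core\Type\TextType;
use Symfony\Component\Form\FormBuilderInterface;

class ConfigAuthType extends AbstractType
{
    public function buildForm(FormBuilderInterface $builder, array $options)
    {
        // Field names are illustrative; use whatever keys your API expects.
        $builder->add('username', TextType::class, [
            'label'    => 'Username',
            'required' => true,
        ]);

        $builder->add('password', PasswordType::class, [
            'label'    => 'Password',
            'required' => true,
        ]);
    }
}
```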

You can see the full version here.

It's now time to tell Mautic to pick up this form during configuration. To do so, we have to define an integration service class implementing ConfigFormInterface and ConfigFormAuthInterface. ConfigFormAuthInterface is the interface that lets you specify the configuration form via the getAuthConfigFormName() method.

So we name this class "ConfigSupport" and place it under the "MauticPlugin\HelloWorldBundle\Integration\Support" namespace. Here are the snippets from the ConfigSupport class:
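A sketch of the class (the interface and trait namespaces are assumptions based on the IntegrationsBundle, and HelloWorldIntegration is assumed to be the plugin's base integration class):

```php
<?php

namespace MauticPlugin\HelloWorldBundle\Integration\Support;

use Mautic\IntegrationsBundle\Integration\DefaultConfigFormTrait;
use Mautic\IntegrationsBundle\Integration\Interfaces\ConfigFormAuthInterface;
use Mautic\IntegrationsBundle\Integration\Interfaces\ConfigFormInterface;
use MauticPlugin\HelloWorldBundle\Form\Type\ConfigAuthType;
use MauticPlugin\HelloWorldBundle\Integration\HelloWorldIntegration;

class ConfigSupport extends HelloWorldIntegration implements ConfigFormInterface, ConfigFormAuthInterface
{
    use DefaultConfigFormTrait;

    public function getAuthConfigFormName(): string
    {
        // Point Mautic at the Symfony form we created in step 1.
        return ConfigAuthType::class;
    }
}
```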

You can find the complete ConfigSupport class here.

Time to let the IntegrationsBundle know about our "ConfigSupport" class. To do so, register the class as an integration service, i.e., create a service listing with the mautic.config_integration tag. The following is a code snippet from Config.php (the plugin configuration file).
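A hedged sketch of how that registration might look inside Config.php (the service name is illustrative):

```php
// Snippet from the plugin's Config.php (structure illustrative).
return [
    // ...
    'services' => [
        'integrations' => [
            // Tagged so the IntegrationsBundle discovers the config support class.
            'mautic.integration.helloworld.configsupport' => [
                'class' => \MauticPlugin\HelloWorldBundle\Integration\Support\ConfigSupport::class,
                'tags'  => [
                    'mautic.config_integration',
                ],
            ],
        ],
    ],
];
```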

Now, at this point, we have the following things ready:

  • A service class to register the configuration support class.
  • A class to provide the configuration.
  • One can view all the code changes for step 1 here in this commit.

Step 2

For Basic Auth, the IntegrationsBundle uses the “HttpFactory” class to build the HTTP client. This class needs an object called “Credentials,” consisting of all the keys required for authentication.

If you notice the “getClient()” method of HttpFactory class under the “Mautic\IntegrationsBundle\Auth\Provider\BasicAuth\” namespace, it needs an object of “AuthCredentialsInterface.”

So our next step will be to create a separate class for credentials and create a new custom client to use those credentials.

For that, create a new class called “Credentials” under MauticPlugin\HelloWorldBundle\Connection.

The class should look like the one given below:
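A sketch of the Credentials class (the exact interface name under the BasicAuth provider namespace is an assumption):

```php
<?php

namespace MauticPlugin\HelloWorldBundle\Connection;

use Mautic\IntegrationsBundle\Auth\Provider\BasicAuth\CredentialsInterface;

class Credentials implements CredentialsInterface
{
    /** @var string */
    private $username;

    /** @var string */
    private $password;

    public function __construct(string $username, string $password)
    {
        $this->username = $username;
        $this->password = $password;
    }

    public function getUsername(): string
    {
        return $this->username;
    }

    public function getPassword(): string
    {
        return $this->password;
    }
}
```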

This is a trimmed version of the class, and you can find the full version here.

Now that we have completed the Credentials class, we need to create a client that will make HTTP requests. Typically, we don’t need a separate client class if we don’t have additional logic to handle. In such cases, we can just call the HttpFactory class and get the client like:
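A minimal sketch of that direct approach (variable names illustrative, with the factory assumed to be injected):

```php
// No custom client class needed: let the factory assemble Guzzle for us.
$credentials = new Credentials($username, $password);

/** @var \GuzzleHttp\ClientInterface $client */
$client = $httpFactory->getClient($credentials);
```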

In our case, apart from fetching data, we need to cache it and polish it to be easily used for Mautic’s Lead entity.

So we will create a new class called “Client” under the namespace MauticPlugin\HelloWorldBundle\Connection.

The job of the “Client” class is to get the object of ClientInterface (\GuzzleHttp\ClientInterface).

If you need the full class details, you can just follow this link. Because we are kind and want to share more, we will quickly review a few methods that interest us and that work with the Credentials class we wrote previously.

Here in the “getClient()” method, we are calling the “getCredentials()” method, which creates a “Credentials” object using the API keys.

By using the credentials object, we will get the client via the HttpFactory service call.
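Put together, those two methods might look like this sketch (getApiKeys() is a hypothetical helper that reads the values saved through the configuration form):

```php
public function getClient(): ClientInterface
{
    // Build (or reuse) the Guzzle client from the stored credentials.
    return $this->httpFactory->getClient($this->getCredentials());
}

private function getCredentials(): Credentials
{
    // getApiKeys() is a hypothetical helper returning the stored form values.
    $keys = $this->config->getApiKeys();

    return new Credentials($keys['username'] ?? '', $keys['password'] ?? '');
}
```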

So at the end of this phase, we have the following things:

  • Credentials object to pass into getClient() method.
  • New Client class to manipulate get() method and fetch other configuration.
  • New Config.php file inside the “HelloWorldBundle/Integrations” folder to bring configuration and different integration settings.
  • Commit.

Step 3

We are now ready with the entire setup to store credentials and send the request. Now, it is time to use them in any other class or anywhere that we want to use.

In our current plugin, we have created a separate class called “ApiConsumer.” We have several other get methods and API calls, so consolidating all the API methods into a single class makes them easier to manage.

To use our Client service, created via Client.php, we need to create a service that can use this class. That way, we can reuse this class without worrying about anything else.

Create a new service called “helloworld.connection.client” and add it to the Config.php in the other services section.

Similarly, we need to add additional services for the ApiConsumer class to call from other services.

You can refer to the source code to view the entire ApiConsumer class. Here is a snippet of the get() method.
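As a sketch, the get() method could look something like this (endpoint handling and response shape simplified):

```php
public function get(string $endpoint, array $options = []): array
{
    // Delegate the HTTP round trip to our Client service.
    $response = $this->client->get($endpoint, $options);

    // Decode the JSON body into an array for later lead mapping.
    return json_decode((string) $response->getBody(), true) ?? [];
}
```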

As you can see, we directly reference the Client service and call the get() method from Client.php.

So at this point, we are done with the third step, where we used our authentication mechanism to fetch the data from the API.

You can refer to the commit to see the code for this step.

Conclusion

Now that we have the plugin ready to communicate with a third-party API to churn out more leads, let us thank the IntegrationsBundle's authentication support.

You can find out about the different supported authentication methods here.

Also, we have a third blog post coming up about how to manage and sync the data coming from the API. So stay tuned!

Apr 28 2021

Introduction

The concept of Headless CMS has been a rage for quite some time. At Axelerant, we have been using Drupal as a headless CMS in many projects. Headless Drupal provides a JSON API for accessing the published content of Drupal, including menus via the Drupal Decoupled Menus module.

Since we will be building a cross-platform menu, it is necessary to talk about the mobile application ecosystem, which has changed considerably since the introduction of cross-platform technologies like React Native and Flutter. These technologies have made mobile application development a lot more accessible to web developers, and both have generated strong momentum in recent years. React Native has been easier for web developers to get started with due to its React roots, while Flutter uses Dart, which draws its syntax heavily from JavaScript but still has some differences.

In this tutorial, we will use the Flutter framework to render a material design-styled menu across Android, iOS, Web, Windows, and macOS. 

You might be inclined to ask why we chose Flutter instead of React Native. The simple answer is that we feel Flutter is more polished as a framework. For more in-depth comparisons between the two frameworks, you can check this.
 

Getting Started

Head over to flutter.dev and follow the instructions to install Flutter on your machine, and install VS Code if you haven’t got it already. Let us create a Flutter project by running:
flutter create drupal_flutter_menu

Open the drupal_flutter_menu folder in VS Code. The moment you open it, you will be prompted to install the Flutter and Dart plugins; go ahead and install them.

On the Drupal side, we need a Drupal instance running with the Decoupled Menus module installed and enabled. Before we move further, let us first look at the JSON returned by the Drupal menu API. If you navigate to the Drupal menu endpoint (https:///system/menu/main/linkset) and look for any menu, in this case the “main” menu, the response JSON will look something like the following:
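An illustrative (not site-specific) response might look like this, with the menu links and titles being examples:

```json
{
  "linkset": [
    {
      "anchor": "/system/menu/main/linkset",
      "item": [
        {
          "href": "/",
          "title": "Home"
        },
        {
          "href": "/blog",
          "title": "Blog"
        }
      ]
    }
  ]
}
```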

The output will vary depending on what links are present in your specific Drupal menu.

If you look closely at this peculiar-looking JSON representation, it is a special media type called application/linkset+json, which has been recently introduced by the IETF. This media type is a special way to represent a set of links and their relations in JSON format. To know more about this representation, head here. Our next step would be to parse this JSON in our Flutter code and then create a simple abstract data type representing the parsed Drupal menu. But wouldn’t it be better to have something prebuilt that makes our lives easy? Well, we have already gone ahead and created a simple Flutter package, drupal_linkset_menu, which takes a Drupal menu API URL or a JSON string, returns a Menu object, and lets us render it in Flutter.

Let’s add this package by running from the command line.
flutter pub add drupal_linkset_menu

This command will add the package to our pubspec.yaml file. The pubspec.yaml file is just like your composer.json file, which is used to manage dependencies. Your updated pubspec.yaml should look like this.

The source code for a Flutter-based app resides inside the lib folder. We will be only working with the specially named file main.dart inside this folder. Let us delete all the code in the main.dart file and replace it with the following code, which will display “Hello World” in the center of the screen.
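A minimal version of that "Hello World" code could look like this (standard Flutter widgets, consistent with the walkthrough in this post):

```dart
import 'package:flutter/material.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    // A Material app whose home screen centers a single Text widget.
    return MaterialApp(
      home: Scaffold(
        body: Center(
          child: Text('Hello World'),
        ),
      ),
    );
  }
}
```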

In order to run the app, click the Run and Debug button in the Run and Debug side panel of VS Code, choose Dart & Flutter in the next step, and then choose Chrome.

Another way is to just type the following in the terminal/cmd:
flutter run -d chrome


This will run the app inside the Chrome browser. If you want to run it on Android, you need to have the Android SDK, and for iOS, you need to have Xcode installed. If both are installed, then you can use:

flutter run android && flutter run ios

to run on the corresponding platforms. For more information on this, head over to flutter.dev.


Everything in Flutter is a widget! There are stateless widgets and stateful widgets; we will be working with stateless widgets today.

The code that we have put in the main.dart files does the following:

  1. It creates a Material app. Material is a visual design language that is standard on mobile and the web. Flutter offers a rich set of Material widgets.
  2. The main method uses arrow (=>) notation. Use arrow notation for one-line functions or methods.
  3. The app extends StatelessWidget, which makes the app itself a widget.
  4. The Scaffold widget, from the Material library, provides a default app bar, and a body property that holds the widget tree for the home screen. The widget subtree can be quite complex.
  5. A widget’s main job is to provide a build() method that describes how to display the widget in terms of other, lower-level widgets.
  6. The body for this example consists of a Center widget containing a Text child widget. The Center widget aligns its widget subtree to the center of the screen.


Now update the code in main.dart  with the following code: 

 Also, add a package called url_launcher by typing:
flutter pub add url_launcher

This package will allow us to open a URL, when any link in the menu is clicked.

Let us break down step by step what the code adds:

  1. In the MyApp widget’s build method, instead of showing a “Hello World” text at the center, we have introduced a new widget called HomePage that will show two menus, “main” and “footer”, of our Drupal site.
  2. The HomePage widget is another widget that houses the necessary build method that describes how to show the two menus and a couple of helper functions.
  3. The getMenu function is responsible for interacting with the drupal_linkset_menu package’s helper method called getDrupalMenuFromURL, which takes the API URL and the menu name/id and returns a Menu object that is used to construct the UI.
  4. The two functions buildMenu and buildMenuItem are used to recursively build the UI for the menu. A built-in Flutter Material widget called ExpansionTile is used to create the menu items.
  5. The build method of HomePage contains a Column widget that lays out its children vertically; it is analogous to how flexbox works on the web. The column has two FutureBuilder widgets that call the getMenu function. Until getMenu returns a Menu object, a CircularProgressIndicator widget is shown, and when the Menu object becomes available, the menu is created.
  6. In buildMenuItem we are using a GestureDetector to listen to taps and when a mouse click or tap is performed on a menu item the URL is launched.
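As a sketch, step 5 above might look like this fragment inside HomePage's build method (Menu comes from package:drupal_linkset_menu; getMenu and buildMenu are the helper functions described above):

```dart
// Render the "main" menu once it loads; show a spinner until then.
FutureBuilder<Menu>(
  future: getMenu('main'),
  builder: (BuildContext context, AsyncSnapshot<Menu> snapshot) {
    if (!snapshot.hasData) {
      // The Drupal menu has not arrived yet.
      return const CircularProgressIndicator();
    }
    return buildMenu(snapshot.data!);
  },
)
```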

Run again or hot reload again by pressing “r” on your command line to see the changes.

Conclusion

The aim of this tutorial was to give a sense of how quickly we can build a native cross-platform app with Flutter and the new Decoupled Menu API. You might be wondering why we didn’t talk about running the project on Windows and macOS: support for both these platforms is still in beta, but as an exercise, you can still run the project on Windows and macOS by switching off the Flutter stable branch, for which more information can be found here.

All the code for this project can be found on GitHub.

Apr 01 2021

As expected, Drupal 9.1 was released on schedule at the close of 2020. We have already talked about the Drupal 9 release and how it’s a testament to the predictable and reliable nature of the Drupal release cycle. Drupal 9.1 takes a step forward by adding more features and releasing them as promised.

In this blog, we will be discussing the new improvements and more that will follow. 

Is it worth upgrading?

The Drupal 9.1 stable release was out as expected on Dec 2nd, 2020. We previously advocated that if you are on Drupal 8.9, you needn’t hurry to upgrade to Drupal 9.0 as you would not see many new features. But that’s changed.

Drupal 9.1 adds exciting features and updates along with support for PHP 8 (we have previously written about making Drupal 9 compatible with PHP 8).

It’s also worth upgrading as Drupal 9.1 brings significant changes in the user interface for both sighted users and assistive technology.

New features

Olivero theme

The much-awaited experimental frontend theme Olivero has been added to Drupal core in beta. As a replacement for Bartik, this is a modern and clean theme planned to become the new default Drupal theme later.

This particular theme is named after Rachel Olivero (1982-2019), the head of the organizational technology group at the National Federation of the Blind. She was a well-known accessibility expert and a Drupal community contributor.

Additions to the Claro theme

Claro was added as an experimental theme in Drupal 8.8. And now, Drupal 9.1 has added designs for various key pages like the extensions administration page, views administration, and status report. Also, the media library received Claro styled designs too.

Composer 2 and PHP 8 support

Drupal 9 fully works with Composer 2, and it is strongly recommended to update. Many of the popular plugins have also been updated. If the one you use doesn’t have updates, please help the plugin author with a PR to add the support (it’s quite easy). The new release comes with a significant improvement in performance and also reduces memory usage.

Drupal 9.1 has added support for PHP 8. PHP 8 brings in a lot of new language features, and even though Drupal core isn’t using any of them (it still supports PHP 7.3), you could use features like union types and more in your custom code. Further, it’s likely that PHP 8 will be a requirement for Drupal 10, due for release in 2022.

Additionally, the user experience has been improved by making the content load faster as the images rendered by Drupal with known dimensions will now be set to lazy-load automatically. 

How to update from a previous version of Drupal

Now, this begs an important question: how will the current users of Drupal 7 or 8 migrate to Drupal 9.1? And also, if users have already migrated to Drupal 9, is there anything for them to execute with this release?

Every version of Drupal demands a different approach to migration. The idea is to pick the right Drupal migration strategy. Let’s look at how to migrate from different versions in this section. 

Upgrade from Drupal 7

Drupal 7 users can easily continue to migrate to Drupal 8.9 or migrate to 9.0 or 9.1 directly. Migrating directly to Drupal 9/9.1 will help them skip a step. The upgrade path for multilingual sites remains stable in Drupal 8.9, 9.0, and 9.1!

For more on how to upgrade from Drupal 7, check out the ultimate guide to Drupal migration

Upgrade from Drupal 8

For Drupal 8 users, there’s still time to step up to the latest 8.9 version until the end of Drupal 8, i.e., in November 2021. The bug fixes will continue, and the next one is scheduled for January 6, 2021. 

Sites on Drupal 8.8 will no longer receive security coverage. This means moving to Drupal 8.9/9 becomes crucial from this update onwards. 

According to Drupal.org, of the top 1000 most used drupal.org projects, 85 percent are updated for Drupal 9, so there is a high likelihood that most of the modules and themes you rely on are compatible.

Upgrade from Drupal 9

Drupal 9.1 is a minor release of Drupal 9. Sites on earlier Drupal 9 versions can update to utilize these new features without breaking backward compatibility (BC) for public APIs. While Drupal 9 will keep requiring Symfony 4, Drupal 9.1 already includes the adjustments required to support Symfony 5.

All these updates are underway to make Drupal 9 forward-compatible with Symfony 5 and 6 (not yet released). And with Drupal 10 planned for mid-2022, these new upgrades target an excellent growth curve.

Running the update

We will only talk about updating from Drupal 8.9 or Drupal 9 in this section. Updating multiple versions is possible but needs additional care and consideration, which we won’t cover in this section.

  • First of all, if you are already using the Olivero theme in your project, remove that by running this command. We need to do this as Drupal 9.1 includes Olivero in the core.

$ composer remove drupal/olivero

  • To begin an upgrade from Drupal 8.9 or Drupal 9, run the following command:

$ composer require drupal/core:^9.1 drupal/core-composer-scaffold:^9.1 --update-with-dependencies

  • If your project is using drupal/core-recommended, use that instead of drupal/core in the command above. Also, your project must be using the recommended Drupal Composer template. It is quite likely that the command might throw some dependency-related errors.

Since there are a wide variety of possible dependency issues, we won’t cover everything here. But to get started, try replacing the --update-with-dependencies flag with --update-with-all-dependencies flag in the command above and try again.

Drupal 9.1 seems to be a promising update for users ready to take the plunge. If you are still not sure, give us a chance to convince you why upgrading to Drupal 9 is crucial now.

Share your Drupal 9 experience with us and watch this space for more insights!


Apr 01 2021

Though WordPress's easy setup has made quite a name, the Open Source CMS is still far from perfect. There are instances where Open Source users prefer Drupal as their CMS of choice for the advantages it offers. 

For those who are new to the world of CMSs, WordPress is often the natural choice as it is easy to get started with. For those who would like to level up and need more customized functionality from their websites, Drupal meets these needs well. 

Drupal Vs. WordPress

Both on par, with strong arguments in their corners, Drupal and WordPress have often locked horns in the CMS space. In fact, this is one of the most searched topics online among users wanting to enter the Open Source space. And since pros and cons vary depending on the user’s context, there is never a clear winner. 

Let’s first look at the advantages Drupal offers over WordPress. 

Advantages of Drupal 8

  • High Security: This is the foremost reason for migrating to Drupal. Several government websites, including the White House’s official website, have been built on Drupal.
  • Easy Search Engine Optimization (SEO): Drupal is a good choice for businesses wanting their websites to rank via SEO. The platform offers numerous modules like Yoast SEO and makes it easy to optimize the site's content.  
  • Speed: A crucial element in holding your audience’s attention is speed. Drupal extends maximum response speed, allowing users to develop high-performing pages with a high-speed index.
  • Flexibility: More choices mean more flexibility to up your website game. Drupal’s custom content types are flexible and make your site unique. As a user, you can employ delicate detailing and integrate useful functions such as Shopify, Twitter, other social media channels, and more to achieve better results. 
  • Multilingual: Since Drupal 8, the multilingual function has been baked into the core itself. For organizations wanting polylingual pages, Drupal offers 100+ languages in its base settings. And as is commonly observed, multilingual sites often perform much better than English-only websites. 
  • Taxonomy: Data is the topmost priority for every website owner. From in-depth nesting to categorizing in a common data catalog, Drupal can reliably store a large amount of data. Such capabilities are still not matched by other Open Source CMSs.

Advantages of WordPress

According to W3Techs, WordPress powers 37 percent of all the websites on the Internet. Clearly, it is one of the most sought after CMS for enterprises wanting to start up their websites quickly. Let’s look at its advantages:

  • Ease of Use: If you are a non-technical user, WordPress can give your business a good kickstart. It can take you from no website to a good-looking, user-friendly, and functional site in minutes.  
  • Extensibility: With 53,000+ free plugins and 5,000+ free themes, you can transform your website at no extra cost. It’s easy to extend your WordPress site without the need for custom development.
  • Development Costs: WordPress tends to work great for categories like small-to-medium businesses, eCommerce, publications, startups, and nonprofits. WordPress can address these needs at lower development costs. 

While WordPress holds these advantages over Drupal, there are a few that both platforms share equally. For example, both Drupal and WordPress offer eCommerce capabilities and have been redesigned to be more platform-agnostic. Similarly, you can decouple Drupal as well as WordPress. A Drupal site can act as a content API server from day one, while REST API is now bundled in the WordPress core for developers to experiment with and use in a decoupled way.

If you are a small business, WordPress might be the solution for you, but Drupal offers significant advantages for large enterprises. In case you are one of the latter, the next section will guide you through the migration process. 

Migrate to Drupal

Here’s how you can migrate your site to Drupal in a few easy steps:

  • Download the XML file containing “All content” after you log in to WordPress Admin. 
  • Make sure that the XML file is valid using xmllint through the command line. Then make appropriate fixes to the XML file and run xmllint again to ensure that all errors have been rectified.
  • Use the WordPress Migrate module or any other appropriate module to migrate.
  • Log in to Drupal and navigate to the Find Content screen and click on the WordPress Migration tab.
  • Select the valid XML file that you saved earlier.
  • Make sure that the data was properly imported and everything is in place. 

Supporting Modules

While these steps will execute the migration successfully, there are other aspects to consider, like exporting content files, URLs, extra files, etc. 

Contributed modules, such as WordPress Migrate mentioned above, can assist you in completing the migration from WordPress to Drupal.

The choice between the two leading CMSs is yours. Both platforms have their sets of advantages, as listed above. In case you are still confused, get details on how Drupal 8 changed the WordPress vs. Drupal debate.

Mar 01 2021

Open-source has the power to change the world, but, as we depend on it for democratic innovation, open-source also depends on us to thrive. At Axelerant, we know and own this; hence we’re constantly engaging in different open web communities, including Drupal’s.

Why are we writing this? First of all, we are always keen to shine a light on our team members because our people-first culture makes Axelerant succeed. Second, in a knowledge sharing spirit, we are willing to put out what has worked for us (and what we struggle with) regarding contributing and our community involvement.

We are celebrating Drupal’s 20th Anniversary, and we are proud of being part of that history for over a decade. What better way to celebrate than recognizing and sharing the stories of the people involved, the makers that keep the ball rolling.  


Hussain Abbas
Director of Drupal Services

"Celebrating our people and the community has been among our values since the beginning. Drupal’s 20th anniversary is one of those occasions where both of these values come together in demonstrating Axelerant’s commitment to be a productive part of the amazing Drupal community through its team."

Here, we want to share a few stories from team members who recently contributed and inspired us with their Drupal journey.

Lessons learned in our Monthly Contribution Meetups

We started Monthly Contribution Meetups in 2019 to foster a culture of mentoring and giving back. Our goal is to get more people contributing to Drupal consistently and provide the tools to those who want to do it for the first time. These meetings are an excellent space to seek out support, share findings, learn, and bring the opportunity to know other team members, their Drupal journeys, and motivations. From these sharings, we continue to grasp the familiar obstacles people encounter when contributing, ideas on how to surpass them, and the benefits that come with getting involved. 

screenshot of Axelerant team's zoom meeting

November’s monthly contribution meetup

Thirst for learning overcomes time constraints


Hansa Pandit
Frontend Engineer - L2

“I was first introduced to Olivero reading about it on different blogs. That caught my eye. I read the documentation, got my set up ready, jumped right into a coding sprint, and assigned myself an issue. I wanted to work on a feature, so when the theme went into the core, I would be able to say: that is the part I built.”

Hansa has been on Drupal.org for over two years, and besides other contributions, she’s been actively involved with the Olivero theme initiative.

Time management was a big challenge for Hansa, especially since she gave Olivero the same priority level as other work-related projects. But the logic was clear; she knew that if she was investing her time towards contribution, she needed to benefit from it by learning.

And she declares the experience made her technically stronger, “I learned a lot of new skills. Other projects I worked on supported specific client's needs. Still, for Olivero, we had to make sure we were theming every single module supported by Drupal while making sure we met all the accessibility standards.”

Olivero is now in core; we are proud of Hansa, and we celebrate her and everyone involved in this achievement.  

Find the right initiative, and don’t do it for the credit


Mohit Aghera 
PHP/Drupal Architect - L1

It is important to focus on learning and exploring instead of doing it for the credits: “I decided to focus on this initiative because I was interested in learning about writing test cases for Drupal Core. It was a quick way to get introduced to this, and also a great opportunity to explore almost every feature of the core, instead of focusing on a specific module.”

Mohit is one of our most experienced Drupal engineers and contributors; hence he’s continuously mentoring the team. In our last meetup, he explained his motivations and experience with the Bug Smash Initiative; “it’s a great initiative to devote energy to, because it is well managed. Maintainers do an excellent job triaging issues,” he argued. We often hear that not knowing where to start or feeling overwhelmed by the issue queue translates into demotivation within weeks. Counting on careful planning and mentoring makes life easier for everyone, which is why finding the right initiative becomes essential.  

A second factor to consider while contributing is the right motivation. We always remind ourselves of the opportunities that come with contributing: personal branding, sharing your work, showcasing a visible portfolio, and, “ultimately, if you want to learn Drupal Core, contributing is the best way to do it,” he insists. 

Clear expectations help first-time contributors


Abhay Saraf
PHP/Drupal Engineer - L2

When asked what could be done differently to motivate others to join these sprints, he told us, “being clear about expectations and providing resources that display a step by step before the event would make the experience less intimidating.”

As founding members of the Drupal India Association, we also look to align our mentoring and contribution efforts with India’s larger Drupal community. Organizing and hosting monthly contribution weekends is one way to boost a sustainable contribution culture, and Abhay recently joined this initiative for the first time. From his experience, we confirmed that meeting folks, running into smiling faces, and having the space to give back without the pressure of getting lost or making a mistake is fundamental to onboard newcomers. “I had a good experience because I already had a list of prioritized issues. I could work with a free mind since I knew that I'd get the guidance needed if I had any doubts. Also, I liked the flexibility of this event, it goes on for a day, you can dedicate any amount of time you can, even if it is just an hour, it would still be worthwhile,” he shared.

Contribution = Recognition = More contribution


Gaurav Kapoor 
PHP/Drupal Engineer - L2

Gaurav's efforts were rewarded with a scholarship to attend DrupalCon Amsterdam 2019. Through this contribution journey, he gained vast Drupal knowledge, “now I focus on mentoring and sharing knowledge, so others can also leverage all you can gain from contributing,” he says. 

Gaurav’s Drupal journey started right after college, when he decided to leverage his spare time by joining a two-person startup. After learning Drupal, he soon realized that contributing to the community would build the company’s reputation as trusted experts, and that was the initial driver. Eventually, what sparked a community spirit was getting noticed and recognized. He’s been ranked among the top 30 contributors and recognized in Dries’ post “Who sponsors Drupal development?” for the past three years.

Events and the power of networking


Kunal Kursija
PHP/Drupal Engineer - L3

Kunal has the habit of surfing through different channels that list upcoming events (Drupical, Drupal.org, Drupal Slack), so when he found out about BADCamp 2020’s call for papers, he decided to go for it. A two-way process started: “I began to review everything I had learned recently or topics I wanted to learn about.” From there, Kunal came up with a list of topics and submitted them.

 

Speaking at events has many benefits, especially to those interested in being seen as an authority in their fields. Presenting sessions nourishes the community with knowledge and best practices and builds the speaker’s reputation and network. That was certainly the case for Kunal. “I first heard about BADCamp while attending DrupalCamp London. Someone I met there told me BADCamp is one of the best Drupal events. That image struck me and has stayed with me since then.” 

 “Of course, it was exciting to learn my session had been selected. I was disappointed I couldn’t attend the event in person. However, I enjoyed getting introduced to other BADCamp speakers, and it was great to participate in such a big and important event.”

To many more years of Drupal

We recognize that our monthly meetups serve the purpose of keeping an ongoing conversation around contributions, inspiring and supporting team members, and promoting those who actively get involved. Our team works with a contribution-first approach, and this practice grants us a place at the top of the ranking of organizations supporting Drupal. And yet, there's more we need to do to build up a sustainable contributing culture. We still find that most people who haven't contributed before judge the onboarding process to be too arduous, with time constraints following close behind. Even with mentorship support available, the steep learning curve poses a hurdle to conquer.

We are continually discussing and exploring initiatives to encourage contribution, from creating a role for a full-time contributor to gamification aspects around tracking contributions or mentoring team members on the bench between projects. 

Today we introduced a selected few stories, evidence that sustains, again and again, that the key ingredient and the strength of this 20-year-old open-source project is its people.

We are excited to be part of this celebration and would love to hear about your contribution strategies and ideas. What’s your preferred way to give back to Drupal?

Don’t forget to join the celebration on social media!

P.S. See you at the Global Contribution Weekend happening 29-31 January 2021.

Mar 01 2021

Traditionally, Drupal web applications are built using various entities like Content types, blocks, components using Layout Builder, and then the product is made available to the end-user on the front-end using HTML, CSS and JavaScript. The team usually starts with backend stories related to building various content types, related roles, and permissions, and then the frontend team picks it up to make the site more usable and accessible as per the design requirements. 

Of course, with component libraries like Storybook, Fractal, PatternLab, and with designs in place, the frontend team can start implementing them as component libraries in parallel, which are later integrated with Drupal. 

In this blog, we will be talking about testing the application and the following topics:
 

01. Automated Testing in Drupal

02. Applitools and Drupal

03. Interpreting the automated test execution results

04. Other tools in the market

05. The Applitools Advantage

06. What next?

Automated Testing in Drupal

Behat, PHPUnit, Drupal Test Traits (DTT), and NightwatchJS are the most widely used tools for automating tests with Drupal. There are several reasons why these tools are popular within the Drupal community: they are PHP-based frameworks (apart from NightwatchJS), they offer ready-to-use Drupal extensions/plugins, and they have huge community support. With these tools, one can automate unit, integration, and acceptance-level tests.

But what about automating the visual tests? That’s the missing tip of the pyramid, which we will address through this blog post. 

We have used Cypress to automate the browser and Applitools for AI-based visual validation. Our reasons for using Cypress over other tools are many, including the following:

  1. One can quickly get started with writing actual tests with Cypress as compared to Selenium.
  2. Cypress enables fast-paced test execution.
  3. Our POC with Cypress + Drupal proved that testing the Drupal side of things can also be achieved with Cypress. 
  4. Cypress offers harmonious integration with Applitools. Having said that, please note that Applitools does have SDKs for Selenium PHP and NightwatchJS and many more just in case you have your existing automation functional suites written using any of the other testing frameworks.
  5. Since Cypress is a JS-based framework, developers can also contribute to writing automated tests.

The site used to demonstrate the concept is Drupal Umami. The major advantage is that the site comes already built, so we can focus directly on writing automated visual tests without having to worry about creating these pages.
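As a quick illustration, a bare-bones functional Cypress spec against Umami might look like the following sketch. The spec name, baseUrl, and assertions here are assumptions for illustration, not taken from the actual POC:

```javascript
// cypress/integration/umami_home_spec.js — hypothetical spec name.
// Assumes baseUrl in cypress.json points at the Umami demo site.
describe('Umami home page', () => {
  it('renders the branding and main navigation', () => {
    cy.visit('/');                       // open the home page
    cy.contains('Umami');                // site name appears on the page
    cy.get('nav').should('be.visible');  // main navigation is rendered
  });
});
```

Specs like this run inside the Cypress runner (`npx cypress run`), so the block is a framework fragment rather than a standalone script.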

NOTE: If you are completely new to the concept of automated visual validation testing, please refer to the course “Automated Visual Testing: A Fast Path To Test Automation Success” by Angie Jones on Test Automation University.
 

Applitools and Drupal

Applitools provides an SDK for Cypress, which makes it very easy for us to integrate automated visual validation tests in the same functional test suite created using Cypress. The steps to configure Applitools with Cypress are straightforward and you can refer to their official documentation for more details. Let’s take a look at the test written for the Homepage. The gist is shown below:
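In case the embedded gist does not render here, a minimal sketch of such a spec follows, using the Applitools Eyes Cypress SDK commands (the app and test names are assumptions):

```javascript
// cypress/integration/home_visual_spec.js — hypothetical spec name.
describe('Homepage visual test', () => {
  it('looks as expected', () => {
    cy.visit('/'); // launch the Homepage
    cy.eyesOpen({
      appName: 'Drupal Umami', // assumed app name
      testName: 'Homepage',    // assumed test name
    });
    // Capture the whole page; the parameters are explained below.
    cy.eyesCheckWindow({
      tag: 'Homepage',
      target: 'window',
      fully: true,
    });
    cy.eyesClose(); // close Eyes and send results to the dashboard
  });
});
```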

The test in the above example launches the Homepage and verifies the UI using the “checkWindow()” command. The checkWindow() command takes in the following parameters:

  1. tag: The name of the test.
  2. target: Whether to capture the entire window or a particular element.
  3. fully: Whether to capture the full page or only the current viewport. 

That’s it! And you are ready to execute the first automated visual test. So, let’s execute it using the command `npx cypress run`, assuming that the baseline image was captured correctly on the first run.

Here’s a small screencast for the same.


Interpreting the automated test execution results

Now that the tests have been executed, let’s look at the execution results, which bring us to the Applitools dashboard. Here’s the passed test result. 

test results passed

The tests passing is a good thing. However, that’s not the primary reason for having automated tests. You want the tests to correctly detect discrepancies as close as possible to their point of introduction. For the purpose of this demo, we have intentionally introduced a couple of CSS bugs on the Homepage through Cypress’s invoke command. Once the script launches the Homepage, CSS changes are made at run-time in the browser, and then the screenshots are captured as below:
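A rough sketch of that run-time tampering is shown below. Cypress’s `.invoke('css', …)` calls through to jQuery’s `.css()`; the selectors and values here are assumptions, not Umami’s real markup:

```javascript
// Hypothetical run-time CSS changes to simulate visual regressions.
cy.visit('/');
// Shift the search bar from its original position (assumed selector).
cy.get('.search-block-form').invoke('css', 'margin-left', '100px');
// Change the font color of the banner button (assumed selector).
cy.get('.banner .button').invoke('css', 'color', 'rgb(255, 0, 0)');
```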

Let’s re-execute our test to see how the tool catches these bugs. The tool correctly highlighted (in pink) the three errors below that we introduced on purpose:

  1. “Search bar” in the header has shifted from its original position.
  2. The font color for the “View recipe” button on the banner has changed.
  3. The “Find out more” link in the footer has changed.

reference image
test run image

We confirm that these indeed are bugs, and reject the image marking the test as failed and then report bugs in Jira directly from Applitools. Additionally, the root cause analysis feature from Applitools helps us quickly identify the visual (UI) issues, in this case, caused by CSS changes, as shown in the images below: 

RGB bug margin bug

Until now, it was only about one browser. However, if we really want to leverage the automated tests written for validating the UI, then the true benefit lies in having the ability to execute these tests across several browsers, Operating systems, and devices. Verifying that the UI looks correct on one browser/device doesn’t guarantee that it would look exactly the same on all other browsers/devices because the rendering of the UI might be different on other browsers/devices. 
 

Cross-browser/device/OS testing using the Ultrafast Test Cloud

Now that we have seen and understood how automated visual testing is done with one browser, let’s discuss some points that need to be accounted for to scale up your automated testing:

  1. Configure your suite to execute automated tests across several browsers and devices. However, this not only increases test authoring time but also creates a dependency on technical staff, as the logic to run all tests across several browsers and devices needs to be coded.
  2. Linear automated test execution increases execution time, thereby resulting in larger build times and delaying the feedback of the application to the entire team. 
  3. Maintain an in-house grid for parallel execution or purchase additional subscriptions provided by Cloud-based solutions for parallel execution of automated tests.

This brings us to discussing the Applitools Ultrafast Test Cloud, which inherently takes care of the above points.

By using the Applitools Ultrafast Test Cloud, you can execute automated visual validation tests across several browsers, operating systems, and devices of your choice at lightning speed. Although the test runs only once on, say, Chrome (assuming ChromeDriver is configured in the tests), the pages are captured in parallel, in the background, for all the configured browsers and viewports.

So, let’s write some more tests for the Articles and Recipes landing and listing pages on the site. Let us also execute these tests in parallel on several browsers/devices, as configured below using the Applitools Ultrafast Grid solution:
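A sketch of what that configuration might look like in `applitools.config.js` follows; the exact browser and device list is an assumption chosen to match the seven environments reported below:

```javascript
// applitools.config.js — Ultrafast Grid configuration sketch.
module.exports = {
  testConcurrency: 5,               // parallel renders (plan-dependent)
  batchName: 'Umami visual tests',  // groups results on the dashboard
  // Each entry below is rendered in parallel by the Ultrafast Grid.
  browser: [
    { width: 1366, height: 768, name: 'chrome' },
    { width: 1366, height: 768, name: 'firefox' },
    { width: 1366, height: 768, name: 'edgechromium' },
    { width: 1366, height: 768, name: 'safari' },
    { deviceName: 'iPhone X', screenOrientation: 'portrait' },
    { deviceName: 'Pixel 2', screenOrientation: 'portrait' },
    { deviceName: 'iPad', screenOrientation: 'landscape' },
  ],
};
```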


Here are the Ultrafast Grid Test Results across several browsers and devices. 

test results

To be precise, here are the details:

  1. Number of browsers and devices = 7
  2. Total number of functional tests = 6
  3. Total number of visual tests = 7*6 = 42
  4. Time taken to complete execution on Ultrafast grid - 5 minutes 2 seconds

Execute once and see the results on so many browsers and devices. Now, that’s what I call truly automating your visual tests.

Also, notice that using the Applitools Batch feature, we have logically grouped the Test Results to make it extremely readable. 
 

Other tools in the market

There are many other Open Source tools, like BackstopJS, Shoov, Gemini, and the visual regression service for WebdriverIO, to name only a few, but none of them has the Applitools advantage. We will look at a few of the many reasons in the coming section.
 

The Applitools Advantage

  1. The AI-driven image comparison algorithm is remarkably accurate and avoids the false positives that occur in pixel-to-pixel comparison. The time it would take to troubleshoot false positives, especially on full-page screenshots, would be cost-prohibitive. Pixel-based comparison is OK for verifying small components over a short period of time; beyond that, it breaks down.
  2. Seamless integration with your existing functional automation suite through several Web, mobile, screenshot, desktop, and codeless SDKs available for testing and automation frameworks like Cypress, Selenium, Nightwatch, WebdriverIO, Appium and also languages like PHP, Java, Javascript, C#, Python only to name a few.
  3. With the help of Ultrafast Test Cloud, your entire web application can be tested for visual accuracy at a fast speed (as the tests run only once whereas the visual rendering validations happen in parallel on several browsers and devices in the background) with guaranteed reliability and security.
  4. Applitools also provides out of the box integration with the Storybook component library for React, Vue and Angular.
  5. Learn more about the following Applitools Eyes integrations on their site:
    1. Integration with GitHub
    2. Integration with Microsoft Azure
    3. Integration with GitLab
    4. Integration with BitBucket
    5. Integration with Jira
    6. Integration with Email and Slack for notifications
       

What next?

Sign up for a free account with Applitools and feel free to clone this repository to try it out on your own. Integrating automated visual validation tests in your project will help you confidently build and release visually perfect web applications or websites at a faster rate.

Want to know more about automated testing? Learn how early automated testing design helped us in upgrading a higher education platform. OpenScholar’s product, built as a complete layer on top of Drupal, needed its customizations thoroughly tested. The objective was not just to verify functionality accurately but also to achieve faster feedback at all levels of the implementation phase.

Mar 01 2021

In the last article, we discussed the changes required to get Drupal 9.1 running on PHP 8. At that time, we got the Drupal 9.1 dev release working on PHP 8.0.0 RC4 with a few patches. Since then, a lot has changed, with many of those patches committed and Drupal 9.2 dev open for development. But we’ll talk about all of that at a later date. Today, let’s look at installing some of the common PHP extensions and configuring them to run with Drupal.

We left off at a point where we had plain Drupal 9.1 running on a plain PHP 8 RC4 setup. Drupal doesn’t require any extensions outside of PHP core, which means we only had to enable bundled extensions like gd, MySQL, and others to have Drupal 9.1 running. With that, we were able to install Umami and use the site without any problems at all. To enable those extensions, we only needed the docker-php-ext-enable script, which is part of the PHP base Docker image. See the Dockerfile in the reference repository for the source code (lines 41-52). Installing other extensions that are not part of the PHP core is not quite that simple. Think of it this way: if a module is present in Drupal core, you can install it right after downloading Drupal. But if it is a contrib module, you have to download and install it separately. It’s the same thing with PHP extensions.

Why test with extensions?

Just as you probably wouldn’t have a Drupal site without at least one contrib module, you probably wouldn’t have a PHP installation without a few of the common extensions. Drupal core utilizes some of these extensions when they are available (such as APCu and YAML), which yields better performance. This means that even though the extensions are not technically required, you would most likely have them.

I started with the extensions I almost always install on sites I work on: APCu, YAML, and Redis. Drupal core doesn’t use Redis, but I almost always install the Redis module for caching, which requires this extension. It made sense to test whether it worked on PHP 8 (both the module and the extension). As for the other two, Drupal core uses the APCu and YAML extensions for better performance when they are available. Again, it is a good idea to test Drupal with these extensions installed.

Installing extensions

Typically, we would use PECL to install any extensions we needed. PECL is a repository for PHP extensions, very much like Composer is for PHP packages. With PECL, we would just need to run a command such as `pecl install redis` to install the extension. You can see this being used in lines 53-58 in the Dockerfile.

pecl install apcu redis yaml

This is not as simple with PHP 8: PHP 7.4 removed default support for PECL, and the official Docker image removed the command in its PHP 8 images (it applied an explicit option to keep it for PHP 7.4).

Alternative tool to build extensions

I found another tool called pickle, which was intended to replace PECL but became dormant as well. I noticed some activity on the project, including a relatively recent release, and I tried that first.

The tool worked very well for the APCu and Redis extensions. However, for YAML, it failed because it could not parse YAML’s beta version number (2.2.0b2). I found that this was fixed in a recent commit, but that would have meant building pickle in my Docker image rather than just downloading and using it. I was not looking to go that route.

Building the extension manually

This left me with only one option: building the extensions myself. Fortunately, this turned out to be much simpler than I thought. You can see the steps required for each extension in lines 54-67 in the reference repository’s Dockerfile. For each extension, there are essentially just two steps:

  1. Clone the source code of the extension
  2. Run phpize, ./configure, make, and make install to build the extension

We need the PHP source available to use the above tools and this is easily achieved using a helper script in the Docker image. You can see it being used in line 39 in the reference repository. Once we build the extensions, we clean up the PHP source to keep our Docker image size small. This is what the complete step looks like:

docker-php-source extract;
git clone https://github.com/krakjoe/apcu.git; cd apcu;
phpize; ./configure; make; make install; cd ..;
# ... Install the other extensions the same way
docker-php-source delete;

Enabling extensions

Now that the extensions are installed, we can use the same script (docker-php-ext-enable) as earlier to enable the extensions. In our reference repository, you can see this done on lines 69-72. Thanks to these helper scripts, we now have our extensions enabled and configured for both the PHP module (for Apache) and the CLI. This can be verified by running the following command:

php -m

The above command will list all the enabled PHP extensions (internal ones as well) and you should be able to find apcu, redis, and yaml in the list.

Bringing it together

Now, we need to make sure that Drupal works with the above extensions. Since APCu and YAML extensions are used by the core, we should see any issues immediately. We can even verify that Redis is connected and APCu is being used by looking at the status report page, as shown in the following screenshot. 

Tweet from Hussainweb

For Redis, we need to install the Drupal module as well, since Drupal core doesn’t use it directly. We will discuss installing modules in another post.

PHP 8 remains an important milestone in PHP history, not just because of cool new features but also because it delivered on the trusted release cycle promised at the release of PHP 7. A predictable release cycle helps build trust and consistently brings new features and innovation to the product. We saw that with Drupal 8’s regular six-monthly release cycle, and we will see it with PHP as well.

Mar 01 2021

I joined Axelerant with the thrilling purpose of cultivating and fostering its participation and contribution to the Drupal community. And I did, despite a daunting obstacle: I don’t code, and until then I had only heard a few things about Drupal.

As soon as I began this journey, I verified that Drupal (the technology) is completely interlinked with the community that sustains it. To understand one you must know the other. Though you can get a kick out of reading about the advantages of headless Drupal or the improvements that will come with the release of D10, it is obvious that what holds the project together is the willingness of its members to advance, connect, and share knowledge. Hence, the motto “Come for the code, stay for the community.”

Everybody has their first time

In every community, face-to-face encounters are essential to solidify our personal and professional bonds. They are mandatory to get the true sense of a community. Therefore, as soon as I embarked on this endeavor I knew I needed to add an event experience to my Drupal immersion.

Yet, the global crisis unleashed by the COVID-19 pandemic is preventing us all from attending large and exciting live events. Was I supposed to meet this need online, while navigating widespread virtual fatigue? The truth is that, although we are all a little tired of facing our screens, technology has proven its capability of bringing us close, even eliminating borders. As a result, I decided to sign up for free for my first Drupal Camp, and I was lucky to have debuted at the Bay Area Drupal Camp, or BADCamp, which describes itself as the world’s raddest Drupal camp.

Pierina at BAD Camp 2020

Travelling from Buenos Aires to the world’s raddest Drupal camp

The Bay Area Drupal Camp is “An annual celebration of open-source software normally based in Berkeley, California” that has already trained over 3,000 Drupalers. This year, for the very first time, they went the extra mile and ran fully virtual from October 14th to 17th.

From day one, the organizers ensured the attendees were aware of the Drupal Code of Conduct, and indeed all the interactions I had and the overall environment were imbued with respect and collaboration.

The first couple of days were exclusively for summits and training. I joined for the last two days of the event to attend sessions. The schedule offered a wide variety of topics for all levels of experience, which allowed me to achieve my goal: understanding the full range of knowledge-sharing and learning that happens in these events, without feeling like an outsider. I was able to participate in meetings related to non-code contributions from which I earned valuable resources.

Thank you for making it happen!

Thanks to the organizers and volunteers who made it happen. Surely it would have been easier to suspend the event until next year, but you took the time and effort to carry it through and the result was impeccable.

609 registrations · 515 attendees · 32 countries

Congratulations!

What I experienced at BADCamp 2020

Two experienced Drupal contributors from Axelerant participated at BADCamp as speakers: Mohit Aghera and Kunal Kursija. Obviously, I wanted to watch them in action. Their sessions were tagged as “Drupal for beginners”, and each drew over 20 attendees. It was very compelling to see how they interacted with the audience, covering concepts as well as practical tips and showcasing live demos. They answered all questions and provided further examples when needed, and of course, in an open-source collaborative spirit, shared their slides and links to the sample code repositories.

Go ahead, relive the sessions, and check out the resources.

 

 
This will be helpful for both site builders and developers. 

  

Learn about the tiny, mighty, and hidden gems of Drupal, “filters”, which process and display user-submitted content. 

As I was planning my schedule, I literally felt this session title was talking to me. Baddy did a great job explaining specific ways you can contribute to Drupal beyond code. And she managed to make it appealing, sharing her own journey of choosing non-code contributions even though she has the needed technical skills. Thanks to her inspiring talk, I realized I can offer valuable support by pitching in with these kinds of contributions. So, if you like to design, are good at organizing projects or events, or enjoy writing, reviewing, or translating content, find out how you can help here

This session offered valuable insights towards building the marketing arm of an open-source project, highlighting the importance of storytelling, both within the community and towards external audiences. Suzanne stressed the need to pinpoint whom the Drupal community is talking to, and how to adapt marketing efforts to different personas (from developers encountering Drupal for the first time to marketers and evaluators who will never install Drupal but hold decision-making positions in organizations). I personally engaged with the idea of telling better stories around code contributions within the community. Good stories are easier to amplify, and knowing the people behind the code makes it easier to relate and is always inspiring. Stories boost empathy, motivation, and a sense of belonging: all things that foster a healthy culture in any community.

The Promote Drupal Initiative already produced valuable resources in this direction: the Drupal brand book (setting design elements but also the tone of voice and the brand personality) and marketing materials such as press releases, pitch decks, and one-pagers. Visit the Promote Drupal page to download the resources and/or if you want to contribute to the Drupal brand.


Overall, I had a rich experience: I witnessed first-hand the power of innovation and knowledge-sharing behind Open Source, and I now have proof that the Drupal community is guided by a culture of openness, respect, and collaboration.

P.S. 1: If you’re interested in web content accessibility, I recommend you watch this session to learn interesting insights, tools, and distinctions between accessibility, usability, and authentic digital inclusion. Check out the video here.

P.S. 2: Thanks again to the BADCamp organizers for including the farm tour and the pet meet & greet; it was a creative and conscious way to mitigate virtual fatigue.

Check out all the recorded sessions of BADCamp 2020 here.

Mar 01 2021

Understanding the Menu API in Drupal

In Drupal 8, the menu system has become much more flexible than in Drupal 7, and its areas of functionality are now separated into different systems. 

The routing system now handles the following:

  • Associating paths with controllers
  • Access checking
  • Parameter upcasting
  • Serving as a basis for path access

The menu system is now a collection of: 

  • APIs for menu links
  • Local tasks
  • Contextual links defined by modules 

While working on one of our client projects, we came across a requirement to have an admin for every country who could add and edit that country’s details. 

We decided to model Country as a vocabulary, with each country as a term in it. The details of a country are made available through fields on the term. Having done that, we now have every country mapped to its country admin. The term edit page should be made available to them as a menu link so that it’s easier to edit the details of the respective country, as shown in the screenshot below.

Screenshot: editing country details in the backend

Defining Menu links

Menu links in Drupal 8 are defined in a YAML file inside the module, following the naming convention module_name.links.menu.yml. Since menu links are plugins themselves, they are discovered using the YAML discovery type.

Sample menu link definition: 

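A minimal definition looks like this (a sketch: the module name mymodule, the link key, and the route are assumptions):

```yaml
# mymodule.links.menu.yml
mymodule.country_details:
  title: 'Country details'
  route_name: example.route
  parent: system.admin_content
```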

Here, the title key is required, and route_name specifies the route the menu link points to.

If we specify a parent link that already belongs to a menu, we no longer need to mention the menu_name; after clearing the cache, the menu link is added to that menu.

Adding dynamic values to the menu link

With the use case that we discussed, we wanted to add a menu link to the edit page of the Country term that maps to the country of the currently logged-in user.

The route to term edit page is entity.taxonomy_term.edit_form with the path /taxonomy/term/{taxonomy_term}/edit

Here, {taxonomy_term} is the route parameter that must be supplied to the menu link so that the path can change dynamically.

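One way to wire this up is to point the menu link definition at a custom plugin class via the class key (the class and module names are assumptions):

```yaml
# mymodule.links.menu.yml
mymodule.country_edit:
  title: 'Edit country details'
  route_name: entity.taxonomy_term.edit_form
  class: Drupal\mymodule\Plugin\Menu\CountryEditMenuLink
```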

To make this route parameter available dynamically, we need to extend the MenuLinkDefault class, which contains the required information for the default menu link behavior.

Providing the route parameters using the Menu Link plugin class

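A sketch of such a plugin class, assuming the user's country is stored in a field named field_country (the class name, namespace, and field name are illustrative, not the project's actual code):

```php
<?php

namespace Drupal\mymodule\Plugin\Menu;

use Drupal\Core\Menu\MenuLinkDefault;
use Drupal\user\Entity\User;

/**
 * Menu link pointing to the country term edit form of the current user.
 */
class CountryEditMenuLink extends MenuLinkDefault {

  /**
   * {@inheritdoc}
   */
  public function getRouteParameters() {
    // Resolve the term ID of the country assigned to the logged-in user;
    // field_country is an assumed field name.
    $user = User::load(\Drupal::currentUser()->id());
    return [
      'taxonomy_term' => $user->get('field_country')->target_id,
    ];
  }

}
```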

Using the getRouteParameters() method, we pass the term ID into the path /taxonomy/term/{taxonomy_term}/edit.

Now it’s available to the menu-link and changes dynamically when the user logs in.

Disabling menu links dynamically

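In the menu link plugin class, the override could look like this (field_country is an assumed field name storing the user's country):

```php
  /**
   * {@inheritdoc}
   */
  public function isEnabled() {
    // Hide the link entirely when the user has no country assigned,
    // instead of rendering a link with an empty route parameter.
    $user = User::load(\Drupal::currentUser()->id());
    return !$user->get('field_country')->isEmpty();
  }
```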

In the above example, we wanted to enable the menu link only if a country value is present; passing an empty string as the route parameter could lead to a page with a broken link. 

So, we overrode the isEnabled() method. Returning FALSE when the condition is met hides the menu link altogether from the logged-in user.

This is one of the ways in which a menu link can be altered dynamically. Other options are hooks: hook_menu_links_discovered_alter() is commonly used for statically defined menu links, and hook_link_alter() for altering link parameters. Here is the list of hooks the Menu API has to offer.

Jan 07 2021

In a recent project, one of our clients required us to validate data in CSV files against custom requirements. The validated CSVs then needed to be imported into various content types in Drupal 8.

In this article, we will look at the requirement, the library, the architecture of the custom module, and the different components of the module with some code samples, and finally share some ideas on how this module can be made more reusable and even contributed.

Introduction

Our client is a well known international NGO with offices worldwide, each with different types of data management systems and frameworks. They wanted a centralized system to manage the data from each of these offices. Having concluded that Drupal 8 was the ideal solution to implement that centralized system, the challenge was to set up a migration pipeline to bring in data from all of the offices and their varying frameworks. Consequently, the files generated by these systems needed to be validated for specific constraints before being imported into our Drupal system.

Challenges and Goals

Following are the goals that the system should meet:  

  1. The CSV files were in a custom format, and there were multiple files with different structures that needed to be handled accordingly. Each column needed its own validator. 
  2. The files needed to be validated for errors before they could be imported and the errors needed to be logged with line numbers and relevant error messages. 
  3. The validation had to be triggered automatically when the files were downloaded from a central location. 
  4. Notification emails had to be sent on successful and failed validation to the IT admins. 
  5. After successfully validating the files, the validator needed to trigger the next step of the process, which is importing the files.

The main challenges

  1. The validation had to cross-reference the incoming data with existing data and also with data in different files (referential integrity checks). 
  2. We also had to check the uniqueness of certain columns in the CSV files. Doing this in a database is pretty easy and straightforward, but this had to be done before inserting it into the database.

Step 1: Choosing a CSV reader library

The first step was to figure out a PHP-based CSV reader library. League CSV turned out to be the best option for the following reasons:

  1. It is managed by Composer and is already used by the Migrate module in Drupal core, so no additional code needed to be added for the library to work.
  2. The library covered many common scenarios like iterating through rows of the CSV, getting the field values and headers, and streaming large CSV files.
  3. And finally, it was implemented in an object-oriented way.
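As an illustration of the API, reading a CSV with League CSV 9 looks roughly like this (countries.csv and its columns are hypothetical):

```php
<?php

use League\Csv\Reader;

require 'vendor/autoload.php';

// Stream the file instead of loading it fully into memory.
$csv = Reader::createFromPath('countries.csv', 'r');
// Treat the first row as the header.
$csv->setHeaderOffset(0);

foreach ($csv->getRecords() as $offset => $record) {
  // $record is an associative array keyed by header names,
  // e.g. ['name' => 'France', 'code' => 'FR'].
}
```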

Step 2: Architectural requirements

Below are the requirements we had concerning the architecture of the code:

  1. The code needs to work as an independent service so that it can be called from different places in the codebase, invoking validation wherever required.
  2. The validations need to be as generic as possible so that the same validation rule can be reused for different fields in the same CSV or in others.
  3. We need to have an extensible way to specify the validation to be done for each field. For example, whether a specific field can be allowed to be blank.

Step 3: Designing the components of the validator

To satisfy the above architectural requirements, we designed the validator module into the following sub-components:

The main service class

Below are the main responsibilities of this class:

  1. Load the CSV library and loop through each of the files in a particular folder.
  2. Use the methods supplied by the CSV league to read the file into our variables. For example, each row of the file will be stored in an array with an index containing each column data.
  3. During processing, the filename is taken in and checked to see if the validator method in the Validator class matching the filename exists.  
  4. If the method exists, then validation is done for the file and errors are logged into the error log table.
  5. If there are no errors, the class triggers the next step, migration, by dispatching a predefined custom event via Drupal's Event API. 
  6. This also passes the status of the import to the calling class so that emails can be triggered to the site admins.
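Steps 2–4 above can be condensed into a sketch like this (the property names and the validateCountries() naming convention are assumptions, not the project's actual code):

```php
// Derive the validator method name from the file name,
// e.g. "countries.csv" => "validateCountries".
$method = 'validate' . ucfirst(basename($file, '.csv'));

if (method_exists($this->validators, $method)) {
  foreach ($csv->getRecords() as $line => $row) {
    // Each validator returns a list of error messages for the row.
    foreach ($this->validators->{$method}($row) as $error) {
      $this->errorLog->log($file, $line, $error);
    }
  }
}
```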

The Validators class

Here, we assign the constraints for each file type in a dedicated method. The input to each validator method is a complete row.  

The Constraints class

This class contains the individual constraints that check whether a particular column type meets the required criteria. These constraints are methods that take the column value as a parameter and return an error message if it does not meet the criteria for that column type. This class is invoked from the validators class for each column in every row.
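Individual constraints can then be small, reusable methods along these lines (the method names and messages are illustrative):

```php
<?php

/**
 * Sketch of a constraints class: each method takes a column value and
 * returns an error message, or NULL when the value is valid.
 */
class Constraints {

  public static function notEmpty($value): ?string {
    return trim((string) $value) === '' ? 'Value must not be empty.' : NULL;
  }

  public static function date($value): ?string {
    return \DateTime::createFromFormat('Y-m-d', (string) $value)
      ? NULL
      : 'Value must be a date in Y-m-d format.';
  }

}
```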

The Error log

As its name suggests, the validator needed to capture the errors and log them somewhere. We defined a custom table using the database hooks provided by Drupal. A custom view was defined in code to read the data from this table. The errors captured by the constraint class were logged into the database using this logger.

Eventsubscriber and mail notification

We needed the validation to be auto-triggered when the files were downloaded. To achieve this, we tapped into Drupal’s EventSubscriber and Response APIs. 

Referential Integrity checks

Most of the columns did not have any relation to existing data and could be validated on the fly. However, some of the data had to be validated for corresponding references, either in the database or in another CSV file. We did this as follows:

  1. For those values which act as a parent, dump them into a temporary table, which will be cleared after validation is completed.
  2. When we arrive at another CSV with a column that references values dumped above, then we query the above table to check if the value is present. If yes, return TRUE.
  3. If the value is not present in the temporary table, then we search the Drupal database as the value might have been imported as part of the previous import. If not, then we throw a referential error for that row in the CSV.
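A sketch of the lookups in steps 2 and 3, using Drupal's database and entity query APIs (the temporary table and field names are assumptions):

```php
// 1. Check the temporary table populated from the parent CSV.
$exists = \Drupal::database()
  ->select('csv_parent_values', 'p')
  ->condition('p.value', $value)
  ->countQuery()
  ->execute()
  ->fetchField();

if (!$exists) {
  // 2. Fall back to data imported in a previous run.
  $tids = \Drupal::entityQuery('taxonomy_term')
    ->condition('name', $value)
    ->accessCheck(FALSE)
    ->execute();

  if (!$tids) {
    return sprintf('Referential integrity error: "%s" not found.', $value);
  }
}
```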

The code snippets are available here.

We used the migrated data as a source for a headless backend using REST. For more details on the specifications, refer to our blog on how to validate API response using OpenAPI3.

Future scope and ideas to extend this as a module by itself

We have written the module with an architecture where the validators can be reused but require some coding effort. Below are changes that can be done to make this module a contribution.

  1. Add configurations to have a list of files that need to be validated.
  2. Each file will have an option to add the fields that need to be validated and the type of data (similar to what you have when creating content type).
  3. Based on the above list of files and field types, we can validate any number of CSVs with any number of columns. 
  4. We would need to modify the above classes to fetch each column's data type and call the respective constraints for each CSV.

As a result of doing the above changes, anyone will be able to use this module to validate CSV files with their own columns.

We hope this blog helped you understand this module and how it can be made more reusable and even contributed. Share your experience in the comments below! 

Jan 07 2021

Drupal is a popular web-based content management system designed for small to large enterprises with needs such as complex workflows, multilingual content, and enterprise integrations. An increasing number of organizations move to Drupal from their current systems every year and with richer features being added to Drupal 9 and planned for 10, the growth will only accelerate. This means that migrations to Drupal remain an ever-popular topic.

Drupal provides a powerful and flexible migration framework that allows us to “write” migrations in a declarative fashion.

The migration framework supports a variety of sources and the ability to specify custom sources and destinations. Furthermore, the framework provides a powerful pipelined transformation process that allows us to map source content to destination fields declaratively.

Thanks to this framework, migration is more of a business challenge rather than a technical one. The overall process (or workflow) of the migration may differ depending on various business needs and attributes of the current (source) system. Depending on the type of migration, we may plan to reuse in-built migrations (in core or contrib), selectively adapt migrations from different sources, or entirely write new migrations. Further, depending on the source, we may choose to migrate incrementally or at one-time.

Many similar decisions go into planning an overall migration strategy and we’ll talk about the following here:
 

1. Migration Concepts

2. Understanding the content

3. Drupal to Drupal migration

4. Migration to Drupal from another system

5. Migration from unconventional sources
 

Migration Concepts

The Drupal migration framework is composable, which is why it can be used flexibly in many scenarios. The basic building block (not to be confused with Drupal entities) is called a migration. Each migration is responsible for bringing over one discrete piece of content from the source to the destination. This definition is more technical than a business one, as a “discrete piece” of content is determined by Drupal’s internal content model and may not match what you might expect as an editor.

For example, an editor may see a page as a discrete piece of content, but the page content type may have multiple files or paragraph fields or term references, each of which has to be migrated separately. In this case, we would have a separate migration for files, paragraph fields, and so on, and then eventually for the page itself. The benefit of defining migrations this way is that it allows the migrate framework to handle each of these pieces of the content itself, providing features like mapping IDs and handling rollbacks.

Correspondingly, a migration specifies a source, a destination, and a series of process mappings that define the transformations that a piece of content may go through while being mapped from a source field to a destination field. These are called plugins (because of their internal implementation). We might use different source plugins depending on the source system with the common ones provided by Drupal core (for sources such as Drupal, SQL databases, etc.).

There are dozens of contributed modules available for other systems such as WordPress, CSV files, etc. Similarly, process plugins are diverse and influential in allowing a variety of transformations on the content within the declarative framework. On the other hand, destination plugins are limited because they only deal with Drupal entities.
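Putting the three parts together, a migration is declared in YAML along these lines (the ids, fields, and bundle are illustrative; the csv source plugin comes from a contributed module):

```yaml
id: example_pages
source:
  plugin: csv
  path: /data/pages.csv
  ids: [id]
process:
  title: headline
  body/value:
    plugin: callback
    callable: trim
    source: body_text
destination:
  plugin: entity:node
  default_bundle: page
```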

Incremental Migrations

The Drupal migrate framework supports incremental migrations as long as the source system can provide a certain “highwater mark” that indicates whether the content has changed since the last migration.

A common “highwater mark” is a timestamp indicating when the content was last updated.

If such a field is not present in the source, we could devise another such field as long as it indicates (linearly) that a source content has changed. If such a field cannot be found, then the migration cannot be run incrementally, but other optimizations are still available to avoid a repeat migration.
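In a migration definition, the highwater mark is declared on the source plugin; assuming the source exposes a changed timestamp, it would look like this:

```yaml
source:
  plugin: d7_node
  node_type: article
  high_water_property:
    name: changed
```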

Handling dependencies and updates

The migrate framework does support dependencies between different migrations, but there are instances where there might be dependencies between two content entities in the same migration. In most cases, the migrate framework can transparently handle this by creating what are known as “stubs.” In more complex cases, we can override this behavior to gain more control over stub creation.

As discussed in the previous section, it is better to use “highwater marks” to handle updates, but they may not be available in some cases. For these, the migrate framework stores a hash of the source content to track whether the migration should be run again. Again, this is handled transparently in most cases but can be overridden when required.

Rollbacks and error management

As long as we follow the defined best practices for the migrate framework, it handles fancier features such as rollbacks, migration lookups, and error handling. Migrate maintains a record of each content piece migrated for every migration, its status, hashes, and highwater marks. It uses this information to direct future migrations (and updates) and even allow rollbacks of migrations.
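With the Migrate Tools contributed module installed, these features are exercised from the command line, for example (the migration id is hypothetical):

```shell
# Import, re-importing source rows that changed since the last run.
drush migrate:import example_pages --update

# Inspect status and any recorded errors.
drush migrate:status example_pages
drush migrate:messages example_pages

# Undo everything this migration created.
drush migrate:rollback example_pages
```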
 

Understanding the content

Another important part of the equation is the way content is generated. Is it multilingual? Is it user-generated content? Can content be frozen/paused during migration? Do we need to migrate the revision history, if available? Should we be cleaning up the content? Should we ignore certain content?

Most of these requirements may not be simple to implement, depending on the content source. For example, the source content may not have any field to indicate how the content is updated and in those cases, an incremental migration may not be possible. Further, if it’s impossible to track updates to source content using simple hashes, we may have to either ignore updates or update all content on every migration. Depending on the size of the source and transformations on the content, this may not be possible and we have to fall back to a one-time migration.

The capabilities of the source dictate the overall migration strategy.

Filtering content is relatively easy. Regardless of the source, we can filter or restructure the content within the migration process or in a custom wrapper on a source plugin. These requirements may not significantly impact the migration strategy.

Refactoring the content structure

A migration can, of course, be a straightforward activity where we map the source content to the destination content types. However, a migration is often a wonderful opportunity to rethink the content strategy and information flow from the perspective of end-users, editors, and other stakeholders. As business needs change, there is a good chance that the current representation of the content may not provide for an ideal workflow for editors and publishers.

Therefore, it is essential to look at the intent of the site and user experience it provides to redefine what content types make sense then and in the near future. At this stage, we also look at common traits that distinguish the content we want to refactor and write mappings accordingly. With this, we can alter the content structure to split or combine content types, split or combine fields, transform free-flowing content to have more structure, and so on. The possibilities are endless, and most of these are simple to implement.

Furthermore, in many cases, the effort involved in actually writing the migration is not significantly different.
 

Drupal to Drupal migration

This is usually the most straightforward scenario. The core Drupal migrate framework already includes source plugins necessary for reading the database of an older version of Drupal (6 or 7). In fact, if the intention is to upgrade to Drupal 8 or 9 from Drupal 6 or 7, then the core provides migrations to migrate configuration as well as content. This means that we don’t even need to build a new Drupal site in many simple cases. It is simply a question of setting up a new Drupal 8 (or 9) website and running the upgrade.

However, Drupal is rarely used for simple cases, and any non-trivial site needs rebuilding. 

A typical example is “views,” which are not covered by migrations. Similarly, Page Manager pages, Panels, etc., need to be rebuilt as they cannot be migrated. Further, Drupal 8 brought improved and updated techniques to build sites, and in those cases the only option is to rebuild the functionality with the new techniques.

In some cases, it is possible to bring over the configuration selectively and remove the features you want to rebuild using a different system (or remove them altogether). This mix-and-match approach enables us to rebuild the Drupal site rapidly and also use the migrations provided in core to migrate the content itself. Furthermore, many contributed modules augment or support the core migration, which means that Drupal core can transparently migrate certain content belonging to contributed modules as well (this often happens in the case of field migrations). If the modules don’t support a migration path at all, this would need to be considered separately, similar to migration from another system (as explained in the next section).

Incremental migrations are simpler to write in case of Drupal-to-Drupal migration as the source system is Drupal and it supports all the metadata fields such as timestamps of content creation or updates. This information is available to the migrate framework, which can use it to enable incremental migrations. If the content is stored in a custom source within the legacy Drupal system and it does not have timestamps, a one-time migration may have to be written in that case. See the previous section on incremental migrations for more details.

While Drupal-to-Drupal migrations can be very straightforward and even simple, it is worth looking into refactoring the content structure to reflect the current business needs and editorial workflow in a better way. See the section on “Refactoring the content structure” for more details.
 

Migration to Drupal from another system

Migrating from another popular system (such as WordPress) is often accomplished by using popular contrib modules. For instance, there are multiple contrib modules for migrating from WordPress, each of which migrates from a different source or provides different functionalities. Similarly, contrib modules for other systems may offer a simple way to define migration sources.

Alternatively, the migrate framework can directly use the database source to retrieve the content. Drupal core provides a source that can read all MySQL compatible sources and there are contributed modules that allow reading from other databases such as MSSQL.

Similar to the Drupal migration source, features such as incremental migrations, dependencies, and update tracking may be used here as long as their conditions are satisfied. These are covered in detail in earlier sections. 

Check out the case study that outlines migrating millions of content items to Drupal from another system.
 

Migration from unconventional sources

Some systems are complex enough to present a challenge during migration, even with the sophistication of source plugins and custom queries. Or there may be times when the content is not conventionally accessible. In such scenarios, it may be more convenient to have an intermediate format for content such as a CSV file, XML file, or similar formats. These source plugins may not be as flexible as a SQL source plugin (as advanced queries or filtering may not be possible over a CSV data source). However, with migrate’s other features, it is still possible to write powerful migrations.

Due to limitations of such sources, some of the strategies such as incremental migration may not be as seamless; nevertheless, in most cases, they are still possible with some work and automation.

An extreme case is when content is not available in any structured format at all, even as CSVs. One common scenario is when the source is not a system per se, but just a collection of HTML files or an existing web site. These cases are even more challenging as extracting the content from the HTML could be difficult. These migrations need higher resilience and extensive testing. In fact, if the HTML files vary significantly in their markup (it’s expected when the files are hand-edited), it may not be worth trying to automate this. Most people prefer manual migration in this case.
 

Picking a strategy

Wherever possible, we would like to pick all the bells and whistles afforded to us by the migrate framework, but, as discussed previously, a lot of it depends on the source. We would like every migration to be discrete, efficient with incremental migration support, track updates, and directly use the actual source of the content. However, for all of this to be possible, certain attributes of the source content system must be met as explained in the “Understanding the content” section.

The good news is that we often find a way to solve the problem and it is almost always possible to find workarounds using the Drupal migrate framework.

Dec 21 2020

PHP 8 beta 4 is out. In fact, the chances are that by the time you read this, we might even have the first RC.

PHP 8 adds a lot of exciting new features, but at the same time, being a major version, it breaks a lot of previous behaviors and functionalities.

Getting Drupal to work on PHP 8 is not as simple as getting it to work on a new minor release such as PHP 7.4.

The Drupal community began planning to fix the compatibility issues early on. And as releases started rolling out, there were individual issues to address each deprecation, changed method signatures, and other breaking changes. These fixes went into a single issue so that we could run a single test against PHP 8. That is the patch I started with when I wanted to test Drupal 9 with PHP 8. To make it even more fun, I also used Composer 2 for all of these steps.

Why am I writing this?

It’s clear that this article may not have any value at all in some time when Drupal 9 officially supports PHP 8, along with all of its dependencies. Why am I writing this then? For one, I believe writing down things helps clarify the ideas and goals. It serves as documentation that can help throughout the process of experimentation. Secondly, I hope that parts of this article will be useful to people who are trying to upgrade their own complex applications to work with PHP 8.

The challenges I describe here are more relevant to applications that need to support a spectrum of PHP versions, not just one or two.

Drupal 8 supports PHP 7.0 to 7.4 right now and the issue I mentioned earlier also tries to add support for PHP 8 to Drupal 8.9 as well (it looks like it might just happen as well). This makes the challenge of supporting PHP 8 in Drupal even bigger as we have to support several breaking changes simultaneously.

Also, many of the problems may not be relevant to applications that need to run on one version of PHP as they just have to change code to match the changes in PHP 8.

I will also not try to explain all the changes that have gone in to support PHP 8. I’ll just talk about the parts that I analyzed, reviewed, or changed myself. With all that said, let’s begin.

Problem 1: Environment and initial setup

Docker is great for setting up quick environments for testing and development. At Axelerant, we usually use Lando for setting up a project (in fact, our project template tool supports generating a default scaffold for Lando). But that wouldn’t fit my needs here because Lando doesn’t support PHP 8 yet. Anyway, docker-compose is much simpler for something like this. I only need two services to begin with: a web server container (with PHP) and a database.

The Docker and PHP communities maintain a great starting point in the form of official PHP Docker images in a variety of flavors: CLI, FPM, and with Apache on Buster. We use the last one here and add various PHP extensions and settings optimized for Drupal. I already maintain a collection of Drupal-optimized PHP images, and I only had to adapt that to work with the PHP 8 beta 4 image. The only difference is that pecl is no longer included with PHP as of PHP 8, so I removed the lines that used it. That meant that common PHP extensions such as APCu and YAML wouldn’t be available, but that’s okay for a first attempt. (I eventually added them to the image anyway.)
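The image build might look roughly like this. This is an illustrative sketch, not the exact Dockerfile from my repository: the base image tag and the extension list are assumptions, chosen to show the shape of a Drupal-oriented PHP 8 build.

```dockerfile
# Illustrative sketch of a Drupal-oriented PHP 8 image.
# The base tag and extension list are assumptions, not the real Dockerfile.
FROM php:8.0.0beta4-apache-buster

# Build dependencies and common Drupal extensions.
# pecl-based extensions (apcu, yaml) are skipped here because
# pecl is no longer bundled with PHP as of PHP 8.
RUN apt-get update \
    && apt-get install -y libpng-dev libjpeg-dev libzip-dev \
    && docker-php-ext-configure gd --with-jpeg \
    && docker-php-ext-install gd opcache pdo_mysql zip \
    && a2enmod rewrite
```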

The other service in the docker-compose file is for MariaDB, and I use Bitnami’s Docker image here. There is an official one, but I am more used to the Bitnami image as Lando uses it. I don’t do any fancy setup with MariaDB as this is just for experimentation. You can see the docker-compose.yml and Dockerfile on GitHub.

Apart from the Docker environment, we also need a Drupal site. Fortunately, this was very easy with the templating tool I mentioned earlier. Once the axl-template tool is installed, I just run this command:

init-drupal hussainweb/test-d9p8c2 --core-version "^[email protected]"

Typically, this would have been enough for a good starting point, but the template is optimized for composer 1. It includes certain packages that improve composer’s performance with Drupal. However, I want to use composer 2, and those packages are not required; in fact, they don’t even work. So, after the init-drupal command above, I remove that package before upgrading composer to version 2. I also update composer-patches to the latest dev release, which has the updated “require” statements to work with composer 2.

composer remove zaporylie/composer-drupal-optimizations
composer require cweagans/composer-patches:"^[email protected]"

Now, I’m good to update composer to the latest version (2.0 RC1 as of this writing):

composer self-update --2

At the time of this writing, the composer-patches plugin doesn’t work with composer 2. I have a PR open with the final fix (as of now), and I just used my fork as the repository for the package. Normally, in the Drupal world, I would have tried to apply a patch, but the plugin responsible for applying patches is the broken piece here. Anyway, using a fork is cleaner. This commit shows how I used my fork, which works properly with composer 2. By the time you’re reading this, you might not have to do this at all.
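The mechanics of pointing composer at a fork look roughly like this. The repository URL and version constraint below are placeholders for illustration, not the actual fork used in the article:

```json
{
    "repositories": [
        {
            "type": "vcs",
            "url": "https://github.com/YOUR-FORK/composer-patches"
        }
    ],
    "require": {
        "cweagans/composer-patches": "dev-master"
    }
}
```

With a `vcs` repository entry, composer resolves the package name from the fork’s own composer.json, so the fork transparently replaces the upstream package under the same name.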

Another thing to note is that my local machine is still running PHP 7.4. This is important because many of Drupal’s dependencies do not declare support for PHP 8 and cannot be installed on it (unless we use the --ignore-platform-reqs flag).

This is good enough to get a basic environment running. Spin up the Docker containers and find the port that the web container exposes (look at the output of docker ps -a). Access the site, and you might see an error message.

Problem with dependencies

Drupal is built on top of many packages and components from the wider PHP world, and they have to support PHP 8 as well. As of right now, most of these components are not installable on PHP 8 due to the requirements in their composer.json. However, I ran composer on my local machine, which still runs PHP 7.4, so it didn’t complain about PHP 8. This was obviously a risk, and we should never do this on a production site (you shouldn’t be running a PHP 8 beta in production right now anyway). But for this experiment, I wanted to try running those components on PHP 8 despite their composer.json restrictions. The good news is that none of those components has caused any problems in my tests so far.

Of course, this is a blocker to add PHP 8 support for Drupal and is being tracked in this issue.

Problem 2: Fix Drupal errors

First, I faced problems with incompatible method declarations in doctrine/reflection. The problems were in the getConstants and newInstance methods; I fixed both instances manually (it was trivial) and moved ahead. It turned out that they had already been fixed in the patch, so I need not have worried.

This fix at least got Drupal’s installation pages loaded, and I went as far as the page where we enter the database details. (I eventually configured the .env file and never saw that page again, but that’s beside the point.) At this step, I saw an error related to an invalid method signature for a PDO method. In PHP 8, the method signatures of PDOStatement::fetchAll and PDOStatement::fetch have changed. Unfortunately, Drupal 8 and 9 have a wrapper on these methods with the old signatures. This now needs to change for PHP 8, but changing it would break support for PHP 7, and this creates our complication.

The solution is rather brilliant hackery by Alex Pott, where we introduce two interfaces: one for PHP 7 and one for PHP 8. Depending on the PHP version, we alias the relevant interface, and that alias is what the actual class implements. Then, to handle both method signatures, we have two different traits, again one for PHP 7 and one for PHP 8. We alias the relevant trait depending on the PHP version, and that trait gets used in the class. It looks something like this:

if (PHP_VERSION_ID >= 80000) {
  class_alias('\Drupal\Core\Database\Php8StatementInterface', '\Drupal\Core\Database\StatementInterfaceBase');
}
else {
  class_alias('\Drupal\Core\Database\Php7StatementInterface', '\Drupal\Core\Database\StatementInterfaceBase');
}

interface StatementInterface extends StatementInterfaceBase, \Traversable {
  // ...
}

Similarly, the traits are aliased, and each trait calls a new helper method in the actual Statement class (the helper is just the previous method renamed). For example, the erstwhile fetchAll method now becomes doFetchAll, and since it is a different name, it doesn’t matter what signature it has. The fetchAll method now resides in the relevant trait with the appropriate signature for the PHP version and simply calls doFetchAll. This way, we get two different method signatures in the interfaces and traits depending on the PHP version!
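To make the trait side concrete, here is a minimal, self-contained sketch of the same pattern. The names and bodies are illustrative assumptions, not the actual core code:

```php
<?php
// Illustrative sketch of version-aliased traits; not the actual core code.

trait Php7FetchTrait {
  // PHP 7 style signature with discrete optional parameters.
  public function fetchAll($mode = NULL, $a1 = NULL, $a2 = NULL) {
    return $this->doFetchAll($mode, $a1, $a2);
  }
}

trait Php8FetchTrait {
  // PHP 8 style signature using variadics.
  public function fetchAll($mode = NULL, ...$args) {
    return $this->doFetchAll($mode, ...$args);
  }
}

// class_alias() works for traits as well as classes and interfaces, and it
// runs before the class below is declared, so the alias is available.
if (PHP_VERSION_ID >= 80000) {
  class_alias('Php8FetchTrait', 'FetchTraitBase');
}
else {
  class_alias('Php7FetchTrait', 'FetchTraitBase');
}

class Statement {
  use FetchTraitBase;

  // The real logic lives under a name whose signature Drupal owns,
  // so it can stay identical on both PHP versions.
  protected function doFetchAll($mode = NULL, ...$args) {
    // ...
  }
}
```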

The above changes can be reviewed in the patch at the issue where this is being worked on at the time of this writing. When I tested, the patch only contained support for the differing signatures of the fetch method. The signature of the fetchAll method had changed as well, so I added support for that to the patch. This solved the problem with installing Drupal. I was surprised, and happy, that there were no more errors during the rest of the installation, and I was soon greeted with my new site’s homepage running Drupal 9.1 on PHP 8!

Before I go on to the next problem, I should note that the issue here with PDOStatement is really because of a mistake in the design.

As developers, we should never directly depend on the API of something we can’t control. The PHP version is something that we, as Drupal core developers, can’t control. We can certainly require a minimum version of PHP, but we can’t control how PHP itself changes its APIs.

Intertwining our business logic with PHP’s API brings risks such as this. Like Alex Pott says in a comment in that issue, “These are not our methods. These are from \PDOStatement and their signature is owned by PHP and not by Drupal.”

Fortunately, Drupal boasts high code coverage in its automated tests, making the refactoring about as safe as we can hope for. There might still be problems with contributed modules that extend this method and rely on Drupal’s implementation on top of the PHP wrapper. In a complex product like Drupal, refactoring code like this is precarious.

Problem 3: Dynamic routes

I was elated with a working install on PHP 8 and wanted to test further, all the while keeping an eye on the error log. I found a few niche errors in how Drupal behaved on dynamic routes, but that turned out to be a problem with the changes in PDOStatement::fetchAll and how it behaves with optional parameters. The old signature was very messy, and I am glad that PHP 8 changed the method to use variadics. The problem here was in how the traits wrapped the call to the actual method in the relevant Statement class. I had to use an ugly switch..case block to account for the number of parameters, similar to how it was already handled in Drupal core. You can see the changes in this patch.
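The dispatch on argument count that the PHP 7 side of the wrapper needs might look roughly like this. This is a hypothetical sketch, not the exact code from the patch:

```php
<?php
// Hypothetical sketch of the PHP 7 side of the fetchAll() wrapper.
// PDOStatement::fetchAll() behaves differently depending on how many
// arguments were actually passed, so we cannot simply forward NULLs.
trait Php7StatementTrait {

  public function fetchAll($mode = NULL, $column_index = NULL, $constructor_arguments = NULL) {
    // Forward only the arguments the caller actually supplied.
    switch (func_num_args()) {
      case 0:
        return $this->doFetchAll();

      case 1:
        return $this->doFetchAll($mode);

      case 2:
        return $this->doFetchAll($mode, $column_index);

      default:
        return $this->doFetchAll($mode, $column_index, $constructor_arguments);
    }
  }

}
```

On PHP 8, the variadic signature makes this dispatch unnecessary, since `...$args` already captures exactly the arguments that were passed.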

Problem 4: CKEditor warnings

I noticed multiple warnings being logged by CKEditor when I opened the node add/edit form. The warnings came from a method called CKEditorPluginManager::getEnabledButtons, and they were due to how a certain parameter was passed into the callback for array_reduce. Fortunately, this turned out to be an easy fix because there was no need at all to pass that parameter by reference, and the patch was quickly committed. The issue contains more details and sample code for reproducing the problem.

Next steps

If you want to test this yourself, you can find my code here and set it up locally. You will need composer and Docker. I hope the setup is self-explanatory, but I will soon add documentation to the repository.

I was able to run Drupal 9 and perform many actions on PHP 8. But not many people need just Drupal core. To test complex sites, I needed to exercise as many features as possible in core, and in contributed modules as well. I will write about this in a subsequent post, and maybe even do a simple benchmark comparing PHP 7.4 and 8.0 performance.

Check out the next part of this series - Upgrade Drupal To PHP 8: Compiling Extensions and watch out for more! 
