Nov 11 2020

Part 1 | Part 2 | Part 3
Drupal 7 was originally scheduled to reach end-of-life in November 2021, but owing to the coronavirus pandemic and the significant proportion of Drupal site owners still on that version, Drupal maintainers have extended the Drupal 7 end-of-life to November 2022. Tag1 Quo, Tag1's extended support service for Drupal, offers you peace of mind when it comes to receiving all of the important security updates that Drupal 7 Extended Support (D7ES) vendors maintain. But Tag1 Quo goes well beyond the minimum requirements of the D7ES program, because we cover all of your contributed modules and notify you right away of security patches that need your attention.

[embedded content]


In this special three-part series, join Tag1's Jeremy Andrews (Founder and CEO), Michael Meyers (Managing Director), and your Tag1 Team Talks host Preston So (Editor in Chief) for a wide-ranging conversation about extended support in Drupal, what Drupal 7 site owners need to know today about extended support, and how Tag1 Quo is the optimal solution among D7ES vendors for your mission-critical Drupal 7 implementations. When it comes to Drupal 7 past end-of-life, Tag1 has your back.

Part 1 | Part 2 | Part 3

For a transcript of this talk, see Transcript: Drupal 7 extended support- Pt. 2: An overview of the Tag1 Quo Drupal 7 extended support service & how it works. - Tag1 TeamTalk #028.2.

Photo by Tim Mossholder on Unsplash

Nov 09 2020

Part 1 | Part 2

Drupal is notorious for its "everyone has a voice" approach to open-source development, but it isn't easy to reach consensus across thousands of people with different backgrounds and opinions. In addition, Drupal has witnessed countless paradigm shifts in its lengthy history, both in the surrounding world of web development and in its internal workings. As Drupal has grown to power over two percent of the websites on the entire internet, many new workflows and governance structures have had to be put in place to guarantee the continued longevity of the Drupal community. In addition, with end-of-life quickly approaching for Drupal 7, contributors now have to juggle a widening range of versions.

[embedded content]

In this very special two-part series, join Angie Byron (Senior Director, Product and Community Development at Acquia), Michael Meyers (Managing Director at Tag1 Consulting), and your Core Confidential host Preston So (Editor in Chief at Tag1 Consulting), for a fireside chat with the one and only Angie Byron. We dove into some of the amazing (and not so amazing) things Angie has seen over the course of her fifteen years deeply involved in Drupal core development as well as what people need to know about Drupal core today and in the near future.

Part 1 | Part 2

For a transcript of this video, see Transcript: Core Confidential with Angie Byron (webchick) : The many faces of Drupal over 15 years - Pt. 2.

Photo by Jr Korpa on Unsplash

Nov 02 2020

Part 1 | Part 2

Open-source software development isn't easy. Few people know this more intimately than Angie Byron (webchick), one of the best-known community leaders in the Drupal ecosystem and Senior Director, Product and Community Development at Acquia. Over the course of Angie's fifteen years of Drupal contribution, the content management system has undergone a series of disruptive and significant changes that have reinvented the community multiple times over. As cat-herder of over 30,000 developers all around the world, Angie has had her fair share of experiences in Drupal core development.

Drupal is notorious for its "everyone has a voice" approach to open-source development, but it isn't easy to reach consensus across thousands of people with different backgrounds and opinions. In addition, Drupal has witnessed countless paradigm shifts in its lengthy history, both in the surrounding world of web development and in its internal workings. As Drupal has grown to power over two percent of the websites on the entire internet, many new workflows and governance structures have had to be put in place to guarantee the continued longevity of the Drupal community. In addition, with end-of-life quickly approaching for Drupal 7, contributors now have to juggle a widening range of versions.

[embedded content]


In this very special two-part series, join Angie Byron (Senior Director, Product and Community Development at Acquia), Michael Meyers (Managing Director at Tag1 Consulting), and your Core Confidential host Preston So (Editor in Chief at Tag1 Consulting), for a fireside chat with the one and only Angie Byron. We dove into some of the amazing (and not so amazing) things Angie has seen over the course of her fifteen years deeply involved in Drupal core development as well as what people need to know about Drupal core today and in the near future.

Links

How automatic updates finally made it to Drupal

Revisiting DrupalCon Global: Drupal 7 and Drupal 8 End of Life

For a transcript of this video, see Transcript: Core Confidential #3.1 Core Confidential with Angie Byron (webchick) : Learning about 15 Years of Drupal Contribution - Pt. 1.

Part 1 | Part 2

Photo by Jr Korpa on Unsplash

Oct 15 2020

Goose, the load testing software created by Tag1 CEO Jeremy Andrews, has seen a number of improvements since its creation. One of the most significant is the addition of Gaggles.

A Gaggle is a distributed load test, made up of one Manager process and one or more Worker processes. The comparable concept in Locust is a Swarm, and it's critical for Locust as Python can only make use of a single core: you have to spin up a Swarm to utilize multiple cores with Locust. With Goose, a single process can use all available cores, so the use case is slightly different.

As we discussed in a previous post, Goose Attack: A Locust-inspired Load Testing Tool In Rust, building Goose in Rust increases the scalability of your testing structure. Building Goose in this language enabled the quick creation of safe and performant Gaggles. Gaggles allow you to horizontally scale your load tests, preparing your web site to really take flight.

Distributing workers

Goose is powerful and fast. It is so CPU-efficient that you are more likely to saturate the bandwidth of a network interface than to run out of processing power; you'll likely need multiple Workers to scale a load test beyond a 1G network interface.

In our Tag1 Team Talk, Introducing Goose, a highly scalable load testing framework written in Rust, we introduced the concept of Workers.

In a Gaggle, each Worker does the same thing as a standalone process with one difference: after the Worker's parent thread aggregates all metrics, it then sends these metrics to the Manager process. The Manager process aggregates these metrics together.

By default, Goose runs as a single process (a minimal example invocation follows this list) that consists of:

  • a parent thread
  • a thread per simulated "user" (GooseUser), each executing one GooseTaskSet (which is made up of one or more GooseTasks)
  • a Logger thread if metrics logging is enabled with --metrics-file
  • a Throttle thread if limiting the maximum requests per second with --throttle-requests
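
To make those flags concrete, here is a hedged example of launching a compiled Goose load test standalone (the binary invocation and host are illustrative, and option names can differ between Goose versions, so check --help for yours):

cargo run --release -- --host https://example.com --users 100 --hatch-rate 10 --run-time 15m --metrics-file=metrics.log --throttle-requests 200

Each simulated user gets its own thread, while the --metrics-file and --throttle-requests options start the Logger and Throttle threads described above.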

Distributed in area, not just in number

Sometimes it's useful to generate loads from different places, clouds, and systems. With a Gaggle, you can spin up VMs in multiple clouds, running one or more Workers in each, coordinated by a Manager running anywhere. Typically, the Manager runs on the same VM as one of the Workers -- but it can run anywhere as long as the Workers and Manager have network access to each other. For additional information, see the Goose documentation on creating a distributed load test.
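
As a rough sketch of that setup (addresses, ports, and counts are illustrative, and depending on your Goose version Gaggle support may need to be enabled as a compile-time feature; see the Gaggle documentation for the exact flags):

# On the Manager, expect two Workers and coordinate 1,000 total users between them:
cargo run --release -- --manager --expect-workers 2 --host https://example.com --users 1000

# On each Worker, connect to the Manager and wait for the start signal:
cargo run --release -- --worker --manager-host 192.0.2.10 --manager-port 5115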

Summarized metrics

While metrics are usually collected on the Workers, a Gaggle summarizes all metrics from its tests by sending them to the Manager. A Goose load test is a custom application written in Rust, and instead of (or in addition to) printing the summary metrics table, it can return the entire structure so your load test application can do anything it wants with them. You can use these load tests and metrics to track the performance of your web application over time.

GooseUser threads track details about every request made and about every task run. The actual load test metrics are defined in the Goose metrics struct. Workers send their metrics to the Manager, which aggregates them all together and summarizes them at the end of the load test.

By default, Goose displays the metrics collected during the ramp-up period of a load test (i.e., first there's 1 user, then 2, then 3, and so on, until all N users in the test have started). It then flushes those metrics and starts a timer, so that from then on it only collects metrics while all simulated users are running at the same time. It then displays "running metrics" every 15 seconds for the duration of the load test. Finally, when the test finishes (by running for a configurable amount of time, or when you hit ctrl-c on the Manager or any Worker), it displays the final metrics.

The parent process merges like information together; instead of tracking each individual "GET /" request, it tracks how many requests of that kind were made, how fast the quickest was, how slow the slowest was, total time spent, total requests made, how many succeeded, how many failed, and so on. It stores this information in the GooseTaskMetric and GooseRequest structures. Individual GooseUser threads collect metrics as requests complete and push them up to the parent thread, which displays real-time metrics for the running test every 15 seconds.

If the Metrics Log is enabled (by adding --metrics-file=), Goose records statistics about all requests made by the load test to a metrics file. Currently, each Goose Worker retains its own Metrics Log, and these can then be merged together manually; the Manager cannot store a Metrics Log (if you are interested in the Manager storing the log, follow this issue). The metrics collected and displayed are the same whether Goose runs standalone or in a Gaggle. The documentation on Running the Goose load test explains (and shows) most of the metrics that are collected.

Automated tests confirm that Goose is perfectly tracking actual metrics: the test counts how many requests the web server sees, and confirms it precisely matches the number of requests Goose believes it made, even when split across many Gaggle Workers.

Gaggles in use

During development, Jeremy spun up a Gaggle with 100 Workers. The coordination of the Gaggle itself has no measurable performance impact on the load test because it happens in an additional non-blocking thread. For example, if we start a 5-Worker Gaggle and simulate 1,000 Users, the Manager process splits those users among the Workers: each receives 200 users. From that point on, each Worker does its job independently of the others: they're essentially just 5 standalone Goose instances applying a load test and collecting metrics.

Communication between the Workers and the Manager is handled in a separate, non-blocking thread. The Manager waits for each Worker to send metrics to it, and aggregates them, showing running metrics and final metrics. For specifics on how the Workers and Manager communicate, read the technical details.

Once a Worker gets its list of users to simulate it tells the Manager "I'm ready". When all Workers tell the Manager they're ready, the Manager sends a broadcast message "Start!" and from that point on Workers aren't ever blocked by the Manager.

Verified consistency of Workers

When running in a Gaggle, each copy of Goose, meaning the Manager and all the Workers, must run from the same compiled code base. We briefly explored trying to push the load test from the Manager to simpler Workers, but Workers run compiled code, making this incredibly non-trivial.

Running different versions of the load test on different Workers can have unexpected side-effects, from panics to inaccurate results. To ensure consistency, Goose performs a checksum to confirm Workers are all running the same load test. Goose includes a --no-hash-check option, to disable this feature, but leaving it enabled is strongly recommended unless you truly know what you're doing.

Taking flight

Using Gaggles with Goose can significantly increase the effectiveness of your load tests.

While standalone Goose can make full use of all cores on a single server, Gaggles can be distributed across a group of servers in a single data center, or across a wide geographic region. Configured properly, they can easily and effectively use whatever resources you have available. Goose tracks the metrics you need from those distributed Workers, helping you understand your site's performance now and in the future.

Photo by Ian Cumming on Unsplash

Oct 05 2020

At DrupalCon Global 2020, Moshe Weitzman, Senior Architect and Project Lead at Tag1, and the creator of Drush (the Drupal Command Line), presented his case for a more robust command line tool for Drupal administration. Many Drupal developers and website builders rely on command line tools to get their work done.

[embedded content]

Drupal core today

The command line options in Drupal core are implemented as Symfony console commands, and have limited usage compared to similar commands available in Drush.

Core command     Drush analog
DbDumpCommand    sql:dump
InstallCommand   site:install
ServerCommand    runserver

There are several issues with the core commands: DbDumpCommand supports only MySQL and lacks options such as --structure-tables or --extra-dump, and InstallCommand is SQLite-only, with no support for multisite or install profile fields.
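
For comparison, a hedged sketch of the Drush equivalents exercising the options the core commands lack (the values shown are illustrative; see the Drush documentation for the full option lists):

drush sql:dump --result-file=../backup.sql --structure-tables-list=cache,cache_*,sessions,watchdog
drush site:install standard --db-url=mysql://user:password@localhost/drupal --sites-subdir=example.com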

The proposal

Moshe's proposal is to use a Composer-based approach to add Drush to every Drupal website by including it in the drupal/core-recommended project for Drupal 9. Current Drupal core commands would remain Symfony Console commands and exist for testing core. This maintains the existing framework, while adding highly useful commands for general development work.

Adding Drush to the core-recommended project gives people the choice of starting with a plain Drupal core installation, or making the choice to use the community’s recommended modules and tools.
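
For reference, the manual step that sites take today to get Drush, and which the proposal would make unnecessary for most sites, is a single Composer command:

composer require drush/drush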

Why this works

Moshe presents a number of reasons why this proposal is an improvement for the community:

  • Improves the out-of-the-box experience by providing a robust command line for all sites.
  • When Drush is part of the package, Drupal documentation can be consistent, especially for installation.
  • Contrib and core developers can ship Drush commands with confidence. There's no confusion over which kind of command to write: Drupal Console, Symfony Console, or Drush.
  • All sites can monitor their non-Drupal dependencies using a standardized package. Drush ships with pm:security and pm:security-php, enabling website owners and developers to quickly see when Drupal projects or PHP dependencies are insecure and need updates (see the example commands after this list).
  • The new deploy command standardizes Drupal deployment. This command applies a best practice to deployment, making Drupal adoption quicker and easier.
  • Adding Drush to core-recommended leverages 10+ years of development as part of standard Drupal.
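
As a quick illustration of the commands referenced above (output and exact behavior vary by Drush version):

drush pm:security        # report Drupal projects with published security advisories
drush pm:security-php    # report insecure PHP (Composer) dependencies
drush deploy             # standardized deployment: database updates, config import, cache rebuild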

What to consider

While these items are far from downsides, they must be considered as part of this effort:

  • The Drush team would need to coordinate with Release Management. This happens already, and would not be a real change.
  • Security Team coordination is also imperative. Moshe is on the security team, so this work already happens.
  • Core and Drush have separate issue queues and repos. While some may want everything in the same queue, keeping the queues separated helps keep issues focused.
  • Drush is functionally tested for many installation permutations (PHP version, Drupal core version, DB backend, etc.). It's tested with a real Drupal installation, not a mocked test case.
  • Developers who want to create sites without this tool can easily opt-out by using composer remove drush/drush.

Adding Drush to core-recommended may also help contributed projects by enabling them to ship their own Drush commands, and it would make it easier to add Drush commands to core modules.

Photo by Alexandru Acea on Unsplash

Sep 22 2020

At DrupalCon Global 2020, Tag1 Consulting CEO Jeremy Andrews and Managing Director Michael Meyers talked about the upcoming Drupal 7 and Drupal 8 end of life. In their talk, they discussed:

  • What Drupal “End of Life” (EOL) means
  • When it happens, and how D7 and D8 differ
  • Why widely used versions are retired
  • Practical implications and your options
  • What Drupal vendor Extended Support (ES) Is
  • Why ES only covers D7, and not D8
  • How ES operates, and what it covers

[embedded content]

What Drupal “End of Life” (EOL) means

When a version of Drupal reaches EOL, the Drupal community no longer works on it or provides free support. Core development stops entirely: no new features are added and no bugs are fixed. The core issue queues are locked, and you will be prevented from adding anything to Drupal 7 and 8 core on Drupal.org.

While Drupal 6 core is locked, its contrib space is still open, and Drupal 7 and 8 will likely follow the same pattern, so maintainers can still update their modules. In practice, though, very few 6.x modules were updated after the Drupal 6 EOL, and many maintainers closed their branches as well; expect most contrib module owners to close their 7.x and 8.x branches.

In addition to the issues queue, the testing infrastructure will most likely be decommissioned. The PHP versions supported by these versions of Drupal are also EOL. Each D7ES vendor has a member on the security team, but the Drupal security team will no longer be proactively involved in reviews. This may mean less secure code unless you participate in a D7ES program, or manually track patches as they become available.

When is the EOL date?

Drupal 7 reaches end of life in November of 2022. It was originally scheduled for November of 2021. The date was extended due to the large user base, and the difficulties stemming from the global coronavirus pandemic.

Drupal 8 reaches end of life on November 2, 2021. While it may be confusing that Drupal 8 reaches EOL before Drupal 7, Drupal 8 is dependent on Symfony 3, which reaches the Symfony community’s EOL at that time.

Why widely used versions are retired

There are many reasons to retire older software.

  • Legacy: Drupal 7 is 10 years old, and will be ~12 by EOL in Nov. 2022
  • Bandwidth: Developers can’t support Drupal 7 and 8 while building Drupal 9 and 10
  • Interest: Developers don’t want to focus on 10 year old technology
  • Innovation: Improvement through innovation is the best for Drupal as software

Drupal 7 and 8's end of life is a challenge. Rebuilding your website on new technologies (which also have their own EOL schedules) can be expensive and time-consuming. Drupal 8 to Drupal 9 is an easier upgrade, making its EOL less problematic for users.

Practical implications and your options

Drupal 7 users have several options. The higher cost options are:

  • Don’t migrate. This is a bad choice, because it leaves your website vulnerable to attack, potentially losing data.
  • Migrate to Drupal 9. This is a good choice, keeping your site up to date with a similar ecosystem.
  • Move to a new CMS. Similar in cost to a Drupal 9 migration, with the added cost of training your teams on new technologies.

Lower cost options are:

  • End of life your website.
  • Turn your website into a static website.
  • Keep running Drupal 7, and work with a Drupal 7 vendor in the Extended Support (D7ES) program.

What Drupal vendor Extended Support (ES) is

D7ES ensures Drupal 7 remains secure and safe to run. Companies approved to offer D7ES must provide at least the following services:

  • Security patches for D7 core, including vulnerabilities that are reported for supported versions of Drupal
  • A specific list of contributed modules will be identified, and security patches will be provided for them
  • Vendors must make a commitment to offer these services for at least 3 years; at a minimum, you should receive D7ES through 2025
  • All patches created by D7ES vendors must be open source - if a D7ES vendor fixes any problem with D7, they are obligated to release the fix
  • Vendors must have an active member on the Drupal Security Team

The more the community supports this effort, the more vendors will offer it, and the more fixes they’ll be able to provide. Official vendors are vetted by the Drupal Association and listed on the D7 Vendor Extended Support page. Drupal 7 ES does not follow a formal release schedule, and your website must be on the final EOL version of Drupal 7 in order to participate in the vendor programs.

To learn more about Tag1’s extended support program, see Tag1 Quo - the enterprise security monitoring service.

Photo by Ankhesenamun on Unsplash

Sep 21 2020

In his Drupal4Gov webinar Using tools and Git workflow best practices to simplify your local development, Greg Lund-Chaix, Senior Infrastructure Engineer at Tag1, talks about some of the ways that teams struggle when they become successful and need to learn to scale. He recommends using some basic tools to make your workflow easier. The right tools in your environment can prevent big problems down the line with merge conflicts, code committed to the wrong branch, or other mistakes.

Git is one of the most common version control systems in use today. With a few tools, and a few best practices, you and your team can make your local environments easier and safer to use.

[embedded content]

Rebasing

Learn to rebase. When you're working on a feature, it might take anywhere from a few hours to several days or weeks. The longer you work on a branch, the greater the chance your branch is out of sync with the rest of the code base. The cleanest way to prevent problems is to rebase. Rebasing checks where your branch diverged from main, pulls in all the changes, and replays your changes on top of them. Rebasing also keeps your commit history from getting cluttered.

Here’s an example of how Greg works:

[laptop] ~ $ git checkout feature123-amazing-stuff
[laptop] ~ $ git pull --rebase origin main
[laptop] ~ $ git add my_amazing_module.module
[laptop] ~ $ git commit
[laptop] ~ $ git pull --rebase origin main
[laptop] ~ $ git push -u origin feature123-amazing-stuff

Greg starts his day by checking out his feature branch. Greg always works in feature branches, as one of his core four rules. He can’t be sure if anyone has committed code since he last checked this branch, so he rebases his branch against the main branch on his codebase’s origin. Everyone else’s changes are pulled in, and Greg’s changes are added back on top of them. Now, if someone has made changes to the same files Greg has, he’s able to see the conflicts before they go into the main branch. A pull --rebase does not add a merge commit, and keeps your commit history cleaner.
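
If a rebase does stop on a conflict, the usual recovery sequence looks like this (the file name is illustrative):

[laptop] ~ $ vim my_amazing_module.module   # resolve the conflict markers
[laptop] ~ $ git add my_amazing_module.module
[laptop] ~ $ git rebase --continue          # or git rebase --abort to back out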

Now, Greg knows his changes are clean. During the day, he adds and commits more changes. Just before the end of the day, he rebases again, ensuring his code is as clean and safe as possible before he finally pushes it at the end of the session.

Git prompts for sanity

Anyone who has ever worked on a command line has felt the pain of doing something in the wrong directory, deleting the wrong file, or moving something to the wrong place. Your command line prompt can be a helpful indicator for where you are, what you’re doing, and the status of your local repository.

Git includes a script for showing your repository status in your command prompt. Download the git-prompt.sh file or look for it in your git source, and customize it to your needs. This prompt file does several helpful things:

  • Tells you what branch you’re on
  • Adds a red percent (%) sign when there are untracked files in the repository
  • Adds a red asterisk (*) when there's a changed file in the repository
  • Adds a green plus sign (+) when a file has been added using git add

These indicators flag that there is some change that has not been committed. This may be expected, but it may also be a sign that something has gone wrong - for example, if your indicators show up when you’re on the main branch.
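
For example, a minimal bash setup wiring git-prompt.sh into your prompt might look like this (the download path is illustrative; the colored variants require the script's additional color options):

source ~/git-prompt.sh
export GIT_PS1_SHOWDIRTYSTATE=1       # enables the * (changed) and + (staged) indicators
export GIT_PS1_SHOWUNTRACKEDFILES=1   # enables the % (untracked files) indicator
export PS1='\w$(__git_ps1 " (%s)")\$ '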

Greg made a secondary script available for prompts, which indicates where you’re making the changes: locally, or on a server. If this might help you, download the servertype-prompt.sh script for yourself. This script depends on git-prompt.sh. Whether or not that prompt displays is an indicator for where you’re working.

Sniff your code

Unlike the more traditional network sniffer, which analyzes your packet traffic on the wire, a code sniffer reviews your code and checks it against a predefined standard. PHP_CodeSniffer “tokenizes PHP files and detects violations of a defined set of coding standards.”

Drupal developers are often familiar with the Coder module, written by Tag1 Senior Architect Doug Green, which checks your Drupal code against coding standards and other best practices.

To include these in your codebase, run the following Composer commands:

composer require --dev drupal/coder
composer require --dev dealerdirect/phpcodesniffer-composer-installer

For a full tutorial on installing Coder and PHP Codesniffer, see Installing Coder Sniffer on drupal.org.

Coder module is designed to work with PHP CodeSniffer, making it easy to integrate with your continuous integration platform. Setting your team up with this kind of integration enables automated coding standard reviews on every pull request.
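
Once both packages are installed, running the sniffer from your project root looks roughly like this (the path to your custom code is illustrative; the Drupal and DrupalPractice standards are provided by Coder):

./vendor/bin/phpcs --standard=Drupal,DrupalPractice --extensions=php,module,inc,install,theme web/modules/custom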

Other useful tools

Pre-commit hooks can run checks for you before you make a commit locally. You can set up a hook to run the code sniffer before you commit: if the sniffer fails, your commit fails, too! This can prevent some of the more obvious mistakes from ever making it into your local commits. See an example Drupal pre-commit hook by Marco Marctino (mecmartini), or the full documentation at pre-commit.
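
A minimal hook along these lines (a sketch, not the linked example; save it as .git/hooks/pre-commit and make it executable) could be as simple as:

#!/bin/sh
# Abort the commit if the coding-standards check fails.
./vendor/bin/phpcs --standard=Drupal,DrupalPractice web/modules/custom || exit 1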

Tig is a visual command line tool that lets you see commit and log information for your repository. It has an interface that enables you to walk the list of commits, and see the details more easily.

Users who are less comfortable or familiar with the command line may find a graphical interface to be friendlier. Many GUIs are available for Git. If you’re struggling with the command line, check out a GUI and make your life easier.

About Drupal 4 Gov

Drupal 4 gov is an open source community for developers and IT professionals with an interest in making the government more open to open source.

It encompasses many open source projects but has its beginnings in the Drupal project.

Drupal4gov offers:

Photo by MJ Tangonan on Unsplash

Sep 14 2020

In his Drupal 4 Gov webinar Using tools and Git workflow best practices to simplify your local development, Tag1 Senior Infrastructure Engineer Greg Lund-Chaix talks about some of the ways that teams struggle when they become successful and need to learn to scale. One of his primary focuses for teams is helping them learn how to improve their development workflow.

Local development environments give developers the tools to quickly prototype and run their code, and make testing and debugging easier. You can use local environments to create code that can be pushed to the repository, and shared with the team easily and cleanly - enabling discussions and peer review early in the development process.

One of Greg’s key tenets is “No editing on the servers.” Working locally keeps mistakes from reaching the primary codebase. Using a local environment is a best practice.

Tag1 developers often use these two tools:

  • DDEV
  • Lando

These tools are open source, extensible, and have many available integrations. They make it quicker and easier to create a local environment than using a tool like MAMP, or configuring your own setup with MySQL, Apache, and so on.

Tools like DDEV and Lando also ensure your local environment matches your production environment, preventing long troubleshooting sessions when something works locally, but not on production.

How to get up and running with DDEV

Here’s a quick guide to getting started with DDEV.

[embedded content]


  1. Install DDEV.
  2. From a command line, start with a basic Composer-ized installation of Drupal. Greg created a template repository you can use as a starting point. You can have as little as a directory for custom code, the composer.json file, and a README.txt file.
  3. Enter composer install. Composer downloads Drupal core and all of its dependencies, giving you a basic Drupal codebase.
  4. At the prompt, enter ddev config.
  5. The command prompt updates. DDEV checks the directory name, and assumes the project name is based on that directory. Update the name here if that is incorrect, or press the Enter key to accept the default.
  6. DDEV checks the contents of the directory. If you’ve installed Drupal, it recognizes the installation. You may also select a different Project Type from the list DDEV supplies.
  7. DDEV setup is complete. To run DDEV, type ddev start.

DDEV starts the Docker containers. If you have never run DDEV before, this may take a few minutes while it pulls the containers down from the repository. When ready, a message similar to this displays:

DDEV setup is complete when the `Successfully started` message displays.

Click the link, or copy and paste it into your browser, to load your new Drupal website.

Two commands, one Drupal website!
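
In other words, starting from a Composer-based project template, the whole flow is the Composer step plus the two DDEV commands:

composer install
ddev config
ddev start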

What next?

When DDEV runs for the first time, it creates the .ddev/config.yaml file in the directory. Git ignores this file by default; consider adding it to your repository. This ensures everyone who checks out the repository has the same configuration.

View the file in your choice of text viewers.

An example of DDEV's `config.yaml` file.

This file is customizable, enabling you, your developers, and your DevOps teams to make changes that ensure your local environment matches your production servers.
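
As an illustration of the kind of settings it contains (the keys below are typical of a generated DDEV configuration, but the values are examples and your file will differ):

name: my-drupal-site
type: drupal9
docroot: web
php_version: "7.4"
webserver_type: nginx-fpm
xdebug_enabled: false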

From here, Greg suggests using Ansible, Puppet, or another configuration management tool to deploy your code to production.

Now you should be able to run DDEV, and explore its uses!

About Drupal 4 Gov

Drupal 4 gov is an open source community for developers and IT professionals with an interest in making the government more open to open source. This blog was based on Greg’s Drupal 4 Gov presentation.

It encompasses many open source projects but has its beginnings in the Drupal project.

Drupal 4 gov offers:

Photo by Lysander Yuen on Unsplash

Aug 12 2020

Throughout Drupal's history, contributors have rallied around Drupal.org as the single source of truth for both the code running Drupal and the infrastructure powering Drupal's issue queues, source control, and automated testing. As the Drupal Association continues on its journey to integrate GitLab features with Drupal.org, we're beginning to see the first glimpses of how Drupal contribution and issue management will evolve thanks to cutting-edge functionality like merge requests and issue forks in GitLab. But what happens with all of the surrounding tooling for Drupal.org, including DrupalCI and the longstanding issue queues themselves?

Soon, for the first time, Drupal contributors will be able to create merge requests and issue forks that make reviewing, suggesting changes to, and testing code much easier across the board. All activity will be reported back to the issue on Drupal.org, but contributors will be able to provide direct code comments and code review on the merge request as well. With Drupal.org's massive store of metadata, landing on the optimal solution required significant discovery and evaluation of how Drupal contribution would change in light of the move to GitLab. And with an initial beta launched just before DrupalCon Global this year, there are already more than a hundred projects in the Drupal ecosystem participating in the program.

In another special edition of our Tag1 Team Talks series, learn about what features will be part of Drupal.org's support for merge requests and issue forks, what the future of DrupalCI looks like in light of modern approaches to automated testing, and how you can not only take advantage of the ongoing beta but also how you can get involved as a contributor to one of the most important initiatives in Drupal's history. Join Tim Lehnen (Chief Technology Officer, Drupal Association), Neil Drumm (Senior Technologist, Drupal Association), Michael Meyers (Managing Director, Tag1 Consulting), and your host Preston So (Editor in Chief, Tag1 Consulting) for a fireside chat about the future of contribution and collaboration on Drupal.org.

[embedded content]

---

Links

For a transcript of this video, see Transcript: The inside story on Drupal.org's coming support for merge requests and issue forks - Tag1 TeamTalk #022.

Photo by Clay Banks on Unsplash

Aug 10 2020

Part 1 | Part 2 | Part 3 | Part 4 | Part 5 | Part 6 | Part 7

Slow or intermittent connections are an all-too-common case that many users face when attempting to work with applications. Offline-enabled applications are a particularly challenging use case because they require synchronization and a local understanding of data. Thankfully, with the help of Yjs, an open-source real-time collaboration framework, and IndexedDB, a browser-based local database optimized for offline content, anyone can build an offline-enabled application that also includes features for rich collaboration. When combined with the functionality provided by y-indexeddb, the adapter for IndexedDB in Yjs, developers can make use of a wide variety of features that aid progress towards an authentically offline-first application.

A short while ago, I (Preston So, Editor in Chief at Tag1 and author of Decoupled Drupal in Practice) had the unique opportunity to moderate an enthralling dialogue on the Tag1 Team Talks show about how Yjs empowers offline-first applications with the support of my Tag1 co-workers Kevin Jahns (creator of Yjs and Real-Time Collaboration Systems Lead at Tag1), Fabian Franz (Senior Technical Architect and Performance Lead at Tag1), and Michael Meyers (Managing Director at Tag1). In this blog series, we take a look at how to build offline applications with Yjs. In this installment, we cover how offline editing could function in Drupal and the important question of how Web Workers figure into the equation.

How offline editing could work in Drupal

Before we kick off our discussion of offline editing and its prospective incorporation into Drupal, I encourage you to check out the first and second installments of this blog series about offline editing with Yjs in order to gain a complete understanding of what is involved when building offline applications, particularly the spectrum of possibilities enabled by technologies like Service Workers and IndexedDB. The previous installment of this series also includes discussion of y-indexeddb, the Yjs adapter for IndexedDB and a key bridge for offline functionality.

Illustrating offline editing examples

During our Tag1 Team Talks episode, Fabian encouraged the audience to consider a real-world content management scenario that many users of content management systems (CMSs) are doubtless familiar with. Consider a situation in which you have dozens of tabs open, and you are editing content in your CMS of choice. Now consider the scenario in which you leave your computer and return later to continue working, forgetting in the process that you have a Drupal article open in your CMS on your browser.

Then, in the Drupal CMS, you return to your administrative interface, logging in and editing your content, which is all present thanks to offline persistence of your data locally. Everything you have modified in the content is already there automatically; you don't have to do anything else. You can even open, in another browser tab, a different draft of the content that you had edited previously. This illustrates the importance of being able to count on conflict resolution when managing offline editing across varied systems.

Now consider another scenario in which we are editing a Drupal.org issue. Upon a typical computer crash and power outage, we lose access to the internet and thus connectivity to the most up-to-date status of an issue on Drupal.org. If the Drupal.org issue were to be connected locally to our IndexedDB database via y-indexeddb, all of the content we had created for a particular issue would still be available, and therefore we wouldn’t lose any content in the process of reconnecting.

This is a significant problem that often seems insurmountable, especially given the fact that Drupal’s Autosave module, which permits automatic saves on keystrokes in Drupal’s editorial interface, can prove problematic in this environment. After all, what happens when a save to the server is impossible due to the lack of a stable connection? Though other APIs exist, such as HTML5 Local Storage, this approach only works to some extent and does not alleviate the fact that multiple tabs containing different modifications to the same content may be open at the same time on a user’s browser.

Service Workers and IndexedDB

Kevin offered another compelling use case during our comprehensive conversation on our recent Tag1 Team Talks episode about enabling Yjs offline editing. While a web browser crashing or losing internet connectivity is nothing new, flight passengers have unique situations that require substantial delays in synchronizing once again with the server. (For instance, your correspondent is, at this very moment, writing this blog post offline on a plane without any access to the internet, even though other modifications could be occurring in the content I’m currently editing.)

Fortunately, because Service Workers have access to your local IndexedDB database, the browser will be able to successfully sync your content to your server such that other applications can see what you edited, as soon as you reconnect. This means that you can create offline-enabled applications that are progressively enhanced and also support use cases in Drupal that require collaboratively edited documents to be made possible through the emerging web technologies we have described over the course of this series.

Supporting collaborative editing in Drupal

Consider, for example, the case of a document in Drupal that is collaboratively edited over time by many users. You, however, are a user who has just gone offline. Nonetheless, you immediately see the content displayed in Drupal: there is no need to wait for the server, because all local content is served from the IndexedDB database that is housing your data offline. And instead of exchanging the entire document, y-indexeddb ensures that only the parts that are modified are exchanged.

In the process, the server will send to you an update of the content that was created in the meantime by other users while you were disconnected, and you will, in a corresponding fashion, submit to the server an update representing the content you modified and updated over the course of your offline editing. Thus, we can see that y-indexeddb is a compelling enhancement to any web application that requires collaboration if you wish to build an offline-first application that also comes with effective performance optimizations out of the box. As Fabian joked on our recent Tag1 Team Talks episode, such offline-first functionality is nothing short of material from a James Bond movie, because if there’s no internet, you can still ensure that data is sent to the server in any secret agent scenario.

What about Web Workers?

Service Workers and IndexedDB are the stars when it comes to enabling offline-enabled applications with collaborative editing, but how do Web Workers figure in the picture? Web Workers, after all, are an additional browser API available in addition to Service Workers that allow applications to perform computationally complex calculations. For instance, if you need to perform major reconciliation on a Yjs document that is being collaboratively edited, you can initialize a Web Worker that retrieves a Yjs document from the local IndexedDB database and performs said calculations before executing an extremely expensive synchronization job to match up with the data held by the server.

In addition, if you wish to render an HTML page based on the data that is present in the Yjs document and subsequently send it to the main thread, Web Workers make these heavy computations possible with as much efficiency as possible. In the process, this also allows you to exchange data through IndexedDB across browser contexts. In short, IndexedDB is a local database that multiple processes can access concurrently and leverage to accomplish different necessary tasks.

Extending Service Workers with WebSockets

The Service Worker in question, as Kevin notes, may also establish a WebSocket connection to the server in order to synchronize the data, or even leverage a different technology besides WebSockets to sync the data as needed. The main thread, that is, your website, doesn't necessarily need to sync data through WebSockets in real time. During our Tag1 Team Talks episode, Fabian illustrated a case in which many changes on your local environment need to be synchronized without conflicts to the server, something achievable thanks to Yjs.

Conflict resolution through Yjs still engenders some latency, and after all, as end users, we are loath to wait even more than a few seconds for content to be updated. Many of us react with annoyance when presented with a spinning loader reflecting the extensive content synchronization required. If you place all of this capacity in a Web Worker instead, you can proceed to modify your document and view live changes that have been synchronized with the server, and all of this occurs in the background. A Web Worker gives you a means for all the interactions to occur without blocking the interaction that users conduct, a very important consideration for performance enthusiasts.

Conclusion

Modern web development has wrought many changes in the front-end landscape, but perhaps no emergence of new web technologies has been as influential for enterprise organizations as the advent of offline-enabled applications that allow collaborators to synchronize changes to the server as needed but retain all functionality when connectivity is unavailable. Fortunately, with the increasing popularity of Yjs, an open-source real-time collaboration framework, and IndexedDB, a local database solution for housing data needed offline, developers have never had it easier when it comes to implementing offline applications that also support collaborative editing.

In this blog post, we traced some of the ways in which offline applications can be at the forefront rather than an afterthought in Drupal’s own ecosystem. We also analyzed how Web Workers can enhance an offline application to provide even more functionality. In the next installment in this multi-part blog series about offline editing with Yjs, we discuss how y-webrtc, the subject of a previous Tag1 Team Talks episode, intersects with the features that Yjs provides for offline collaboration.

Special thanks to Fabian Franz, Kevin Jahns, and Michael Meyers for their feedback during the writing process.

Links

About Yjs

Demos

Peer-to-peer shared types

Yjs paper

IndexedDB API

Part 1 | Part 2 | Part 3 | Part 4 | Part 5 | Part 6 | Part 7

Photo by NESA by Makers on Unsplash
