Feb 25 2020

Table of Contents

Defining decoupled Drupal
--The pros and cons remain largely unchanged
--A competitive advantage for commercial Drupal
Aside: Our stories in decoupled Drupal
GraphQL as a game-changer
--GraphQL queries at scale
--What about JSON:API vs. GraphQL?
--GraphQL v4 and custom schemas
Conclusion

Over the last five years, decoupled Drupal has grown from a fringe topic among front-end enthusiasts in the Drupal community to something of a phenomenon when it comes to coverage in blog posts, tutorials, conference sessions, and marketing collateral. There is now even a well-received book by this author and a yearly conference dedicated to the topic. For many Drupal developers working today, not a day goes by without some mention of decoupled architectures that pair Drupal with other technologies. While Drupal’s robust capabilities for integration are nothing new, there have been comparatively few retrospectives on how far we’ve come on the decoupled Drupal journey.

Recently, your correspondent (Preston So, Editor in Chief at Tag1 and author of Decoupled Drupal in Practice) sat down for a no-holds-barred discussion about decoupled Drupal on Tag1 Team Talks with three other early insiders in the decoupled Drupal mindspace: Sebastian Siemssen (Senior Architect and Lead React Developer at Tag1 and maintainer of the GraphQL module), Fabian Franz (Senior Technical Architect and Performance Lead at Tag1), and Michael Meyers (Managing Director at Tag1).

During the conversation, we spoke about the unexpected ways in which decoupled Drupal has evolved and where it could go in the near and far future. In this two-part blog series, we look back and peer forward into decoupled Drupal’s trailblazing and disruptive trajectory in the Drupal ecosystem, starting with Sebastian, who created Drupal’s GraphQL implementation, and Michael, who has witnessed firsthand the paradigm shift in Drupal’s business landscape thanks to the benefits decoupled Drupal confers.

Defining decoupled Drupal

As it turns out, defining decoupled Drupal can be tricky, given the diverse range of architectural approaches available (though these are covered comprehensively in Preston’s book). In its simplest form, decoupled Drupal centers on employing Drupal as a provider of data for consumption in other applications. In short, the “decoupling” of Drupal occurs when developers opt to consume Drupal data by issuing requests to APIs that emit JSON (or XML) rather than rendering it through Drupal’s native presentation layer.
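
To make this concrete, here is a minimal sketch of consuming Drupal as a data provider over HTTP. It assumes a hypothetical Drupal 8 site at example.com with the core JSON:API module enabled; the endpoint path, bundle name, and Guzzle dependency are illustrative and will vary from project to project.

<?php

use GuzzleHttp\Client;

// A minimal sketch, assuming a hypothetical Drupal 8 site at
// https://example.com with the core JSON:API module enabled.
require 'vendor/autoload.php';

$client = new Client(['base_uri' => 'https://example.com']);

// Ask for article nodes as JSON rather than rendered HTML.
$response = $client->get('/jsonapi/node/article', [
  'headers' => ['Accept' => 'application/vnd.api+json'],
]);

$payload = json_decode((string) $response->getBody(), TRUE);

// Each resource object carries its fields under "attributes".
foreach ($payload['data'] as $article) {
  echo $article['attributes']['title'] . PHP_EOL;
}

A fully decoupled front end built in React or Vue would issue the same kind of request from the browser; the point is simply that Drupal’s data is consumed as JSON rather than as rendered pages.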

As Fabian notes in the webinar, multiple “flavors” of decoupled Drupal have emerged. Fully decoupled Drupal implementations aim to replace the entire presentation layer of Drupal with a distinct application. Progressively decoupled Drupal, an idea first promulgated by Drupal project lead Dries Buytaert in a 2015 blog post, recommends leveraging API calls to feed data to components within the existing Drupal front end rather than replacing the presentation layer entirely.

The pros and cons remain largely unchanged

Decoupled Drupal presents both advantages and disadvantages that are worth considering for any implementation, and these are also covered in Decoupled Drupal in Practice. Fully decoupled Drupal, for instance, means jettisoning a great deal of what makes Drupal such an effective web framework by reducing its feature set to little more than the APIs for retrieving data and an administrative interface for content management.

Though decoupling Drupal inevitably means losing certain functionality, such as contextual links, in-place editing, and layout management, it does confer the opportunity to create something entirely custom atop the data model that Drupal provides, as Sebastian states in our Tag1 Team Talks episode. Nonetheless, while decoupled Drupal does introduce unprecedented flexibility, it also goes beyond a simple separation of the presentation layer from the data layer by rendering Drupal more replaceable. This notion of replaceability reveals an undercurrent of anxiety surrounding decoupled Drupal, and it presents risks for Drupal’s position in the CMS market.

A competitive advantage for commercial Drupal

Nonetheless, from the commercial standpoint and the customer perspective, there is no question that decoupled Drupal is a boon for ambitious digital experiences. For instance, during our conversation, Michael argues that decoupled Drupal actually represents a competitive advantage for Drupal in the commercial space rather than an erosion of its value. The narrative of integration in which Drupal has always trafficked has long been influential in swaying skeptical stakeholders. In this view, Drupal’s core REST API, JSON:API, and GraphQL only strengthen that flexibility to integrate.

As a matter of fact, Tag1 Consulting’s yearslong work with Symantec supports this notion. Together with Tag1, Symantec embarked on a migration from version to version of Drupal in a nontraditional way that was only made possible thanks to decoupled Drupal technologies. By providing multiple front ends across Drupal versions, Symantec and Tag1 succeeded in both accelerating and effectively managing the migration to Drupal 8. Michael notes that from a client standpoint at Tag1, decoupled Drupal has been a valuable and sought-after asset; if anything, it has only increased evaluators’ interest in the platform.

Aside: Our stories in decoupled Drupal

Sebastian, Fabian, and I also reminisced about our respective stories in the decoupled Drupal landscape and how it has impacted the trajectory of our work even today. Arguably among the first in the Drupal community to leverage a decoupled architecture, Fabian admits building a decoupled site that consumed unstyled HTML snippets as far back as 2009, with other data coming from multiple sources. We can certainly forgive the fact that he opted to use Flash for much of the front end in that build.

Early in 2015, Sebastian found that Facebook’s release of the GraphQL specification made everything “click” for him, as it addressed one of the crucial gaps in Drupal’s APIs. That same year, Sebastian began working on GraphQL for PHP, basing many of his ideas on the just recently open-sourced GraphQL JavaScript library. Thanks to Sebastian’s tireless work, the GraphQL module for Drupal first appeared as a codebase on Drupal.org in March 2015 and has since skyrocketed in interest and popularity.

My own journey is much less glamorous, as I assisted Drupal project lead Dries Buytaert with his initial thought leadership on the subject of decoupled Drupal back in 2015. At the time there was considerable hand-wringing about what disruptive advances in the front-end development world could mean for Drupal. Over the course of time, my perspectives have evolved considerably as well, and I believe there are many excellent use cases for monolithic implementations, something I stress in my book about decoupled Drupal. Today, I help run the biggest and oldest conference on decoupled and headless CMS architectures, Decoupled Days, which is now entering its fourth year.

GraphQL as a game-changer

One of the key questions for decoupled Drupal practitioners has been how to leverage the best that Drupal has to offer while also accessing some of the best ideas emerging from the API and front-end development spaces. For instance, among Drupal’s most notable selling points is the fact that not only does Drupal benefit from an excellent administrative interface; it also offers a robust customizable data model that can be crafted and fine-tuned. Entity API and Field API, for instance, have long been some of the most vaunted components of the back-end Drupal developer experience.

Tag1 recently saw this firsthand on a project that employs Laravel, a more lightweight PHP framework that is far less opinionated than Drupal. The Laravel project required light layers of entities and fields, and Fabian remarks in our recent webinar that these are particularly straightforward to implement and validate in Drupal. GraphQL adds another fascinating dimension to this by allowing Drupal to handle the heavy lifting of important features like field validation while permitting client-tailored queries.

GraphQL queries at scale

Sebastian describes GraphQL as a reversal of the traditional relationship between the server and client. Rather than the client being forced to adhere to what the server provides, the server declares the data possibilities that it is capable of fulfilling, and based on these catalogued possibilities, the client defines exactly what it needs on a per-request basis. The client sends the query to the server in a particular format, which Sebastian characterizes as much like the “JSON you want returned from the API but only the keys.” Thereafter, the GraphQL API inserts the values and returns a response with precisely the same shape.
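
As a rough illustration of that symmetry, the hypothetical snippet below sends a GraphQL query to a Drupal site assumed to expose a /graphql endpoint via the GraphQL module; the field names are invented for the example and depend entirely on the site’s schema.

<?php

use GuzzleHttp\Client;

require 'vendor/autoload.php';

// The query reads like the JSON you want back, but with only the keys.
$query = <<<'GQL'
{
  article(id: 42) {
    title
    author {
      name
    }
  }
}
GQL;

$client = new Client(['base_uri' => 'https://example.com']);
$response = $client->post('/graphql', ['json' => ['query' => $query]]);

// The server fills in the values and responds in exactly the same shape,
// e.g. {"data": {"article": {"title": "...", "author": {"name": "..."}}}}.
print $response->getBody();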

This brings us to one of the most fascinating aspects of GraphQL in the context of Drupal: the CMS often utilizes deeply nested relational data structures that are uniquely well suited to being modeled as a graph, which is the very core of what GraphQL does. One of the best aspects of GraphQL, moreover, is its support for schema and query validation. Because GraphQL schemas are predictable, the complexity of incoming queries can be checked and analyzed ahead of time, which helps prevent distributed denial-of-service (DDoS) attacks that attempt to overload the server with deeply nested queries.
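
As a deliberately simplified sketch of that idea, the hypothetical check below measures nesting depth and rejects a query before execution. Real implementations, such as the validation rules that ship with the graphql-php library, analyze the parsed query document rather than counting raw braces.

<?php

// Hypothetical, simplified depth check; production code should analyze the
// parsed query document instead of counting braces in the raw string.
function queryDepth(string $query): int {
  $depth = 0;
  $maxDepth = 0;
  foreach (str_split($query) as $char) {
    if ($char === '{') {
      $maxDepth = max($maxDepth, ++$depth);
    }
    elseif ($char === '}') {
      $depth--;
    }
  }
  return $maxDepth;
}

$incomingQuery = '{ article(id: 42) { comments { author { friends { name } } } } }';

if (queryDepth($incomingQuery) > 10) {
  // Reject the query before it reaches the expensive execution phase.
  exit('Query is too deeply nested.');
}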

What about JSON:API vs. GraphQL?

While the story of GraphQL has been long-running in the Drupal ecosystem, it’s important to note the other big player in Drupal’s web services: JSON:API, which was recently introduced into Drupal core. One of the biggest reasons why JSON:API has gained prominence is its relative stability and extensive documentation. Sebastian argues that developers coming from the React ecosystem are more likely to be familiar with GraphQL already, which also helps elevate its status within the decoupled Drupal ecosystem.

GraphQL v4 and custom schemas

One of the most anticipated releases of the GraphQL module is GraphQL v4 for Drupal, which brings several significant changes to the ever-evolving project. In the latest version, the GraphQL schema is fully under the control of Drupal developers, a substantial change from previous releases. After all, one of the best selling points for using GraphQL in the first place is schema customizability.

According to Sebastian, this means that you can decouple Drupal at the API and contract level rather than foisting Drupal’s data model and internals on the resulting GraphQL API, which can confuse developers of consumer applications such as React implementations.

Conclusion

Perhaps the most intriguing development in the long-running saga of decoupled Drupal is the diversification and proliferation of a wide range of approaches, like GraphQL, that improve the developer experience in Drupal significantly. Besides its obvious benefits of greater flexibility on the side of developers, moreover, Drupal users and agency customers are also discovering the advantages of decoupled Drupal for a variety of use cases.

As our recent conversation, a mix of nostalgia and forward thinking, made clear, decoupled Drupal is here to stay. Whereas in years past this statement may have caused considerable anxiety in the Drupal community, today it is emblematic of the ongoing explosion of options and capabilities that decoupled Drupal engenders for Drupal developers. In the second installment of this two-part series, Fabian, Sebastian, and I discuss some of the anxieties and worries we share about decoupled Drupal, before attempting to predict what could come next for this fast-evolving paradigm.

Special thanks to Fabian Franz, Michael Meyers, and Sebastian Siemssen for their feedback during the writing process.

Photo by Jesse Bowser on Unsplash

Feb 19 2020

Content collaboration has long been table stakes for content management systems like WordPress and Drupal, but what about real-time peer-to-peer collaboration between editors who need direct interaction to work on their content? The WordPress Gutenberg team has been working with Tag1 Consulting and the community of Yjs, an open-source real-time collaboration framework, to enable collaborative editing on the Gutenberg editor. Currently an experimental feature that is available in a Gutenberg pull request, shared editing in Gutenberg portends an exciting future for editing use cases beyond just textual content.

Yjs is both network-agnostic and editor-agnostic, which means it can integrate with a variety of editors like ProseMirror, CodeMirror, Quill, and others. This represents substantial flexibility when it comes to the goals of WordPress to support collaborative editing and the potential for other CMSs like Drupal to begin exploring the prospect of shared editing out of the box. Though challenges remain before truly bona fide shared editing is available off the shelf in WordPress and Drupal installations, Gutenberg is brimming with possibility as the collaboration with Tag1 continues to bear significant fruit.

In this Tag1 Team Talks episode that undertakes a technical deep dive into how the WordPress community and Tag1 enabled collaborative editing in the Gutenberg editor, join Kevin Jahns (creator of Yjs and Real-Time Collaboration Systems Lead at Tag1), Michael Meyers (Managing Director at Tag1), and your host Preston So (Editor in Chief at Tag1 and author of Decoupled Drupal in Practice) for an exploration of how CMSs across the landscape can learn from Gutenberg's work to empower editors to collaborate in real time in one of the most exciting new editorial experiences in the CMS world.

[embedded content]

Feb 18 2020

In the first part of our two-part blog series on Drush 10, we covered the fascinating history of Drush and how it came to be one of the most successful projects in the Drupal ecosystem. After all, many of us know the most common Drush commands by heart, and it’s difficult to imagine a world without Drush when it comes to Drupal’s developer experience. Coming on the heels of Drupal 8.8, Drush 10 introduces a variety of new questions about the future of Drush, even as it extends Drush’s robustness many years into the future.

Your correspondent (Preston So, Editor in Chief at Tag1 and author of Decoupled Drupal in Practice) had the unique opportunity to discuss Drush’s past, present, and future with Drush maintainer Moshe Weitzman (Senior Technical Architect at Tag1), Fabian Franz (Senior Technical Architect and Performance Lead at Tag1), and Michael Meyers (Managing Director at Tag1), as part of the Tag1 Team Talks series at Tag1 Consulting, our biweekly webinar and podcast series. In the conclusion to this two-part blog series, we dig into what’s new in Drush 10, what you should consider if you’re choosing between Drush and Drupal Console, and what the future might hold in store for Drupal’s first CLI.

What’s new in Drush 10

Drush 10 is the version of Drush optimized for use with Drupal 8.8. It embraces certain new configuration features available as part of the upcoming minor release of Drupal, including the Exclude and Transform APIs as well as config-split in core. Nevertheless, the maintainers emphasize that the focus of Drush 10 was never on new additive features; instead they endeavored to remove a decade’s worth of code from Drush and prepare it for many years to come.

To illustrate this fact, consider that Drush 9 was a combination of both old APIs from prior versions of Drush and all-new APIs that Drush’s maintainers implemented to modernize Drush’s commands. Therefore, while Drush 9 commands generally make use of the newly available APIs, if you call a site with Drush 9 installed from Drush 8, it will traverse all of the old APIs. This was a deliberate decision by Drush’s maintainers in order to allow users a year to upgrade their commands and to continue to interoperate with older versions. As a result of removing these older approaches, Drush 10 is extremely lean and extremely clean, and it interoperates with sites running Drush 9 but not those on earlier versions.

How should developers in the Drupal community adopt Drush 10? Moshe recommends that users upgrade at their earliest convenience through Composer, as Drush’s maintainers will be able to offer the best support to those on Drush 10.

Why Drush over Drupal Console?

One key question that surfaces frequently concerning Drupal’s command-line ecosystem is the distinction between Drush and a similar project, Drupal Console, and when to use one over the other. Though Drush and Drupal Console accomplish a similar set of tasks and share similar architectures because they both depend on Symfony Console, there are still quite a few salient differences that many developers will wish to take into account as they select a command-line interface to use with Drupal.

Commands, for instance, are one area where Drush and Drupal Console diverge, and command authors will find that commands are written quite differently. Drush leverages an annotated command layer on top of Symfony Console, where developers describe new commands with annotations. Drupal Console instead utilizes Symfony Console’s approach directly, with a few methods attached to each command. However, this is a minor consideration: there is little to no difference in the resulting CLI functionality, and it is largely a stylistic preference.
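
For illustration, here is a sketch of what an annotated Drush command file might look like, assuming a hypothetical custom module named example and following Drush 9/10 conventions (the class would also need to be registered in the module’s drush.services.yml):

<?php

namespace Drupal\example\Commands;

use Drush\Commands\DrushCommands;

/**
 * Drush commands for the hypothetical "example" module.
 */
class ExampleCommands extends DrushCommands {

  /**
   * Greets the given name.
   *
   * @command example:hello
   * @param string $name The name to greet.
   * @option uppercase Shout the greeting instead of saying it.
   * @aliases ex-hello
   * @usage drush example:hello world --uppercase
   */
  public function hello($name, array $options = ['uppercase' => FALSE]) {
    $greeting = "Hello, $name!";
    if ($options['uppercase']) {
      $greeting = strtoupper($greeting);
    }
    $this->output()->writeln($greeting);
  }

}

A roughly equivalent Drupal Console command would instead extend Symfony Console’s command class and declare the same name, arguments, and options in configure() and execute() methods, which is the stylistic difference described above.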

Drush and Drupal Console also differ significantly in their approaches to testing. Whereas Drupal Console performs unit testing, Drush prefers functional testing, with a full copy of both Drupal and Drush in their test suite. All Drush CLI commands are run on a real, fully functional Drupal site, whereas Drupal Console opts to leverage more mocking. There are admittedly many advantages to both approaches. But perhaps the most important distinction is of a less technical variety: Drupal Console has seen a bit less contribution activity as of late than Drush, which is an important factor to consider when choosing a CLI.

The future of Drush: Drush in core?

Though Moshe and fellow maintainer Greg Anderson have committed themselves to maintaining and supporting Drush in the future, there are doubtless many questions about Drush’s roadmap that will influence decision-making around Drupal.

Drush’s inclusion in core has long been a key talking point with a variety of potential resolutions. Drupal core already includes two CLI commands unrelated to Drush, namely site-install and quick-start, which are seldom used because they cover only a limited set of key use cases. For instance, site-install only installs Drupal successfully on SQLite databases and lacks consideration for configuration. Drush’s maintainers are keen on considering a version of Drush in core, and an active discussion is ongoing.

Moreover, now that the starter template for Drupal projects has been deprecated in favor of core-recommended, there is an opportunity for Drush 10 to serve as a key dependency in those starter templates, initially as a suggested dependency and eventually as a required one. Some of the key commands that a hypothetical Drush in core would encompass include enabling and uninstalling modules as well as clearing caches and logging in as a user. In the not-too-distant future, a Drupal user could start a Drupal project and immediately have Drush and all its commands available from the very outset.

Conclusion

Drush 10 is an inflection point not only in the history of Drupal but in how Drupal developers interact with Drupal on a daily basis. Thanks to its leaner, faster state, Drush 10 marks a new era for remote interactions with Drupal. Because Drush 10 has tracked closely to the Drupal 8 development cycle, many of the core changes present in Drupal 8.8 are reflected in Drush 10, and the ongoing discussion surrounding the potential of Drush in core will doubtlessly continue apace.

For many of us in the Drupal community, Drush is more than a cherished tool; it is one of the primary entry points into Drupal development. With the help of your contributions, Drush can reach even greater heights. Moshe recommends that new contributors get started by improving Drush’s documentation and creating content about Drush, whether blog posts or step-by-step tutorials that make learners’ experiences better. The Drush maintainers are always happy to link to compelling content about Drush, to address bugs and issues in Drush’s issue queue, and to offer co-maintainership to prolific contributors.

While this was an exhaustive look at Drush 10, it by no means includes all of the insights we gathered from Moshe, and we at Tag1 Consulting encourage you to check out our recent Tag1 Team Talk about Drush 10 to learn even more about Drush’s past, present, and future.

Special thanks to Fabian Franz, Michael Meyers, and Moshe Weitzman for their feedback during the writing process.

Photo by Bill Oxford on Unsplash.com

Feb 13 2020

Table of contents

What is Tag1 Quo?
How does Tag1 Quo work?
What makes Tag1 Quo unique
--Immediate notice of vulnerabilities
--Backports of LTS patches
--Automated QA testing for Drupal 7 LTS
--Customer-driven product development
Conclusion

One of the challenges of securing any Drupal site is the often wide range of modules to track, security advisories to follow, and updates to implement. When it comes to Drupal security, particularly older versions of Drupal such as Drupal 6 and Drupal 7, even a slight delay in patching security vulnerabilities can jeopardize mission-critical sites. Now that Drupal 7 and Drupal 8 are fast approaching their end of life (EOL) in November 2021 (Drupal 6 reached end of life on February 24, 2016), the time is now to prepare your Drupal sites for a secure future, regardless of what version you are using.

Fortunately, Tag1 Consulting, the leading Drupal performance and security consultancy, is here for you. We’ve just redesigned Tag1 Quo, the enterprise security monitoring service trusted by large Drupal users around the world, from the ground up, with an all-new interface and support for multiple Drupal versions from Drupal 6 to Drupal 8. Paired with the Tag1 Quo module, available for download on Drupal.org, the service lets you ensure the security of your sites with full peace of mind. In this blog post, we’ll cover some of the core features of Tag1 Quo and discuss why it is essential for your sites’ security.

What is Tag1 Quo?

Tag1 Quo is a software-as-a-service (SaaS) security monitoring and alerting service for Drupal 6, Drupal 7, and Drupal 8. In addition, it includes long-term support (LTS) for Drupal 6 and is slated to commence backporting security patches for both Drupal 7 and Drupal 8 when both major versions no longer have community-supported backports. The centerpiece of Tag1 Quo integration with Drupal is the Tag1 Quo module, which is installed on your servers and communicates securely with our servers.

In addition, for a fee, we can help you run a self-hosted version of Tag1 Quo for sites hosted on-premises. This does require setup fees and entails higher per-site licensing fees, so we encourage you to reach out to us directly if you’re interested in pursuing this option.

How does Tag1 Quo work?

When a new module update is released on Drupal.org, or when a security advisory is announced that directly impacts your Drupal codebases, the Tag1 Quo system alerts you immediately and provides all of the necessary updates required to mitigate the vulnerability, with a direct link to the code you need to install to address the issue. Not only are these alerts sent over e-mail by default; they can also flow directly into your internal project workflows, including issue tracking and ticketing systems.

Tag1 Quo doesn’t stop there. As part of our long-term support (LTS) offering, when security releases and critical updates emerge, or when new security vulnerabilities are announced for community-supported Drupal versions, Tag1 audits these and determines whether the identified vulnerability also impacts end-of-life (EOL) versions of Drupal such as Drupal 6 and, in November 2021, Drupal 7. If those EOL versions are also susceptible to the vulnerabilities, we backport and test all patches to secure the EOL versions as well and distribute them to you through the Tag1 alert system.

Moreover, when a new security vulnerability is discovered in an EOL version of Drupal without an equivalent issue in a currently supported version, Tag1 creates a patch to rectify the problem and collaborates with the Drupal Security Team (several of whom are part of the Tag1 team) to determine whether the vulnerability also applies to currently supported versions of Drupal so that they can be patched too. In short, no matter where the vulnerability occurs across all of Drupal’s versions, you can rest easy with Tag1 Quo’s guarantees.

What makes Tag1 Quo unique

Tag1 Quo features a centralized dashboard with an at-a-glance view of all of your Drupal sites and their current status, regardless of where each one is hosted. After all, most enterprise organizations juggle perhaps dozens of websites that need to remain secure. Such a perspective at an organizational level is essential to maintain the security of all of your websites. But the Tag1 Quo dashboard is only one among a range of capabilities unique to the service.

Immediate notice of vulnerabilities

Although several members of the Tag1 team are also part of the Drupal Security Team, and are aware of vulnerabilities as soon as they are reported, the Drupal Security Team’s policy is to collaborate privately to address the issue before revealing its nature publicly. This coordinated disclosure ensures that public advisories and patches are released before nefarious actors are able to successfully attack Drupal sites. This is for your safety and for the viability of released patches.

Thanks to our deep knowledge of both the projects used by our clients' websites and the related security advisories, Tag1 has the distinction of being among the very first to notify Tag1 Quo customers as soon as the official announcement is released. Immediately afterwards, Tag1 Quo prepares you to apply the updates as quickly as possible to ensure your web properties’ continued safety.

Backports of LTS patches

If a fix for a vulnerability is reported for currently supported versions of Drupal but also applies to EOL versions, the patch must be backported for all Drupal sites to benefit. Unfortunately, this process can be complex and require considerable planning and analysis of the problem across multiple versions—and it can sometimes only occur after the patch targeting supported versions has been architected or completed. This means it may take more time to develop patches for LTS versions of Drupal.

Luckily, we have a head start in developing LTS patches thanks to our advance notice of vulnerabilities in currently supported versions of Drupal. Although we cannot guarantee that LTS updates will always be released simultaneously with those targeting supported versions, Tag1 has an admirable track record of releasing critical LTS updates at the same time as, or within hours of, the issuance of patches for supported Drupal versions.

Automated QA testing for Drupal 7 LTS

Throughout Drupal’s history, the community encouraged contributors to write tests alongside code as a best practice, but this was rarely the case until it became an official requirement for all core contributions beginning with the Drupal 7 development cycle in 2007. Tag1 team members were instrumental in tests becoming a core code requirement, and we created the first automated quality assurance (QA) testing systems distributed with Drupal. In fact, Tag1 maintains the current Drupal CI (continuous integration) systems, which perform more than a decade’s worth of testing, run concurrently, within a single calendar year.

Because the Drupal Association has ended support for Drupal 7 tests and decommissioned those capabilities on Drupal.org, Tag1 is offering the Tag1 Quo Automated QA Testing platform as a part of Tag1 Quo for Drupal 7 LTS. The service will run all tests for Drupal 7 core and any contributed module tests that are available. Where feasible and appropriate, Tag1 will also create new tests for Drupal 7’s LTS releases. Therefore, when you are notified of LTS updates, you can rest assured that they have been robustly tested against core, focus your attention on integration testing with your custom code instead, and roll out updates with the highest possible confidence.

Customer-driven product development

Last but certainly not least, Tag1 Quo is focused on your requirements. We encourage our customers to request development in order for us to make Tag1 Quo the optimal solution for your organization. By working closely with you to determine the scope of your feature requests, we can provide estimates for the work and an implementation timeline. While such custom development is outside the scope of Tag1 Quo’s licensing fees, we allot unused Tag1 Quo consulting and support hours to minor modifications on a monthly basis.

Examples of features we can provide for custom code in your codebases include ensuring that your internal repositories rely on the latest versions of dependencies and providing insights into your custom code through site status views on your Tag1 Quo dashboard. We can even do things like add custom alerts to notify specific teams and users responsible for these sites and customize the alerts to flow into support queues or other ticketing systems. Please get in touch with us for more information about these services.

Conclusion

The new and improved Tag1 Quo promises you peace of mind and renewed focus for your organization on building business value and adding new features. Gone are the days of worrying about security vulnerabilities and anxiety-inducing weekends spent applying updates. Thanks to Tag1 Quo, regardless of whether your site is on Drupal 6, Drupal 7, or Drupal 8, you can rest assured that your sites will remain secure and monitored for future potential vulnerabilities. With a redesigned interface and feature improvements, there is perhaps no other Drupal security monitoring service better tuned to your needs.

Special thanks to Jeremy Andrews and Michael Meyers for their feedback during the writing process.

Photo by Ian Schneider on Unsplash

Feb 12 2020

An effective administrative interface is table stakes for any content management system that wishes to make a mark with users. Claro is a new administration theme now available in Drupal 8 core thanks to the Admin UI Modernization initiative. Intended to serve as a logical next step for Drupal's administration interface and the Seven theme, Claro was developed with a keen eye for modern design patterns, accessibility best practices, and careful analysis of usability studies and surveys conducted in the Drupal community.

Claro demonstrates several ideas that illustrate not only the successes of open-source innovation but also the limitations of overly ambitious proposals. By descoping some of the more unrealistic proposals early on and narrowing the initiative’s focus to incremental improvements and facilitating the work of later initiatives, Claro is an exemplar of sustainable open-source development.

In this closer look at how Claro was made possible and what its future holds for Drupal administration, join Cristina Chumillas (Claro maintainer and Front-End Developer at Lullabot), Fabian Franz (Senior Technical Architect and Performance Lead at Tag1), Michael Meyers (Managing Director at Tag1), and Preston So (Editor in Chief at Tag1 and author of Decoupled Drupal in Practice) for a Tag1 Team Talks episode about the newest addition to Drupal's fast-evolving front end.

[embedded content]


Feb 11 2020

If you’ve touched a Drupal site at any point in the last ten years, it’s very likely you came into contact with Drush (a portmanteau of “Drupal shell”), the command-line interface (CLI) used by countless developers to work with Drupal without touching the administrative interface. Drush has a long and storied trajectory in the Drupal community. Though many other Drupal-associated projects have since been forgotten and relegated to the annals of Drupal history, Drush remains well-loved and leveraged by thousands of Drupal professionals. In fact, the newest and most powerful version of Drush, Drush 10, is being released jointly with Drupal 8.8.0.

As part of our ongoing Tag1 Team Talks at Tag1 Consulting, a fortnightly webinar and podcast series, yours truly (Preston So, Editor in Chief at Tag1 and author of Decoupled Drupal in Practice) had the opportunity to sit down with Drush maintainer Moshe Weitzman (Senior Technical Architect at Tag1) as well as Tag1 Team Talks mainstays Fabian Franz (Senior Technical Architect and Performance Lead at Tag1) and Michael Meyers (Managing Director at Tag1) for a wide-ranging and insightful conversation about how far Drush has come and where it will go in the future. In this two-part blog post series, we delve into some of the highlights from that chat and discuss what you need to know and how best to prepare for the best version of Drush yet.

What is Drush?

The simplest way to describe Drush, beyond its technical definition as a command-line interface for Drupal, is as an accelerator for Drupal development. Drush speeds up many development functions that are required in order to take care of Drupal websites. For instance, with Drush, developers can enable and uninstall modules, install a Drupal website, block or delete a user, change passwords for existing users, and update Drupal’s site search index, among many other tasks — all without having to enter Drupal’s administrative interface.

Because Drush employs Drupal’s APIs to execute actions like creating new users or disabling themes, it performs these operations far more quickly than going through the full Drupal stack, since there is no need to traverse Drupal’s render pipeline and theme layer. In fact, Drush is also among the most compelling real-world examples of headless Drupal (a topic on which this author has written a book), because the purest definition of headless software is an application that lacks a graphical user interface (GUI). Drush fits that bill.

The origins and history of Drush

Though many of us in the Drupal community have used Drush since our earliest days in the Drupal ecosystem and building Drupal sites, it’s likely that few of us intimately know the history of Drush and how it came to be in the first place. For a piece of our development workflows that many of us can’t imagine living without, it is remarkable how little many of us truly understand about Drush’s humble origins.

Drush has been part of the Drupal fabric now for eleven years, and during our most recent installment of Tag1 Team Talks, we asked Moshe for a Drush history lesson.

Drush’s origins and initial years

Though Moshe has maintained Drush for over a decade to “scratch his own itch,” Drush was created by Arto Bendiken, a Drupal contributor from early versions of the CMS, and had its tenth anniversary roughly a year ago. Originally, Drush was a module available on Drupal.org, just like all of the modules we install and uninstall on a regular basis. Users of the inaugural version of Drush would install the module on their site to use Drush’s features at the time.

The Drupal community responded with a hugely favorable reception and granted Drush the popularity that it still sees today. Nonetheless, as Drush expanded its user base, its maintainers began to realize that they were unable to deliver the long list of additional features that Drupal users might want, including starting a web server to quickly spin up a Drupal site and one of the most notable features of Drush today: installing a Drupal site from the command line. Because Drush was architected as a Drupal module, this remained an elusive objective.

Drush 2: Interacting with a remote Drupal site

Drush 2 was the first version of Drush to realize the idea of interacting with a remote Drupal website, thanks to the contributions of Adrian Rousseau, another early developer working on Drush. Today, one of the most visible features of Drush is the ability to define site aliases to target different Drupal sites as well as different environments.

Rousseau also implemented back-end functionality that allowed users to rsync the /files directory or sql-sync the database from one Drupal installation to another. With Drush 2, users could also run the drush uli command to log in as the root user (user 1 in Drupal) on a remote Drupal site. These new features engendered a significant boost in available functionality in Drush, with a substantial back-end API that was robust and worked gracefully over SSH. It wasn’t until Drush 9 that much of this code was rewritten.

Drush 3: From module to separate project

During the development of Drush 3, Drush’s maintainers made the decision to convert Drush from a Drupal module into a standalone project external to Drupal in order to enable use cases where no Drupal site would be available. It was a fundamental shift in how Drush interacted with the Drupal ecosystem from there onwards, and key maintainers such as Greg Anderson, who still maintains Drush today seven versions later, were instrumental in implementing the new approach. By moving off of Drupal.org, Drush was able to offer site installation through the command line as well as a Drupal quickstart and a slew of other useful commands.

Drush 5: Output formatters

Another significant step in the history of Drush came with Drush 5, in which maintainer Greg Anderson implemented output formatters, which allow users to rewrite certain responses from Drush into other formats. For instance, the drush pm-list command returns a list of installed modules on a Drupal site, including the category in which they fit, formatted as a human-readable table.

Thanks to output formatters, however, the same command could be extended to generate the same table in JSON or YAML formats, which for the first time opened the door to executable scripts using Drush. During the DevOps revolution that overturned developer workflows soon afterwards, output formatters turned out to be a prescient decision, as they are particularly useful for continuous integration (CI) and wiring successive scripts together.

Drush 8: Early configuration support

Drush 8, the version of Drush released in preparation for use with Drupal 8 sites, was also a distinctly future-ready release due to its strong command-line support for the new configuration subsystem in Drupal 8. When Drupal 8 was released, core maintainer Alex Pott contributed key configuration commands such as config-export, config-import, config-get, and config-set (with Moshe’s config-pull coming later), all of which were key commands for interacting with Drupal’s configuration.

Due to Drush 8’s early support for configuration in Drupal 8, Drush has been invaluable in realizing the potential of the configuration subsystem and is commonly utilized by innumerable developers to ensure shared configuration across Drupal environments. If you have pushed a Drupal 8 site from a development environment to a production environment, it is quite likely that there are Drush commands in the mix handling configuration synchronicity.

Drush 9: A complete overhaul

About a year ago, Drush’s indefatigable maintainers opted to rewrite Drush from the ground up for the first time. Drush had not been substantially refactored since the early days in the Drush 3 era, when it was extracted out of the module ecosystem. In order to leverage the best of the Composer ecosystem, Drush’s maintainers rewrote it in a modular way with many Composer packages for users to leverage (under the consolidation organization on GitHub).

This also meant that Drush itself became smaller in size because it modularized site-to-site communication in a tighter way. Declaring commands in Drush also underwent a significant simplification from the perspective of developer experience. Whereas Drush commands had previously been written as plain PHP functions, as was the case in Drush 8, developers could now write each command as a PHP method, with Doxygen-style annotation lines above the method housing the name, parameters, and other details of the command. The same release also introduced YAML as the default format for configuration and site aliases in Drush, as well as Symfony Console as the runner of choice for commands.

Drush 9 introduced a diverse range of new commands, including config-split, which allows for different sets of modules to be installed and different sets of configuration to be in use on distinct Drupal environments (though as we will see shortly, it may no longer be necessary). Other conveniences that entered Drush included running commands from Drupal’s project root instead of the document root as well as the drush generate command, which allows developers to quickly scaffold plugins, services, modules, and other common directory structures required for modern Drupal sites. This latter scaffolding feature was borrowed from Drupal Console, which was the first to bring that feature to Drupal 8. Drush’s version leverages Drupal’s Code Generator to perform the scaffolding itself.

Conclusion

As you can see, Drush has had an extensive and winding history that portends an incredible future for the once-humble command-line interface. From a pet project and a personal itch scratcher to one of the best-recognized and most commonly leveraged projects in the Drupal ecosystem, Drush has a unique place in the pantheon of Drupal history. In this blog post, we covered Drush’s formative years and its origins, a story seldom told among open-source projects.

In the second part of this two-part blog post series, we’ll dive straight into Drush 10, inspecting what all the excitement is about when it comes to the most powerful and feature-rich version of Drush yet. In the process, we’ll identify some of the key differences between Drush and Drupal Console, the future of Drush and its roadmap, and whether Drush has a future in Drupal core (spoiler: maybe!). In the meantime, don’t forget to check out our Tag1 Team Talk on Drush 10 and the story behind Drupal’s very own CLI.

Special thanks to Fabian Franz, Michael Meyers, and Moshe Weitzman for their feedback during the writing process.

Photo by Jukan Tateisi on Unsplash

Feb 05 2020

What happens when you have a connection that isn't working, but you have a mission-critical document that you need to collaborate on with others around the world? The problem of peer-to-peer collaboration in an offline environment is becoming an increasingly pressing issue for editorial organizations and enterprises. As we continue to work on documents together on flights, trains, and buses, offline-first shared editing is now a base-level requirement rather than a pipe dream.

Yjs, an open-source framework for real-time collaboration, integrates gracefully with IndexedDB, the local offline-first database available in browsers, to help developers easily implement offline shared editing for their organization's needs. Paired in turn with other technologies like WebRTC, a peer-to-peer communication protocol, and Yjs connectors, a graceful architecture is possible that not only enables offline shared editing for a variety of use cases beyond textual content but also makes the developer experience as straightforward as possible.

In this technical and topical deep dive into how Yjs and IndexedDB make offline shared editing possible, join Kevin Jahns (creator of Yjs and Real-Time Collaboration Systems Lead at Tag1), Fabian Franz (Senior Technical Architect and Performance Lead at Tag1), Michael Meyers (Managing Director at Tag1), and your host Preston So (Editor in Chief at Tag1 and author of Decoupled Drupal in Practice) for a Tag1 Team Talks episode you don't want to miss about how to enable offline shared editing for web applications and even CMSs like Drupal.

[embedded content]

Jan 29 2020

Preston is a product strategist, developer advocate, speaker, and author of Decoupled Drupal in Practice (Apress, 2018).

A globally recognized voice on decoupled Drupal and subject matter expert in the decentralized web and conversational design, Preston is Editor in Chief at Tag1 Consulting and Principal Product Manager at Gatsby, where he works on improving the Gatsby developer experience and driving product development.

Having spoken at over 50 conferences, Preston is a sought-after presenter with keynotes on five continents and in three languages.

Jan 28 2020

Testing has become an essential practice and toolkit for developers and development teams who seek to architect and implement successful, performant websites. Thanks to the unprecedented growth in automated testing tools and continuous integration (CI) solutions for all manner of web projects, testing is now table stakes for any implementation. That said, many developers find automated testing to be an altogether intimidating area of exploration. Fortunately, when paired with a development culture that values quality assurance (QA), automated testing lets you focus on adding business value instead of fixing issues day in and day out.

Three years ago, Yuriy Gerasimov (Senior Back-End Engineer at Tag1 Consulting) gave a talk at DrupalCon New Orleans about some of the key ideas that Drupal developers need to understand in order to implement robust testing infrastructures and to foster a testing-oriented development culture that yields unforeseen business dividends across a range of projects. In this four-part blog series, we summarize some of the most important conclusions from Yuriy’s talk. And in this third installment, we’ll take a closer look at two of the most essential parts of any testing toolkit: unit testing and functional testing.

Unit testing

Unit testing is a particularly fascinating topic, not only because many developers already know what unit testing entails, but also because it enjoys a range of readily available technologies that development teams can easily leverage. After all, unit testing is a commonly taught concept in universities. In its most reduced form, unit testing refers to verifying a unit of code’s robustness by feeding it a variety of arguments, all of which test the limits of what is possible within that unit. Drupal developers, for instance, have access to a variety of unit tests for both core and contributed modules. And the best part of unit testing is that you can test functions and classes in isolation rather than evaluating large parts of the code.

The best candidates for unit tests are functions that are responsible for calculating some result. For instance, if you have a function that receives a set of arguments and performs a series of calculations based on those arguments, you can feed it test arguments that evaluate whether the function works for a variety of possible inputs.
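
As a brief, hedged example, a PHPUnit-based unit test for such a calculation in Drupal 8 might look like the following, where \Drupal\example\PriceCalculator is a hypothetical class invented purely for illustration:

<?php

namespace Drupal\Tests\example\Unit;

use Drupal\Tests\UnitTestCase;

/**
 * Unit tests for a hypothetical PriceCalculator class.
 */
class PriceCalculatorTest extends UnitTestCase {

  /**
   * Feeds the calculator a range of inputs, including edge cases.
   */
  public function testCalculateTotal() {
    // \Drupal\example\PriceCalculator is assumed for this example.
    $calculator = new \Drupal\example\PriceCalculator();

    // A typical input: 100.00 plus 10% tax.
    $this->assertEquals(110.0, $calculator->calculateTotal(100.0, 0.10));

    // Edge cases that stretch the limits of the function.
    $this->assertEquals(0.0, $calculator->calculateTotal(0.0, 0.10));
    $this->assertEquals(100.0, $calculator->calculateTotal(100.0, 0.0));
  }

}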

Unit tests as documentation

This also reveals one of the most interesting characteristics of unit tests. Because unit tests are focused on stretching the outer limits of what functions are capable of, they are also a convenient source of robust documentation. Yuriy admits that when he downloads external libraries for the first time in order to integrate them with his existing code, he often scrutinizes the unit tests, because they indicate what the code expects as input. When documentation is challenging to read or nonexistent, unit tests can be an ideal replacement for developers seeking to comprehend the purpose of certain code, because the unit tests are where developers ensure that logic operates correctly.

As an illustration of how unit tests can yield outsized benefits when it comes to well-documented code, consider the case of Drupal’s regular expressions. In Drupal 7, the regular expressions responsible for parsing .info files, which are used across both modules and themes, are extremely lengthy owing to the myriad demands on the files. Regular expressions, after all, are easy to write but difficult to understand. Though we are privileged as Drupal developers in that Drupal breaks regular expressions into separate, assiduously commented lines, many developers in the contributed ecosystem will avoid taking this additional step. For this reason, Yuriy recommends that all developers write unit tests for regular expressions that parse critical input.

Writing testable code is difficult but useful

Unit tests help developers think very differently about how to write software that is legible and understandable. In Drupal 8, for instance, the new dependency injection container is particularly well suited to unit tests, because we can swap services in and out of it. If you work with a database as an external service, for example, you can easily mock objects and ensure that your services work correctly against those mocks.

To illustrate this, Yuriy cites the real-world example of the Services module in Drupal 7, which originally consisted of classical Drupal code but was subsequently coupled with unit tests. By adding unit tests, Yuriy was able to verify how different arguments entered certain Services functionality and to inspect how routing functioned. With unit tests, ensuring the security of functions is much easier, because they force you to make explicit all of the arguments the code needs, so that calls to globals or static variables become unnecessary. Though unit testing requires considerable effort, Drupal 8 makes the process much easier for developers.

And with the introduction of PHPUnit into Drupal testing infrastructures, it is now even easier to test your code. The biggest addition is the capability to use mock objects, which are presently the industry standard for unit testing.
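
A sketch of that capability, assuming a hypothetical service class \Drupal\example\MaintenanceNotifier that receives Drupal’s state service through its constructor, might look like this:

<?php

namespace Drupal\Tests\example\Unit;

use Drupal\Core\State\StateInterface;
use Drupal\Tests\UnitTestCase;

/**
 * Demonstrates swapping a real service for a mock object.
 */
class MaintenanceNotifierTest extends UnitTestCase {

  public function testMessageWhenMaintenanceModeIsOn() {
    // Mock the state service so no real key/value store is needed.
    $state = $this->createMock(StateInterface::class);
    $state->method('get')
      ->with('system.maintenance_mode')
      ->willReturn(TRUE);

    // The mock is injected exactly where the real service would be.
    // \Drupal\example\MaintenanceNotifier is hypothetical, for illustration.
    $notifier = new \Drupal\example\MaintenanceNotifier($state);
    $this->assertSame('The site is in maintenance mode.', $notifier->message());
  }

}

Because the dependency arrives through the constructor, no real database or key/value store ever needs to be bootstrapped for the test to run.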

Functional testing

As the name suggests, functional testing analyzes how actual users will interact with your software from a concrete rather than abstract standpoint. If you have an unlimited budget for your project, you can test every single last page on your website, but this is seldom—if ever—feasible from the perspective of project budgets. Whereas unit testing only requires perhaps ten to twenty percent of your development time, functional tests will require several days to write and noticeably more effort to implement.

Selling functional testing

In many of Yuriy’s previous projects, he was able to sell functional testing to customers by justifying the need to ensure that the software would be functional irrespective of the time of day or night. His teams had particular success selling functional testing in commerce projects, because stakeholders in commerce implementations are strongly invested in consumers successfully checking out at all times.

Often, the most challenging aspect of functional testing in commerce projects is the credit card details that customers must inevitably provide to the commerce platform. If you have a testing environment with all possible payment providers, however, customers no longer need to check to make sure that the checkout functions properly while they are at home or at the office. Your clients can simply click a single button in the continuous integration (CI) server and witness for themselves that the user flow is functional or configure notifications so that they are issued solely when that process breaks.

Maintenance costs of functional testing

Functional testing requires considerable maintenance, because it is primarily based on the document object model (DOM) of your website rather than abstract code. If you modify something on the website, you will need to revise your functional tests accordingly, including the selectors that you are targeting with functional tests. Yuriy warns that many may operate under the misconception that functional tests require a mere twelve hours of implementation, but due to the unique attributes of each project, functional tests may require several days to implement.

As such, automating the process of functional testing is of paramount importance. Yuriy suggests that running functional tests should not be made difficult for developers, and especially not for project managers who are concerned about project quality. And as with other types of tests, if functional tests are not run on a regular basis, they are of limited usefulness.

Tools for functional testing

Luckily, there are many tools available for functional testing, the most notable among them being Behat. The Behat ecosystem makes a variety of extensions available, but it is by no means the only functional testing solution available in the landscape. Other solutions exist in the software-as-a-service (SaaS) space, including Selenium, which records clicks and provides a more visual representation of functional tests. For this reason, Yuriy recommends Selenium for junior developers with less exposure to functional testing.

Nevertheless, functional testing can fall victim to the vagaries of project budgets, and prospective expense remains among the most important considerations for customers interested in functional testing. To account for this, it is essential to consider which user flows are the most critical to the success of your implementation and to provide the appropriate level of coverage for those components. It is also a good idea to discuss at length with your customer what outcomes they would like to see and to ensure that the user flows they consider most crucial enjoy adequate coverage.

Conclusion

Thanks to the dual innovations engendered by unit testing and functional testing, you can have both an abstract and concrete view into how your code is performing in terms of quality with little to no overhead. With unit testing, which ensures that each designated function works properly with a variety of inputs that stretch the limits of what it can do, you can protect yourself from potential security vulnerabilities such as a lack of input sanitization. Functional testing, meanwhile, allows you to perform automated trials of your implementation to guarantee that your users are traversing the experience you intend.

In this blog post, we explored the boundaries of what unit testing and functional testing can offer you when it comes to a modern testing infrastructure and test-oriented development culture. In the fourth and final installment of this four-part blog series, we turn our attention to two of the areas of testing experiencing some of the most innovation as of late: visual regression testing and performance testing. As demands on our implementations shift, it is increasingly more important that we understand not only when changes impact user experiences but also how they perform under realistic loads.

Special thanks to Yuriy Gerasimov and Michael Meyers for their feedback during the writing process.

Please check out Part 1 and Part 2!

Jan 23 2020

Over the course of Drupal’s lengthy history, one of the most common feature requests has been automatic updates. A common complaint of Drupal site administrators, especially those who have smaller sites updated less frequently, is the often complex and drawn-out process required to update a Drupal site from one minor version to another. Updates can involve a difficult set of highly specific steps that challenge even the most patient among us. Indeed, many in the Drupal community simply choose to ignore the automatic e-mails generated by Drupal.org indicating the availability of a new version, and waiting can lead to compounding security vulnerabilities.

Fortunately, the era of frustration when it comes to automatic updates in Drupal is now over. As one of the roughly dozen Drupal Core Strategic Initiatives, Drupal automatic updates are a key feature that will offer Drupal users better peace of mind when minor releases occur. Over the last several years, Tag1 Consulting, well-known as leading performance and scalability experts in the Drupal community, has worked closely with the Drupal Association, MTech, and the Free and Open Source Software Auditing (FOSSA) program at the European Commission to make automatic updates in Drupal a reality.

Recently, I (Preston So, Editor in Chief at Tag1 and author of Decoupled Drupal in Practice) sat down with Lucas Hedding (Senior Architect and Data and Application Migration Expert at Tag1), Fabian Franz (Senior Technical Architect and Performance Lead at Tag1), Tim Lehnen (CTO at the Drupal Association), and Michael Meyers (Managing Director at Tag1) to host a Tag1 Team Talks episode about the story of Tag1’s involvement in the automatic updates strategic initiative. In this blog post, we dive into some of the fascinating background and compelling features in Drupal’s new automatic updates, as well as how this revolutionary feature will evolve in the future.

What are automatic updates in Drupal?

Listed as one of the Drupal Core Strategic Initiatives for Drupal 9, Drupal’s automatic updates are intended to resolve some of the most intractable usability issues in maintaining Drupal sites. Updating Drupal sites can be a challenging, tedious, and costly process. Building an automatic updater for Drupal is a similarly difficult problem, with a variety of potential security risks, but it’s a problem that other ecosystems have solved successfully. Following Dries’ announcement of automatic updates as a strategic priority, several early architectural discussions took place, especially at Midwest Drupal Summit 2018 in Ann Arbor.

Automatic updates in Drupal provide certain key benefits for users of all shapes and sizes who leverage Drupal today, including individual end users, small- to medium-sized agencies, and large enterprises. Advantages that apply to all users across the spectrum include a reduction in the total cost of ownership (TCO) for Drupal sites and especially a decrease in maintenance costs.

As for small- and medium-sized agencies and individual site owners, it can be difficult—and deeply disruptive and anxiety-inducing—to mobilize sufficient resources in a brisk timeframe to prepare for Drupal security releases that typically occur on Wednesdays. For many end users and small consultancies who lack experience with keeping their Drupal sites up to date, high-alert periods on Wednesday can be deeply stressful. And for enterprise users, how to incorporate security updates becomes a more complex discussion: Should we integrate manual updates into our security reviews or keep adhering to continuous integration and continuous deployment (CI/CD) processes already in place?

Where are Drupal’s automatic updates today?

The full roadmap for Drupal’s automatic updates is available on Drupal.org for anyone to weigh in on, but in this blog post we focus on its current state and long-term future. Automatic updates in Drupal include updates on production sites as well as on development and staging environments, although some integration with existing CI/CD processes may be required. In addition, automatic updates support both Drupal 7 and Drupal 8 sites.

Because of the ambitious nature of the automatic updates initiative, as well as the desire by the module’s maintainers to undertake a progressive approach from an initial module in the contributed ecosystem to a full experimental module in Drupal core, the development process has been phased from initial architecture to present. Currently, a stable release is available that includes features like public safety alerts and readiness checks.

As for other developments within the scope of available funding from the European Commission, in-place automatic updates have also arrived. If a critical security release is launched, and your site has the automatic updates module installed, you’ll receive an e-mail notification stating that an update is forthcoming in the next several days. Once the update is available, the module will then automatically execute the in-place automatic update if all readiness checks show as green on the Drupal user interface, meaning that no additional action is required on the user’s part.

Key features of Drupal automatic updates

Together with MTech, the Drupal Association, and the European Commission, Tag1 has been heavily involved in architecting the best and most graceful approach, particularly in such a way that it can be generalized for and leveraged by other open-source software projects in the PHP landscape. This includes approaches seen in other ecosystems, such as readiness checking, the downloading of update “quasi-patches,” and signature verification, as well as inspiration from the WordPress community. One of the team’s major concerns in particular is ensuring the continuous integrity of update packages such that users can be confident that such packages are installed from a trusted source.

There are three key features available as part of automatic updates in Drupal that will be part of the initial release of the module, and we discuss each of these in turn here.

Public safety messaging

Before the release that addressed the noted 2014 security vulnerability commonly known as “Drupalgeddon,” a notice was posted indicating that a critical release was forthcoming. When it comes to automatic updates, a similar process would occur: several days before a critical security release for Drupal core or for a contributed project, a notice would be posted and made available on every Drupal site.

This sort of public safety messaging allows for an additional communication mechanism before a key update so that site owners can ensure they are ready for an update to land. In Drupal sites, the feed of alerts originates from the same security advisories (SAs) that the Drupal Security Team and Drupal’s release managers issue.

Readiness or “preflight” checks

Every Drupal site with automatic updates installed will also have readiness checks, also known as “preflight” checks, that run every six hours through Drupal’s cron and inform site owners whether their site is prepared for an automatic update. Readiness checks are essential to ensure that sites are not functionally compromised after an automatic update.

For instance, if Drupal core has been hacked by a developer, if a site is running on a read-only filesystem, or if there are pending database updates that need to be run, readiness checks will indicate that these issues must be resolved before a site can automatically update. There are eight or nine readiness checks available currently, and some are simple warnings to aid the user (e.g. in situations where cron is running too infrequently to update the site automatically in a timely fashion), while others are errors (e.g. the filesystem is read-only and cannot be written to). Whereas warnings will not impede the commencement of an automatic update, errors will.

In-place updates

The final crucial component for automatic updates is in-place updates, the centerpiece of this new functionality. The in-place updates feature in Drupal’s automatic updates downloads a signed ZIP archive from Drupal.org. Using the libsodium library, the feature then verifies the ZIP file’s signature to confirm that the downloaded archive matches Drupal.org’s official archive.

Thereafter, in-place updates will back up all files that are slated for update and update the indicated files. If the process concludes successfully, the site issues a notification to the user that the site has been upgraded. If something fails during the procedure, in-place updates will restore the available backup.
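To illustrate the underlying idea of that signature check, here is a hedged sketch of detached-signature verification using libsodium's JavaScript bindings (libsodium-wrappers). This is purely illustrative: Drupal's actual updater is implemented in PHP, and the file paths and public key below are hypothetical placeholders.

```typescript
// Illustrative only: detached-signature verification with libsodium.
// The file names and public key are hypothetical placeholders.
import { promises as fs } from 'fs';
import sodium from 'libsodium-wrappers';

async function archiveIsAuthentic(
  archivePath: string,
  signaturePath: string,
  publicKeyHex: string
): Promise<boolean> {
  await sodium.ready;
  const archive = await fs.readFile(archivePath);      // e.g. a downloaded core ZIP
  const signature = await fs.readFile(signaturePath);  // detached signature file
  const publicKey = sodium.from_hex(publicKeyHex);     // publisher's public key
  // True only if the signature was produced over exactly these bytes
  // by the holder of the matching private key.
  return sodium.crypto_sign_verify_detached(signature, archive, publicKey);
}
```

Only after a check along these lines succeeds would the updater proceed to back up and replace files, which keeps tampered or corrupted archives from ever being applied.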

Common questions about automatic updates

On the recent Tag1 Team Talks episode about automatic updates in Drupal, contributors from Tag1 and the European Commission answered some of the most important questions on every Drupal user’s mind as the initiative continues to roll out automatic updates features.

What about using Composer versus tarballs?

One of the key differences between Drupal implementations today is the use of the Composer command-line interface to handle Drupal’s installed modules in lieu of managing module installations through tarballs. Due to the widening use of Composer in the Drupal community, if a site has updated to Drupal 8.8.0 or later, the site will be using Composer. And if the two key Composer-related files in Drupal codebases (namely composer.json and composer.lock) are not modified, automatic updates will continue to function properly. However, for sites leveraging Composer and subsequently modifying the /vendor directory in Drupal codebases, this question becomes more complicated.

At present, the automatic updates team will release early versions supporting all scenarios for Drupal sites, short of those sites that have modified composer.json and composer.lock directly. By observing users as they gradually adopt automatic updates, the team plans to learn much about how users juggle Drupal dependencies in order to release improved update methods that accommodate Composer much more gracefully.

Are automatic updates part of Drupal core?

As of now, automatic updates are not part of a vanilla Drupal installation, but all major components of the module will be incorporated into Drupal core in due course. The in-place updates feature presents the most overt difficulties.

Before in-place updates can land in core, the automatic updates team plans to implement an A/B front-end controller that is capable of swapping between two full codebases and toggling back to the backed-up, out-of-date codebase if the update exposes certain issues mid-flight.

What is the future of automatic updates?

While the European Commission has funded the first twelve months of work over the course of 2019, there is much more work to do. The initial European Commission funding accounts for the three aforementioned key features, namely readiness checking, the delivery of update “quasi-patches,” and a robust package signing system, all focused on security updates, which are the most pressing. However, the current year of development excludes better support for Composer and contributed projects.

The longer-term roadmap for automatic updates includes the A/B front-end controller mentioned in the previous section, more robust support for Composer-powered sites, and other types of updates. These include updates for contributed modules and themes as well as batched successful updates for sites that have fallen particularly behind.

Conclusion

Automatic updates will reinvent how we maintain and upgrade Drupal sites, particularly in the realm of security. Because this strategic initiative allows novice and experienced Drupal users alike to save time without worrying about how they will implement updates, it improves the total cost of ownership for Drupal users of all sizes and backgrounds.

No account of the extraordinary initiative that is Drupal’s automatic updates would be complete without appreciation for the sponsors of the developers involved, especially from the Drupal Association, MTech, Tag1 Consulting, and the European Commission’s FOSSA program. Organizations and individuals alike have sponsored automatic updates in Drupal to widen awareness of their brands, to showcase their skills as developers, and to attract other Drupal contributors and help resource their Drupal teams.

To support the continued success of Drupal’s automatic updates, please consider sponsoring development by contacting the Drupal Association. And for more insight into automatic updates directly from the module’s creators, check out our recent Tag1 Team Talks episode on the topic for information we were unable to fit in this blog post.

Special thanks to Fabian Franz, Lucas Hedding, Tim Lehnen, and Michael Meyers for their feedback during the writing process.

Click the following link for our Tag1 Team Talk on Drupal Automatic Updates!

Jan 22 2020
Jan 22

WebRTC, a protocol that facilitates peer-to-peer communication between two clients via the browser, is now supported by all modern browsers. Since its introduction it has mainly been used for web conferencing solutions, but WebRTC is ideal for a variety of other use cases as well. Because of its wide platform support, creating peer-to-peer applications for the web is now more straightforward than ever. But how do you manage many people working together at the same time on the same data? After all, conflict resolution for peer-to-peer applications remains a challenging problem. Fortunately, with Yjs, an open-source framework for real-time collaboration, developers can now combine WebRTC and Yjs to open the floodgates to a range of future-ready collaborative use cases.

Thanks to WebRTC and Yjs, anyone can build collaborative editing into their web application, and this includes more than just text: Yjs enables collaborative drawing, drafting, and other innovative use cases. The advantage of such a peer-to-peer model (in lieu of a client–server model) in the CMS world is that collaborative editing can be added to any editorial interface without significant overhead or a central server handling conflict resolution. By integrating with y-webrtc, the Yjs connector for WebRTC, CMS communities can easily implement collaborative editing and make it natively available to all users, whether on shared hosting or in the enterprise. The future of Drupal, WordPress, and other CMSs is collaborative, and, together, WebRTC and Yjs enable collaborative editing out of the box.
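As a rough sense of how little wiring this takes, here is a minimal sketch of a shared document synchronized over WebRTC with Yjs and y-webrtc. The room name and field names are arbitrary examples, not a recommended production configuration.

```typescript
// A minimal sketch of peer-to-peer shared state with Yjs and y-webrtc.
// The room name and field names are arbitrary examples.
import * as Y from 'yjs';
import { WebrtcProvider } from 'y-webrtc';

const ydoc = new Y.Doc();

// Peers that join the same room discover each other (via a signaling
// server) and then exchange document updates directly over WebRTC.
const provider = new WebrtcProvider('my-article-room', ydoc);

// A shared text type; every connected peer sees the same content.
const ytext = ydoc.getText('body');

ytext.observe(() => {
  console.log('Current document text:', ytext.toString());
});

// Local edits are merged conflict-free with everyone else's edits.
ytext.insert(0, 'Hello from this peer!');
```

Because the provider is just a transport, the same shared document could instead be synchronized over WebSockets or any other channel without changing the application code that reads and writes it.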

In this deep dive into how Yjs enables peer-to-peer collaboration, join Kevin Jahns (Real-Time Collaboration Systems Lead at Tag1 and creator of Yjs), Fabian Franz (Senior Technical Architect and Performance Lead at Tag1), Michael Meyers (Managing Director at Tag1), and Preston So (Editor in Chief at Tag1 and author of Decoupled Drupal in Practice) for a closer look at how you too can build peer-to-peer collaboration into your decentralized application.

Jan 22 2020
Jan 22

Automated tests are rapidly becoming a prerequisite for successful web projects, owing to the proliferation of automated testing tools and an explosion of continuous integration (CI) services that ensure the success of web implementations. Nonetheless, for many developers who are new to the space, automated testing can be an intimidating and altogether novel area that causes more than a few groans at weekly meetings. Luckily, with the right development culture and testing infrastructure in place, your team can focus on implementing new features rather than worrying about the quality of their code.

Yuriy Gerasimov (Senior Back-End Engineer at Tag1 Consulting) delivered a presentation at DrupalCon New Orleans about automated testing and its advantages for web projects of all shapes and sizes. In this four-part blog series, we explore some of the essentials that all developers should be aware of as they explore automated testing as well as the key fundamentals you need to know to incorporate automated testing into your daily workflows. In this second installment, we inspect how to implement a robust testing infrastructure and how to cultivate a development culture favorable to automated testing with the help of code checks.

Key elements of automated testing

Over the course of his career, Yuriy has interacted with a variety of consultancies to gather anecdotal evidence of how automated testing has worked across a range of scenarios. What he discovered is that most organizations tend to implement testing infrastructures and automated testing on a project-by-project basis rather than incorporating it as a part of their regular process for every project. This betrays a fundamental disconnect between the value that automated testing can provide and the inevitable inefficiencies that arise when automated testing is added to individual projects on an ad-hoc basis.

Implementing robust testing infrastructures

One of the classic problems that arise from such a situation is the notion of development teams only implementing a continuous integration (CI) server when customers are able to provide for it in the project budget. In other words, either you can build a server that extends a development culture centered around automated testing, or you risk a scenario in which shared characteristics and shared testing components are absent across projects, requiring you to bootstrap a new infrastructure every time.

Even after improving quality in your projects thanks to a robust testing infrastructure, code reviews remain essential and elevate the interactions between your team’s developers to a higher caliber. Unfortunately, however, developers tend not to share substantial knowledge after they finish the features they are tasked to complete. If a developer sees a colleague not following best practices, code reviews can foster improvements in the code thanks to the knowledge that all parties gain in the process. Because of this, Yuriy suggests that development teams leverage a source control provider like GitHub, Bitbucket, or GitLab that incorporates built-in peer review functionality.

Fostering a development culture conducive to testing

Development culture is also essential to ensure the success of automated testing. This means that all developers should understand how the testing infrastructure functions in order to guard against regressions. When deployments are not tied to individual team members, for instance, this means that all members of the team understand how deployment occurs and are thus able to implement improvements themselves. For this reason, we discourage blocking deployments on a single function or individual contributor.

The optimal situation is one in which even a project manager who does not write code is capable of initializing deployments and kicking off a series of automated tests. When deployment is automated in this way to the point where even team members uninvolved in development can understand how quality is assessed across the project, this can level up the skill sets of the entire team.

For example, Yuriy recommends that every new developer on a team conduct a deployment themselves in isolation from the rest of the team. By doing so, the least experienced individual contributor may encounter inefficiencies theretofore unaccounted for by other team members and catalyze improvements in the quality of the code. When collaborators who are not yet on-boarded are able to foster advancement in the testing infrastructure across the team, the benefits can not only enrich the automated tests themselves but also cultivate a highly improved development culture across the board.

Considering maintenance costs

Nonetheless, maintenance costs are an important facet of automated testing to consider for any implementation, large or small, because they may be sufficiently exorbitant to encourage recalibration in the testing infrastructure. Some of the key questions to ask include: Do you have enough people to maintain the system from end to end? Do you have a dedicated systems administrator or DevOps specialist who can correct issues when discovered?

After all, testing infrastructures tend to be the components of projects that are scrutinized the least once they are complete—this is part of the blessing and curse of the notion of a “one and done” mindset. In the end, every project has different requirements, and other upcoming projects may demand different systems or other approaches to the same problem. When selecting automated testing systems, therefore, it is essential to consider their impact on the maintenance costs that your team will inevitably incur.

Code checks

Among the simplest quality measures to implement, code checks are static analyses of code that are not only educational about the code’s quality itself but also run very quickly unless your codebase includes hundreds of megabytes of code. As Yuriy notes, in that case, you have other problems to solve first. For many Drupal development teams, code checks for adherence to Drupal coding standards are the first line of defense against potential regressions.

By the same token, security checks, which evaluate the risk of potential vulnerabilities, are also critical. Security checks are capable of verifying that certain best practices are followed when it comes to possible attack vectors, such as the use of globals or session variables in key functions or allowing unsanitized input to reach deeper layers of Drupal. These checks are also convenient in that, in many cases, less experienced developers can run security checks and understand the implications of the results without consulting a more senior developer. Along these same lines, linters, which check for syntax errors and the like, can be hugely beneficial for front-end developers.

Complexity metrics and copy-paste detection

Another fascinating selling point for code quality is complexity metrics, which comprise assessments of how complex the code is. Among the most important of these is cyclomatic complexity. Consider a scenario in which you have written a function that contains a foreach loop with multiple control structures (many if-statements) nested within. If your function has many levels of nesting, this can present problems not only in code readability and the likelihood of introducing bugs but also in maintenance. Code checks that analyze cyclomatic complexity can help you to uncover situations in which others would have a horrible experience reading your code by limiting the number of levels that code can be nested (e.g. no more than five levels). Such complexity metrics will aid you in isolating certain logic into other functions or exiting early from your loops to help your code become more legible, as shown in the sketch below.
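The following is a small, hypothetical TypeScript illustration (not drawn from any real codebase) of the kind of deeply nested logic a complexity check would flag, followed by the same logic flattened with early returns.

```typescript
// Hypothetical example: deeply nested control flow that a cyclomatic
// complexity check would flag as hard to read and maintain.
function canPublishNested(user: { active: boolean; roles: string[] },
                          doc: { status: string; wordCount: number }): boolean {
  if (user.active) {
    if (user.roles.includes('editor')) {
      if (doc.status === 'draft') {
        if (doc.wordCount > 0) {
          return true;
        }
      }
    }
  }
  return false;
}

// The same logic rewritten with early returns (guard clauses), which
// keeps nesting shallow and the intent easier to follow.
function canPublish(user: { active: boolean; roles: string[] },
                    doc: { status: string; wordCount: number }): boolean {
  if (!user.active) return false;
  if (!user.roles.includes('editor')) return false;
  if (doc.status !== 'draft') return false;
  return doc.wordCount > 0;
}
```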

Finally, copy-paste detection is another hugely useful element of code checking that allows you to uncover inefficiencies in code. Some developers, for better or worse, often copy and paste code from examples or Stack Overflow responses without necessarily considering how it can best be incorporated into the existing codebase. Copy-paste detection can thus inspect the codebase to flag code pasted in multiple places; if you use the same piece of code in multiple locations, it may be best to abstract it out by isolating it into another function instead.
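As a hypothetical illustration of what such a detector flags, consider the same normalization logic pasted into two functions, and the shared helper it can be extracted into. The function names here are invented for the example.

```typescript
// Hypothetical duplication that a copy-paste detector would flag:
// identical normalization logic pasted into two functions.
function formatAuthorName(raw: string): string {
  return raw.trim().toLowerCase().replace(/\s+/g, ' ');
}
function formatEditorName(raw: string): string {
  return raw.trim().toLowerCase().replace(/\s+/g, ' ');
}

// Extracting the shared logic into one helper removes the duplication,
// so a future fix only needs to happen in a single place.
function normalizeName(raw: string): string {
  return raw.trim().toLowerCase().replace(/\s+/g, ' ');
}
const formatAuthor = (raw: string): string => normalizeName(raw);
const formatEditor = (raw: string): string => normalizeName(raw);
```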

Conclusion

All told, code checks are often so immediate that they can take mere fractions of a second. For this reason, they are a ubiquitous element of automated testing and can allow developers to become more productive in a short period of time. In this way, your team can create not only a robust underlying testing infrastructure well-suited for repeatability but also ensure the longevity of a testing-oriented development culture that values consistent code quality.

In this blog post, we covered some of the most crucial elements of any automated testing approach for projects both large and small, namely a robust testing infrastructure and a focused development culture for automated testing. In turn, we looked at the outsized benefits that code checks and security checks can have on your codebase. In the next installment of this four-part blog series, we will devote our attention to some of the other essentials in the testing toolbox: unit testing and functional testing.

Special thanks to Yuriy Gerasimov, Jeremy Andrews, and Michael Meyers for their feedback during the writing process.


Jan 17 2020
Jan 17

With the release of Drupal 8.8, Drush is also due for an upgrade — to Drush 10. For this venerable command-line interface that many Drupal developers know intimately well, what does the present and future look like? What considerations should we keep in mind when selecting Drupal Console or Drush? What new features are available in Drush 10 that characterize the new CI/CD approaches we see expanding in the Drupal community?

In this Tag1 Team Talk, join the creator and maintainer of Drush Moshe Weitzman (Senior Technical Architect at Tag1), Fabian Franz (Senior Technical Architect and Performance Lead at Tag1), Preston So (Editor in Chief at Tag1), and Michael Meyers (Managing Director at Tag1) for a journey through Drush’s history and promising future. We take a deep look at what made Drush what it is today, the most compelling features in Drush 10, and how a hypothetical Drush in core could look.

Jan 15 2020
Jan 15

Testing has become an important topic in recent years thanks to the explosion of testing technologies and continuous integration (CI) approaches but also due to the need for an ever-widening range of tests for a variety of use cases. For many developers, understanding how to incorporate testing into their development workflows can be daunting due to the many terms involved and, worse yet, the many tools available both in software-as-a-service (SaaS) companies and in open-source ecosystems like Drupal.

Yuriy Gerasimov (Senior Back-End Engineer at Tag1 Consulting) presented a session at DrupalCon New Orleans about modern testing approaches and how to decide on the correct suite of tests for your software development workflows. In this four-part blog series, we analyze the concepts in contemporary testing approaches that you need to know in your day-to-day and why they can not only protect but also accelerate your project progress. In this first installment, we take a look at how to sell testing as an important component of client (your stakeholders) projects, as well as why automated testing is an essential component of any web implementation.

Why testing?

Many people around the web development landscape have heard of testing, but when you ask about real-world examples on real-life projects, the same developers admit that their testing infrastructures are sorely lacking. The most important reason for this is that while testing is a compelling value-add for developers, it can be difficult to incorporate testing as a required line item in projects, especially when business stakeholders are looking to save as much money as possible. How to sell testing to clients continues to be a challenging problem, especially because requesting that a client pay for an additional 100 hours on top of the 500 already incurred can be anathema to their sense of frugality.

After all, many customers will respond by arguing that by choosing you as an agency or developer, they already trust you to foster a high-quality outcome. As such, many clients will ask, “Why do we need to spend extra money on testing?” Without the overt benefit that project components like architectural workshops and actual implementation provide, testing is often the forgotten and most easily abandoned stage of a build.

A real-world example of the need for testing

In his talk at DrupalCon New Orleans, Yuriy describes a large project (prior to his time at Tag1) on which he collaborated with many other people on a development team tasked with finishing the implementation in six months. The project was for a local municipality, with many integration points and features. Every feature needed to work perfectly, including critical features for civic life such as applying for permits, and tolerance for dysfunction was low.

By the end of the project, originally slated for six months, Yuriy’s development team ultimately spent six months developing and an additional six months fixing issues and testing functionality. Fortunately for his team, the municipality had already been through a project whose timeline ballooned out of control, and the team was able to deliver the project within a year as opposed to the previous partner, who spent two years on the same project.

One of the most alarming aspects of the project at the time was that all of the testing the team had done until that moment consisted of manual testing sessions. A meeting was convened, and every developer stood up, responsible for describing the rationale for each feature they had built and demonstrating the feature. Every team member would then test each constituent feature and fix issues on the spot, in the moment.

Learning from past mistakes

As one can imagine, this manual testing approach is highly unsustainable for projects that require tight timelines with a high degree of confidence. Yuriy learned many lessons from the project, and in a subsequent implementation six months later, in which he and his collaborators built an application for people with hearing and speech difficulties, he made considerable changes. The project was complex, with several servers communicating with the application through REST APIs and a high expectation for a user experience that would allow users to click icons representing elocutions that would speak in their place.

From the beginning, Yuriy and his team baked in automated testing up front to test communication with the REST APIs and ensure all requests were functioning properly. They built the project to be scalable because they knew that many users would be using the application simultaneously. In the end, the quality assurance (QA) overhead was minimal, because developers on the team could simply run automated tests and show the result to the client. Even though the size of the project was roughly the same, having built-in automated testing with acceptance criteria was a benefit difficult to overstate.

Defending quality: Selling testing to customers

When testing aficionados attempt to sell testing to customers, they must frame the investment in terms of quality and long term vs. short term costs (failing to deal with this in the short term will actually cost you more in the long term). However, it is admittedly difficult to sell something when its success cannot be measured. After all, from the client perspective, a buyer is selecting a vendor based on the quality with which they implement projects. But there are only anecdotal metrics that indicate whether an organization performs better than another with high-quality projects. For this reason, it is essential that developers interested in selling testing as part of their contracts offer metrics that are comprehensible to the customer.

In the end, the sole concern of customers is that software is delivered without bugs. While ease of maintenance is also important, this is generally considered table-stakes among stakeholders (or a problem for the future). In order to provide a high degree of confidence for issue-free builds, we need metrics for traits like performance and code quality (like adherence to Drupal’s coding standards). Thus, when a customer asks about the justification of a metric such as code quality, we can show the results of tools like code audits, which in Drupal consist of a single Drush command that generates a full report. By performing a code audit on a codebase written by a less experienced team, for example, clients can be sold immediately on the value of your team by seeing the results of a highly critical code audit—and will seldom be opposed to your team winning the contract.

Automated testing

For many developers who are new to the concept of automated testing, the term can unleash a torrent of anxiety and concern about the massive amount of work required. This is why Yuriy recommends, first and foremost, building a testing infrastructure and workflow that demands minimum effort while yielding maximum dividends. Nonetheless, successful automated testing requires a robust testing infrastructure and a healthy development culture that is supportive. Without these elements, success is seldom guaranteed.

Fortunately, the up-front cost of automated testing is low owing to the “one and done” nature of automated testing. Though it’s likely you’ll spend a few weeks building out the infrastructure, there is no need to repeat the same process over and over again. Nevertheless, Yuriy recommends exploring the approaches that other software companies and industries undertake to understand how they tackle similar requirements. For example, automated testing for the C language has been around for many years. Moreover, there is no need to write our own continuous integration (CI) server, thanks to the wide variety of services available on the market, including software-as-a-service (SaaS) solutions that charge as little as $50 per month.

Even if you have written a large number of tests for your project, one of the most important aspects of automated testing may seem particularly obvious. If you don’t run automated tests regularly, you won’t receive any benefits. For instance, it is certainly not adequate to inform your customer that you have implemented automated testing unless you are running said tests weekly or monthly based on the project requirements. Otherwise, the value of the time you have spent implementing automated tests becomes questionable.

Conclusion

As you can see, testing is frequently the most easily forgotten component of web projects due to the extent to which clients question its value. However, armed with the right approaches to selling tests, you too can cultivate a culture of quality assurance (QA) not only within your development team but also for your business’s customers. With the help of automated testing, you can reduce the headaches for your team down the road and justify additional time that means extra money in your pocket.

In this blog post, we covered some of the important aspects of modern testing approaches and why customers are beginning to take a second look at the importance of quality in their projects. In the second installment of this four-part blog series, we’ll turn our attention to implementing testing infrastructures and fostering a development culture favorable to testing within your existing projects. We’ll discuss some of the maintenance costs associated with implementing automated testing and begin to look at two of the most prominent areas of testing: code checks and unit testing.

Special thanks to Yuriy Gerasimov, Jeremy Andrews, and Michael Meyers for their feedback during the writing process.

Nov 11 2019
Nov 11

Table of Contents

What makes a collaborative editing solution robust?
Decentralized vs. Centralized Architectures in Collaborative Editing
Operational Transformation and Commutative Replicated Data Types (CRDT)
Why Tag1 Selected Yjs
Conclusion

In today’s editorial landscape, content creators can expect not only to touch a document countless times to revise and update content, but also to work with other writers from around the world, often on distributed teams, to finalize a document collaboratively and in real time. For this reason, collaborative editing, or shared editing, has become among the most essential and commonly requested features for any content management solution straddling a large organization.

Collaborative editing has long existed as a concept outside the content management system (CMS). Consider, for example, Google Docs, a service that many content creators use to write content together before copy-and-pasting the text into form fields in a waiting CMS. But in today’s highly demanding CMS landscape, shouldn’t collaborative editing be a core feature of all CMSs out of the box? Tag1 Consulting agreed, and the team decided to continue its rich legacy in CMS innovation by making collaborative editing a reality.

Recently, the team at Tag1 Consulting worked with the technical leadership at a top Fortune 50 company to evaluate solutions and ultimately implement Yjs as the collaborative editing solution that would successfully govern content updates across not only tens of thousands of concurrent users but also countless modifications that need to be managed and merged so that content remains up to date in the content management system (CMS). This process was the subject of our inaugural Tag1 Team Talk, and in this blog post, we’ll dive into some of the common and unexpected requirements of collaborative editing solutions, especially for an organization operating at a large scale with equally large editorial teams with diverse needs.

What makes a collaborative editing solution robust?

Collaborative editing, simply put, is the ability for multiple users to edit a single document simultaneously without the possibility of conflicts arising due to concurrent actions—multiple people writing and editing at the same time can’t lead to a jumbled mess. At minimum, all robust collaborative editing solutions need to be able to merge actions together such that every user ends up with the same version of the document, with all changes merged appropriately.

Collaborative editing requires a balancing act between clients (content editors), communication (whether between client and server or peer-to-peer), and concurrency (resolving multiple people’s simultaneous actions). But there are other obstacles that have only emerged with the hyperconnectivity of today’s global economy: The ability to edit content offline or on slow connections, for instance, as well as the ability to resynchronize said content, is a baseline requirement for many distributed teams.

The provision of a robust edit history is also uniquely difficult in collaborative editing. Understanding what occurs when an “Undo” or “Redo” button is clicked in a single-user editor without real-time collaboration is a relatively trivial question. However, in collaborative editors where synchronization across multiple users’ changes and batch updates from offline editing sessions need to be reflected in all users’ content, the definition of undo and redo actions becomes all the more challenging.
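As one point of reference, Yjs (one of the solutions evaluated below) approaches this with an undo manager that can be scoped so that pressing “Undo” reverts only the local user’s changes rather than a collaborator’s. A minimal sketch follows; the field name is an arbitrary example.

```typescript
// A minimal sketch of collaborative undo/redo with Yjs. By default the
// UndoManager tracks locally originated changes, so undoing does not
// revert edits that arrived from other collaborators.
import * as Y from 'yjs';

const ydoc = new Y.Doc();
const ytext = ydoc.getText('body');

const undoManager = new Y.UndoManager(ytext);

ytext.insert(0, 'First sentence. '); // local change, tracked
undoManager.undo();                  // removes only the local insertion
undoManager.redo();                  // re-applies it
```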

Moreover, real-time collaborative editing solutions also need to emphasize the collaboration element itself and afford users the ability to understand where other users’ cursors are located in documents. Two of the most fundamental features of any collaborative editing solution in today’s landscape are indications of presence and remote cursors, both characteristics of free-to-use collaborative editing solutions such as Google Docs.

Presence indications allow for users in documents to see who else is currently actively working on the document, similar to the user thumbnails in the upper-right corner of a typical Google Docs document. Remote cursors, meanwhile, indicate the content a user currently has selected or the cursor location at which they last viewed or edited text.
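In Yjs-based setups, presence information and remote cursors are typically carried by the “awareness” protocol that connection providers expose. The sketch below assumes the y-webrtc provider discussed earlier in this collection, and the user fields shown are arbitrary examples.

```typescript
// A hedged sketch of presence via the awareness protocol exposed by a
// Yjs connection provider (here y-webrtc). User fields are examples.
import * as Y from 'yjs';
import { WebrtcProvider } from 'y-webrtc';

const ydoc = new Y.Doc();
const provider = new WebrtcProvider('my-article-room', ydoc);

// Broadcast this client's presence information to all connected peers.
provider.awareness.setLocalStateField('user', {
  name: 'Alice',
  color: '#30bced',
});

// React whenever anyone joins, leaves, or updates their state.
provider.awareness.on('change', () => {
  const states = Array.from(provider.awareness.getStates().values());
  console.log('Active collaborators:', states.map((s) => s.user?.name));
});
```

Editor bindings build on this same channel to render remote cursors and selections inside the document itself.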

During Tag1’s evaluation of the collaborative editing landscape, the team narrowed the field of potential solutions down to these four: Yjs, ShareDB, CKEditor, and Collab. See below for a comparison matrix of where these real-time collaborative editing solutions stand, with further explanation later in the post.

| | Yjs | ShareDB | CKEditor | Collab |
| --- | --- | --- | --- | --- |
| License | MIT | MIT | Proprietary (On-Prem Hosted) | MIT |
| Offline editing | ✔ | | | |
| Decentralized | ✔ | | | |
| Network-agnostic | ✔ | | | |
| Shared cursors | ✔ | | | |
| Presence (list of active users) | ✔ | | | |
| Commenting | | | | |
| Sync after server data loss | ✔ | ✖ (sync error) | ✖ (Unsaved changes are lost) | |
| Can implement other collaborative elements (e.g., drawing) | ✔ | | | |
| Scalable | ✔ (Many servers can handle the same document) | ✔ (Locking via underlying DB) | ✔ (Hosted) | ✖ (Needs central source of truth - a single host for each doc. Puts additional constraints on how doc updates are propagated to “the right server”) |
| Currently supported editors | ProseMirror, Quill, Monaco, CodeMirror, Ace | Quill, CodeMirror, Ace | CKEditor | ProseMirror |
| Implementation | CRDT | OT | OT | Reconciliation |
| Demos | Editing, Drawing, 3D model shared state | Sync, Editing | Editing | Editing in Tip Tap |

Decentralized vs. Centralized Architectures in Collaborative Editing

Whereas the features within a collaborative editor are of paramount importance to its users, the underlying architecture can also play a key role in determining a solution’s robustness. For instance, many long-standing solutions require that all document operations ultimately occur at a central server instance, particularly in the case of ShareDB and Collab.

While a centralized server does confer substantial advantages as a single source of truth for content state, it is also a central source of failure. If the server fails, the most up-to-date state of the content is no longer accessible, and all versions of the content will become stale. For mission-critical content needs where staleness is unacceptable, centralized servers are recipes for potential disaster.

Furthermore, centralized systems are generally much more difficult to scale, which is an understandably critical requirement for a large organization operating at considerable scale. Google Docs, for example, has an upper limit on users who can actively collaborate. With an increasing number of users, the centralized system will start to break down, and this can only be solved with progressively more complex resource allocation techniques.

For these reasons, Tag1 further narrowed the focus to decentralized approaches that allow for peer-to-peer interactions, namely Yjs, which ensures that documents will always remain in sync regardless of server availability, as document copies live on each user’s own instance as opposed to on a failure-prone central server. This means users can always refer to someone else’s instance in lieu of a single authoritative source that may not be available. Resource allocation is also much easier with Yjs because many servers can store and update the same document. It is significantly easier to scale insofar as there is essentially no limit on the number of users that can work together.

Operational Transformation and Commutative Replicated Data Types (CRDT)

The majority of real-time collaborative editors, such as Google Docs, EtherPad, and CKEditor, use a strategy known as operational transformation (OT) to realize concurrent editing and real-time collaboration. In short, OT facilitates consistency maintenance and concurrency control for plain text documents, including features such as undo/redo, conflict resolution, and tree-structured document editing. Today, it is used to power collaboration features in Google Docs and Apache Wave.

Nonetheless, OT comes with certain disadvantages, namely the fact that existing OT frameworks are very tailored to the specific requirements of a certain application (e.g. rich text editing) whereas Yjs does not assume anything about the communication protocol on which it is implemented and works with a diverse array of applications. Yjs leverages commutative replicated data types (CRDT), used by popular tools like Apple Notes, Facebook’s Apollo, and Redis, among others. As Joseph Gentle, a former engineer on the Google Wave product and creator of ShareDB, once wrote:

“Unfortunately, implementing OT sucks. There’s a million algorithms with different tradeoffs, mostly trapped in academic papers. The algorithms are really hard and time consuming to implement correctly. […] Wave took 2 years to write and if we rewrote it today, it would take almost as long to write a second time.”

The key distinction between OT and CRDT is as follows: Consider an edit operation in which a user inserts a word at character position 5 in the document. In operational transformation, if another user adds 5 characters to the start of the document, the insertion is moved to position 10. While this is highly effective for simple plain text documents, complex hierarchical trees such as the document object model (DOM) present significant challenges. CRDT, meanwhile, assigns a unique identifier to every character, and all state transformations are applied relatively to objects in the distributed system. Rather than identifying the place of insertion based on character count, the character at that place of insertion retains the same identifier regardless of where it is relocated to within the document. As one benefit, this process simplifies resynchronization after offline editing.
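To make that distinction concrete, here is a small sketch using Yjs (a CRDT implementation, discussed below) in which two replicas make concurrent edits and converge once updates are exchanged. The text and insertion positions are arbitrary examples, not anything from the client project described here.

```typescript
// Illustration of CRDT convergence with Yjs: two replicas edit
// concurrently, exchange updates, and end up with identical documents.
import * as Y from 'yjs';

const docA = new Y.Doc();
const docB = new Y.Doc();

docA.getText('body').insert(0, 'Hello world');
// Bring replica B up to date with replica A's state.
Y.applyUpdate(docB, Y.encodeStateAsUpdate(docA));

// Concurrent edits: A prepends characters while B inserts in the middle.
docA.getText('body').insert(0, '>>> ');
docB.getText('body').insert(6, 'brave ');

// Exchange updates in both directions (the order does not matter).
Y.applyUpdate(docB, Y.encodeStateAsUpdate(docA));
Y.applyUpdate(docA, Y.encodeStateAsUpdate(docB));

// Both replicas converge to the same text: ">>> Hello brave world".
console.log(docA.getText('body').toString());
console.log(docB.getText('body').toString());
```

Because each character carries its own identifier, B’s insertion lands in the right spot even though A shifted every character index by prepending text.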

If you want to dive deeper, the Conclave real-time editor (which is no longer maintained and therefore was not considered in our analysis) has another great high-level writeup explaining OT and CRDT. Additionally, you can watch or listen to our deep dive on OT vs. CRDT as part of our recent Tag1 Team Talk, “A Deep Dive into Yjs - Part 1”.

Why Tag1 Selected Yjs

While other solutions such as ShareDB, CKEditor, and ProseMirror Collab are well-supported and very capable solutions in their own right, these technologies didn’t satisfy the specific requirements of our client’s project. For instance, ShareDB relies on the same approach as Google Docs, operational transformation (OT), rather than relying on the comparatively more robust CRDT (at least for our requirements). CKEditor, one of the most capable solutions available today, relies on closed-source and proprietary dependencies. Leveraging an open-source solution was strongly preferred by our client for many reasons, foremost among them to meet any potential need by enhancing the software themselves, and they didn’t want to be tied to a single vendor for what they saw as a core technology to their application. Finally, ProseMirror’s Collab module does not guarantee conflict resolution, which can lead to merge conflicts in documents.

Ultimately, the Tag1 team selected Yjs, an implementation of commutative replicated data types (CRDT), due to its network agnosticism and conflict resolution guarantees. Not only can Yjs support offline and low-connectivity editing, it can also store documents in local databases on user devices (such as through IndexedDB) to ensure full availability without a stable internet connection. Because Yjs facilitates concurrent editing on tree structures, not just text, it integrates well with view libraries such as React. Also compelling is its support for use cases beyond simple text editing, including collaborative drawing and state-sharing for 3D models. Going beyond text editing to implement other collaborative features is an important future goal for the project.
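As a minimal sketch of what local persistence looks like in practice, the snippet below wires a Yjs document to the y-indexeddb provider so that a copy of the document survives offline sessions. The document name and shared type names are illustrative, and the exact setup will vary by application.

```js
import * as Y from 'yjs'
import { IndexeddbPersistence } from 'y-indexeddb'

// A Yjs document holding a shared, tree-structured body (e.g. for a rich text
// editor) and a shared map of metadata.
const ydoc = new Y.Doc()
const body = ydoc.getXmlFragment('body')
const meta = ydoc.getMap('meta')

// Persist the document to the browser's IndexedDB so edits survive offline
// sessions; 'article-123' is an illustrative document name.
const persistence = new IndexeddbPersistence('article-123', ydoc)
persistence.whenSynced.then(() => {
  console.log('Local copy loaded; editing can continue without a network connection.')
})

meta.set('title', 'Draft title')
```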

Furthermore, because Yjs performs transactions on objects across a distributed system rather than on a centralized server, it avoids a single point of failure and scales extremely well, with no limitations on the number of concurrent collaborators. Moreover, Yjs is one of the few stable and thoroughly tested implementations of CRDT available, while many of its counterparts rely on OT instead.
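A small, hedged example of this decentralized model: two in-memory Y.Doc replicas edit the same logical document, exchange binary updates directly over any transport, and converge without a coordinating server. The field name 'content' is illustrative.

```js
import * as Y from 'yjs'

// Two peers editing the same logical document, with no central server involved.
const docA = new Y.Doc()
const docB = new Y.Doc()

docA.getText('content').insert(0, 'Hello')
docB.getText('content').insert(0, 'world')

// Each peer encodes its state as a compact binary update and sends it to the
// other over any transport (WebSocket, WebRTC, even a file).
Y.applyUpdate(docB, Y.encodeStateAsUpdate(docA))
Y.applyUpdate(docA, Y.encodeStateAsUpdate(docB))

// Both replicas converge to the same text without manual conflict resolution.
console.log(docA.getText('content').toString() === docB.getText('content').toString()) // true
```

In production, a connector such as y-websocket or y-webrtc handles this update exchange automatically.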

Finally, because Yjs focuses on providing decentralized servers and connector technology rather than prescribing the front-end editor, there is no dependency on a particular rich text editor, and organizations can opt to swap out the editor in the future with minimal impact on other components in the architecture. It also makes it easy to use multiple editors. For instance, our project uses ProseMirror for collaborative rich text editing and CodeMirror for collaborative Markdown editing (and other text formats can be added easily).
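As an illustrative sketch of attaching a second editor to the same shared document, the snippet below binds a Y.Text holding Markdown to a CodeMirror 5 instance via the y-codemirror binding; the element ID and shared type name are placeholders. A corresponding y-prosemirror binding can attach a rich text editor to another shared type in the same document.

```js
import * as Y from 'yjs'
import CodeMirror from 'codemirror'
import 'codemirror/mode/markdown/markdown'
import { CodemirrorBinding } from 'y-codemirror'

// One shared Yjs document can back several editors at once. Here, a Y.Text
// holding Markdown is bound to a CodeMirror instance.
const ydoc = new Y.Doc()
const markdown = ydoc.getText('markdown')

const editor = CodeMirror(document.getElementById('markdown-editor'), {
  mode: 'markdown',
  lineWrapping: true,
})

// Keeps the CodeMirror buffer and the shared Y.Text in sync in both directions.
const binding = new CodemirrorBinding(markdown, editor)
```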

Real-time collaborative editing surfaces unique difficulties for any organization seeking to implement content workflows at a large scale. Over the course of the past decade, many new solutions have emerged to challenge the prevailing approaches dependent on operational transformation. Today, for instance, offline editing and effective conflict resolution on slow connections are of paramount importance to content editors and stakeholders alike. These key requirements have led to an embrace of decentralized, peer-to-peer approaches to collaborative editing rather than reliance on a failure-prone central server.

Tag1 undertook a wide-ranging evaluation of available solutions for collaborative editing, including Yjs, ProseMirror’s Collab module, ShareDB, and CKEditor. In the end, Yjs emerged as the winner due to its implementation of CRDT, its scalability, and its emphasis on network agnosticism and conflict resolution, all areas where the other solutions sometimes fell short. While any robust evaluation of these solutions takes ample time, it’s our hope at Tag1 that our assessment guides your thinking as you delve into real-time collaborative editing for your organization.

Special thanks to Fabian Franz, Kevin Jahns, Michael Meyers, and Jeffrey Gilbert for their feedback during the writing process.

Nov 11 2019
Nov 11

Table of Contents

What is a Rich Text Editor?
The Modern Rich Text Editor and Emerging Challenges
How we Evaluated Rich Text Editors
Why Tag1 Selected ProseMirror
Conclusion

Among all of the components commonly found in content management systems (CMSs) and typical editorial workflows, the rich text editor is perhaps the one that occupies the least amount of space but presents the most headaches due to its unique place in content architectures. From humble beginnings in discussion forums and the early days of the web and word processing, the rich text editor has since evolved into a diverse range of technologies that support a lengthening list of features and increasingly rich integrations.

Recently, Tag1 embarked on an exploration of rich text editors to evaluate solutions for a Fortune 50 company with demanding requirements. In this blog post, we’ll take a look at what impact the choice of a rich text editor can have down the line, some characteristics of the modern rich text editor, and Tag1’s own evaluation process. Finally, we discuss the rationale behind Tag1’s choice of ProseMirror as a rich text editor and some of the requirements that led to a decision that can serve as inspiration for any organization.

What is a Rich Text Editor?

At its core, a rich text editor enables content editors not only to insert and modify content but also to format text and insert assets that enrich the content in question. Rich text editors are the toolbars that line every body field in CMSs, allowing for a rich array of functionality also found in word processors and services like Google Docs. Most content editors are deeply familiar with basic formatting features like boldfacing, italicization, underlining, strikethrough, text color, font selection, and bulleted and numbered lists.

There are other features that are considered table-stakes for rich text editors, especially for large organizations with a high threshold for formatting needs. These can include indentation (and outdent availability), codeblocks with syntax highlighting (particularly for knowledge bases and documentation websites for developers), quotations, collapsible sections of text, embeddable images, and last but not least, tables.

While these features comprise the most visible upper layer of rich text editors, the underlying architecture and data handling can be some of the most challenging elements to implement. All rich text editors have varying degrees of customizability and extensibility, and all editors similarly have different demands and expectations when it comes to how they manage the underlying data that ultimately permits rich formatting. In the case of Tag1’s top Fortune 50 customer, for example, the ability to insert React-controlled views and embedded videos into content ended up becoming an essential requirement.

The Modern Rich Text Editor and Emerging Challenges

Whereas many of the rich text editors available in the late 1990s and early 2000s trafficked primarily in basic formatting, larger editorial organizations have much higher expectations for the modern rich text editor. For instance, while many rich text editors historically focused solely on manipulation of HTML, the majority of new rich text editors emerging today manipulate structured data in the form of JSON, presenting unique migration challenges for those still relying on older rich text editors.
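To illustrate, here is a minimal sketch using ProseMirror’s basic schema. The snippet is only meant to show that the editor’s document is structured JSON rather than an HTML string; the inserted text is arbitrary.

```js
import { schema } from 'prosemirror-schema-basic'
import { EditorState } from 'prosemirror-state'

// A modern editor manipulates a structured document model, not raw HTML.
const state = EditorState.create({ schema })
const tr = state.tr.insertText('Hello, structured content')
console.log(tr.doc.toJSON())
// {
//   type: 'doc',
//   content: [
//     { type: 'paragraph',
//       content: [ { type: 'text', text: 'Hello, structured content' } ] }
//   ]
// }
```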

Today, there are few to no robust rich text editors available that support swappable document formats between HTML, WYSIWYG, Markdown, and other common formats. Any conversion between HTML, WYSIWYG, and Markdown formats will result in some information loss due to differences in available formatting options. As an illustrative example, a WYSIWYG document can include formatting features that are unsupported in Markdown, such as additional style information or even visually encoded traits such as the width of a table column. While converting a document from one format to another preserves the majority of the information, there will inevitably be data loss due to unsupported features.

Moreover, as rich text editors become commonplace and the expectations of content editors evolve, there is a growing desire for these technologies to be accessible for users of assistive technologies. This is especially true in large companies such as Tag1’s Fortune 50 client, which must provide for content editors living with disabilities. Rich text editors today frequently lack baseline accessibility features such as ARIA attributes for buttons in editorial interfaces, presenting formidable challenges for many users.

How we Evaluated Rich Text Editors

Tag1 evaluated a range of rich text editors, including ProseMirror, Draft.js, CKEditor 5, Quill, Slate, and TipTap. Our mission was to find a solution that would be not only approachable for content editors accustomed to word processors and Google Docs but also customizable and robust in handling the underlying data. But there were other requirements as well that were particularly meaningful to the client for whom Tag1 performed this evaluation.

An important first requirement was the ability for the chosen rich text editor to integrate seamlessly with collaborative editing solutions like Yjs and Collab out of the box. In addition, because of the wide-ranging use of open-source projects at the organization, a favorable license was of great importance to allow teams to leverage the project in various ways. Finally, other characteristics such as plugin availability, an active contributor community, and some support for accessibility were considered important during the evaluation.

As mentioned previously, other requirements were more unique to the customer in question, including native mobile app support, which would allow for mobile editing of rich text, a common feature otherwise found in many responsive-enabled CMSs; embedding of React view components, which would provide for small but rich dynamic components within the body of an item of content; and the ability to annotate content with comments and other notes of interest to content editors.

The table below displays the final results of the rich text editor evaluation and illustrates why Tag1 ultimately selected ProseMirror as their editor of choice for this project.

* Doesn’t support feature yet, but could be implemented (additional effort & cost)
** Comments are part of the document model (requirements dictate they not be)
*** Per CKEditor documentation -- needs to be verified (see review process below)
⑅ In-depth accessibility reviews must be completed before we can grade

Why Tag1 Selected ProseMirror

Ultimately, Tag1 selected ProseMirror as the rich text editor for its upcoming work with a top Fortune 50 company. Developed by Marijn Haverbeke, the author of CodeMirror, one of the most popular code editors for the web, ProseMirror is a richly customizable editor that also offers an exhaustive and well-documented API. In addition, Haverbeke is known for his commitment to his open-source projects and responsiveness in the active and growing ProseMirror community. As those experienced in open-source projects know well, a robust and passionate contributor community does wonders to lower implementation and support costs.

Out of the box, ProseMirror is not particularly opinionated about the aesthetics of its editor, nor is it especially feature-rich. But this is in fact a boon for extensibility, as each additive feature of ProseMirror is provided by a distinct module encapsulating that functionality. For instance, while features considered table-stakes among rich text editors today, such as basic support for tables and lists, are part of the core ProseMirror project, others, like improved table support and codeblock formatting, are only available through community-contributed ProseMirror modules.
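As an illustrative sketch of this modular assembly (the package names are real ProseMirror modules, while the container element is a placeholder), an editor is composed from separate packages for the schema, state, view, and baseline plugins:

```js
import { EditorState } from 'prosemirror-state'
import { EditorView } from 'prosemirror-view'
import { schema } from 'prosemirror-schema-basic'
import { exampleSetup } from 'prosemirror-example-setup'

// Each piece of functionality comes from its own module: the document schema,
// state management, the view layer, and a bundle of baseline plugins
// (keymaps, history, input rules) provided by prosemirror-example-setup.
const state = EditorState.create({
  schema,
  plugins: exampleSetup({ schema }),
})

// Mount the editor into an (illustrative) container element on the page.
const view = new EditorView(document.getElementById('editor'), { state })
```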

ProseMirror also counts among its many advocates large organizations and publishers that demand considerable heft from their rich text editing solutions. Newspapers like The New York Times and The Guardian, as well as well-known companies like Atlassian and Evernote, are leveraging ProseMirror and its modules. In fact, Atlassian published the entirety of their ProseMirror modules under the highly permissive Apache 2.0 license.

Beyond these considerations, many other open-source editors, such as Tiptap, the Fidus Writer editor, and CZI-ProseMirror, are built on ProseMirror as a foundation, which made it a logical choice for Tag1 and part of Tag1’s commitment to enabling innovation in editorial workflows with a strong and stable foundation at their center. Through an integration between ProseMirror and Yjs, the subject of a previous Tag1 blog post on collaborative editing, all requirements requested by the top Fortune 50 company will be satisfied.
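As a rough sketch of what that integration looks like, assuming the y-prosemirror and y-websocket packages; the server URL, room name, and element ID are illustrative placeholders rather than project specifics:

```js
import * as Y from 'yjs'
import { WebsocketProvider } from 'y-websocket'
import { ySyncPlugin, yCursorPlugin, yUndoPlugin } from 'y-prosemirror'
import { EditorState } from 'prosemirror-state'
import { EditorView } from 'prosemirror-view'
import { schema } from 'prosemirror-schema-basic'

// Shared document plus a connector; the WebSocket URL and room name are placeholders.
const ydoc = new Y.Doc()
const provider = new WebsocketProvider('wss://example.com/collab', 'article-123', ydoc)
const fragment = ydoc.getXmlFragment('prosemirror')

// The y-prosemirror plugins keep the editor state, remote cursors, and undo
// history in sync with the shared Yjs document.
const state = EditorState.create({
  schema,
  plugins: [ySyncPlugin(fragment), yCursorPlugin(provider.awareness), yUndoPlugin()],
})

const view = new EditorView(document.getElementById('editor'), { state })
```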

Conclusion

Choosing a rich text editor for your editorial workflows is a decision fraught with small differences that carry large implications. While basic features such as simple list and table formatting are now ubiquitous across rich text editors like ProseMirror, Draft.js, CKEditor, Quill, and Slate, the growing demands of our customers obligate us to consider ever more difficult requirements than before. At the request of a top Fortune 50 company, Tag1 embarked on a robust evaluation of rich text editors that satisfied some of their unique requirements, such as React component embeds, accessibility, and codeblocks with syntax highlighting.

In the end, the team opted to leverage ProseMirror due to its highly active and passionate community and the availability of features such as content annotations, native mobile support, and accessibility support. Thanks to its large community and extensive plugin ecosystem, Tag1 and its client can work with a variety of available tools to craft a truly futuristic rich text editing experience for their content editors. As this evaluation indicates, it is always of utmost importance to focus not only on the use cases for required features, but also on the people who will use the product: the content editors and engineers who need to write, format, and manipulate rich text, day in and day out.

Special thanks to Fabian Franz, Kevin Jahns, Jeffrey Gilbert, and Michael Meyers for their feedback during the writing process.
