Mar 25 2020

Laravel is a PHP framework that has received comparatively less attention than CMS heavyweights like Drupal and WordPress or frameworks like Symfony, but it nonetheless offers several compelling advantages for enterprise website implementations. Recently, a large organization worked with Tag1 to migrate legacy databases to a more modern architecture that, coupled with Laravel and Vue.js, led to considerable improvements in not only developer experience but also user experience.

Moreover, Laravel is an excellent tool for architecting custom applications with complex logic without offloading that complexity onto the developer experience. Thanks to the Laravel ecosystem, you can have a working PHP application with APIs and a functional single-page JavaScript application in Vue.js with minimal overhead. Instead of defining low-level properties as in Symfony or contending with premade assumptions as in Drupal or WordPress, you can focus on user workflows in Laravel and benefit from a more reasonable learning curve as well.

In this Tag1 Team Talks episode, László Horváth (Senior Laravel Developer at Tag1) joins guests Fabian Franz (Senior Technical Architect and Performance Lead at Tag1), Michael Meyers (Managing Director at Tag1), and your host Preston So (Editor in Chief at Tag1 and Senior Director, Product Strategy at Oracle) for a deep dive into why Laravel should be your choice when building a mission-critical PHP architecture that encompasses a decoupled front end in JavaScript and multi-level access control and permissioning.

Mar 18 2020

Table of Contents

Claro's bright future
--Rethinking the layout of administration pages
--Simplifying complex user interfaces
--A dream state for Claro?
--Why Drupal is a perfect place for UI development
Conclusion: How to get involved

Drupal has seen a range of administrative experiences over its history, from early mainstays like the Garland theme to Seven, perhaps the most recognizable iteration of Drupal's editorial experience. Recently, another administration theme joined the pantheon of core themes that shape how users interact with Drupal, from their first visit through countless sessions afterwards. The Claro administration theme, now available in Drupal core for Drupal 8, offers a more modern user experience thanks to its emphasis on usability and accessibility throughout the design and development process.

A short time ago, Cristina Chumillas (Claro maintainer and Front-End Developer at Lullabot), Fabian Franz (Senior Technical Architect and Performance Lead at Tag1), and Michael Meyers (Managing Director at Tag1) joined your correspondent (Preston So, Editor in Chief at Tag1 and author of Decoupled Drupal in Practice) for a wide-ranging conversation about why Claro is such a compelling addition to Drupal's editorial experience. We discussed how Claro's focus on accessibility and user experience outcomes permitted a narrowly scoped development cycle while leaving room for other initiatives to extend Claro's design to further use cases. In this multi-part blog series, we explore some of what we learned during this Tag1 Team Talks episode. In this fourth and final installment, we unlock some of the features ahead on Claro's roadmap and share how you too can contribute to the exciting future of the Claro theme.

Claro's bright future

Before we proceed, if you have not yet taken a look at the first, second, and third installments of this blog series on Claro, I highly recommend you do so to better understand the history of Claro, the motivations behind a new administration theme, some of the data in the Drupal community that lent urgency to certain features, and how Claro's design took shape. In the sections that follow, we discuss some of the improvements and roadmap items that Claro could see in the coming years.

Rethinking the layout of administration pages

Claro: Drupal Node Add

During our Tag1 Team Talks episode, we asked Cristina to share the most significant improvements she would like to see in Claro's future. Cristina responded by citing the layouts of administration pages, namely the forms and other user interface components that constitute much of the editorial experience in Drupal. After all, she asks, "who hasn't wanted to create a node edit form that places something on the right sidebar?" This question is pressing not solely because of the possibilities such layout manipulation would unlock but also because custom administration themes built by developers are typically the only way for Drupal users to heavily influence the layout of administration pages in Drupal.

Whereas in other content management systems this requirement might be considered superfluous, in Drupal it is a basic need, as this functionality has long been available to developers. Cristina also envisions an experience that offers users even greater flexibility to choose a particular layout depending on the content an editor is currently working on. Because Drupal allows such a high degree of customization of content types and their fields, node edit forms could feasibly have anywhere from two to forty fields, and especially in the latter case they test the limits of what is possible within an administration theme's design. In many scenarios, it is the site builder who decides how the content model should appear, rather than the Drupal community enforcing a rigid approach to content modeling.

Simplifying complex user interfaces

Claro: Drupal Permissions

Revamping the appearance of certain user interfaces in Drupal that have reputations for being overly complex is also a key roadmap item for the Claro theme. Views UI, for instance, is regularly cited as one of the most challenging and incomprehensible user interfaces in Drupal despite its considerable power. Other pages, such as the modules list page and the user roles and permissions page, are robust for a small number of modules or user roles but quickly become unwieldy as soon as the list reaches dozens or hundreds of items. Though there was an attempt to redesign these pages in the past, much of that effort stalled due to the monumental amount of work involved.

Cristina also discusses the prospect of many of these complicated user interfaces becoming initiatives in their own right, and she recommends that motivated contributors add to the Ideas queue to make them a reality and find other interested parties. The first step, in Cristina's view, is to examine the user interface's requirements and determine from there the next best steps to realize ambitious experiences that include JavaScript-driven interactivity and modern design patterns.

A dream state for Claro?

Claro: Extend

One of the questions we posed to Cristina during our Tag1 Team Talks episode had to do with her ideal vision for Claro — what is the dream state for the theme? Cristina responded by recommending a distinction between Claro and the broader effort to modernize the administrative interface: Claro focuses on the aesthetic renewal of the administrative interface rather than on snazzy features like JavaScript-driven interactivity and animations. Now, concludes Cristina, "we can focus on the rest of the things we'll need."

To make Claro stable for the next release of Drupal, a substantial amount of work remains. Beyond that milestone, the Claro team is interested in exploring how Claro's components could work in complex environments like Views UI and other similar interfaces on Drupal's docket. Stress-testing Claro's components would also enable them to cover more of the foundational cases that user interface developers need, in addition to underpinning initial development of redesigned interfaces. Cristina's hope is that many in the Drupal community will view Claro's components as a robust foundation on which diverse user interfaces can be implemented.

Why Drupal is a perfect place for UI development

Cristina argues during our Tag1 Team Talks episode that "Drupal is the perfect place to do lots of things from the UI perspective." And the vision she shared with us in the previous section is only one step on the way to an ambitious future state for Drupal. Cristina shares that she wants to see not only two- and three-column layouts but also JavaScript-driven interactions that are commonplace across the web these days.

WordPress's Gutenberg project has served as a valuable source of inspiration for Claro's architects, but Cristina acknowledges that many of Gutenberg's goals are not viable in the Drupal context due to Drupal's emphasis on modular and structured content that does not match Gutenberg's more full-page approach. For example, Cristina argues that having a drag-and-drop Gutenberg-like user interface on a content entity in Drupal with only five fields could be overkill for the node's requirements. Nonetheless, one of the most compelling next steps for Claro and other front-end modernization efforts in Drupal could be to facilitate several different ways of editing content based on its characteristics.

In the end, Drupal's focus on experimentation is part of what makes the community and ecosystem so rich. Experimenting with new user interfaces and new ways to interact with content is key to the success of any application, and Drupal's administrative interface is no exception. By shedding some of the anachronistic approaches implemented by the Seven theme, Claro represents an exciting next stage in Drupal's front end and user experience. Cristina concludes by restating the primary purpose of Drupal's administration layer: not to be a fancy front end but to provide the most robust solutions possible to manage content in an efficient way.

Conclusion: How to get involved

The Claro administration theme is an exciting new development for Drupal core, and I'm eager to see how users respond across a variety of use cases and personas to the biggest update to Drupal's administrative interface in quite some time. But best of all, you too can get involved in Claro's development and propose and contribute your own ideas. All conversations about Claro take place in Drupal's Slack organization. Design considerations are discussed in the #admin-ui-design channel, whereas the larger-scale work around revamping the administrative interface is centered in the #admin-ui channel. Discussions about JavaScript and its role in driving richer interactions take place in the #javascript channel. These channels are the most direct way to communicate with other contributors online.

As for Drupal.org, Claro has its own component in Drupal core, searchable in issues on Drupal.org and assignable to issues that you create. In addition, the initiative's coordinators are not only active and prolific but also friendly and approachable. Lauri Eskola (lauriii) and Cristina attend Drupal events regularly and are often available on Drupal's communication channels. As for me, I plan to apply the Claro administration theme to the editorial interfaces on my own personal site. Thanks to the Claro theme, Drupal enters a new decade with a modern look and feel and a carefully considered approach that evokes the best of open-source communities: deep collaboration and thoughtful innovation.

Special thanks to Cristina Chumillas, Fabian Franz, and Michael Meyers for their feedback during the writing process.

Photo by Rachel Ren on Unsplash

Mar 17 2020

Table of Contents

Shared editing in Drupal?
--How collaborative editing could become a reality in Drupal
--Yjs across ecosystems
Challenges of building collaborative editing
Other alternatives for shared editing
Conclusion

One of the seemingly intractable difficulties in content management systems is the notion of supporting collaborative editing in a peer-to-peer fashion. Indeed, from an infrastructural standpoint, enabling shared editing in a context where server-side CMSs rule the day can be particularly challenging. All of this may soon change, however, with the combination of Yjs, an open-source real-time collaboration framework, and Gutenberg, the new editor for WordPress. With the potential future outlined by Yjs and collaborative editing in WordPress, we can open the door to other thrilling possibilities such as shared layout building or even collaborative drawing in the CMS context.

On an episode of the Tag1 Team Talks show, your correspondent (Preston So, Editor in Chief at Tag1 and author of Decoupled Drupal in Practice) chatted with Kevin Jahns (creator of Yjs and Real-Time Collaboration Systems Lead at Tag1) and Michael Meyers (Managing Director at Tag1) about how WordPress is enabling shared editing in Gutenberg. In this multi-part blog series, we summarize some of the concepts we discussed. In this third installment of the series, we examine how shared editing could become a reality in Drupal and the unique challenges of implementing shared editing in the CMS context.

Shared editing in Drupal?

If you haven’t already, I recommend checking out the first and second installments of this blog series on shared editing with Gutenberg, because those blog posts outline some of the motivations for collaborative content creation in CMSs and some of the difficulties the initiative faced in supporting shared editing. In addition, over the course of those blog posts we inspected how key features like peer-to-peer collaboration and offline editing need to be part and parcel of any out-of-the-box implementation of collaborative editing in content management systems.

How collaborative editing could become a reality in Drupal

Drupal may have a similar background to WordPress’s, but its shared editing story is comparatively immature. Nevertheless, there are working contributed implementations of shared editing in Drupal, including an integration of ProseMirror, a shared editing tool we covered in a previous Tag1 Team Talks episode. Tag1 Consulting recently evaluated ProseMirror along with a host of other solutions that would enable such use cases in Drupal, including a complete integration with Drupal’s roles and permissions system.

Tag1 recently produced an implementation of shared editing in Drupal for a Fortune 50 client that leverages a client–server model without peer-to-peer capabilities. This Tag1 client also needed to edit code collaboratively in addition to content, a use case enabled by CodeMirror, complete with code hints that provide next steps for editors. Michael cites this as yet another fascinating example of how Yjs can enable effective collaborative editing in a variety of settings. Fortunately, even Drupal sites are now implementing shared editing across the ecosystem, lending even more weight to the notion of collaborative content creation eventually becoming table stakes in content management.

Yjs across ecosystems

Many existing editors such as ProseMirror now integrate gracefully with Yjs. Fortunately, the y-webrtc plugin in the Yjs ecosystem is a relatively lightweight and straightforward add-on that could become part of the Drupal and WordPress ecosystems in the form of plugins as well. Once communication is successfully established between peers with these plugins, we can enable collaboration in any shared editor that Yjs supports, including ProseMirror, Quill, and others. This idea will doubtless be interesting to people who use different content editors, because the feature can be easily enabled in the CMS.
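
To make this concrete, below is a minimal sketch of how such a plugin could wire up peer-to-peer collaboration with Yjs and its y-webrtc provider. The room name and the shared-type key are illustrative assumptions, not part of any actual Drupal or WordPress plugin.

```typescript
import * as Y from 'yjs';
import { WebrtcProvider } from 'y-webrtc';

// One shared document per piece of content; the room name is illustrative.
const doc = new Y.Doc();
const provider = new WebrtcProvider('article-42', doc);

// Editor bindings such as y-prosemirror, y-quill, and y-codemirror attach to
// shared types like this one, so the same document can back different editors.
const body = doc.getText('body');

// React to edits as they arrive from peers over WebRTC.
body.observe(() => {
  console.log('Document is now:', body.toString());
});
```

Because the provider only handles transport, swapping y-webrtc for a server-based provider such as y-websocket would leave the editor bindings untouched.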

Challenges of building collaborative editing

As Kevin attests from experience, implementing shared editing is particularly challenging. Someone who builds collaboration features in client-side JavaScript may also need a server-side implementation of the same algorithm to handle communication between client and server. Consider, for instance, the case of implementing text collaboration that leverages operational transformation (OT), a topic covered in a previous Tag1 Team Talks episode. OT is a particularly challenging algorithm to implement, and the same algorithm needs to be made available in the client-side context on rich text editors as well, which is a difficult proposition. In other words, supporting OT in any universal manner becomes double the work.

Most of the time, especially given the advent in recent years of universal JavaScript (also known as isomorphic JavaScript), architects replicate the same client-side implementation of OT in a server environment. This sharing of code makes testing easier, because you can reuse the same libraries, and it ensures that the encoding of operations is precisely the same on the client as in the server context. Still, there are a host of issues with any universal approach, and WebSockets have not historically been well supported in well-worn content management systems like Drupal and WordPress.

Other alternatives for shared editing

One of the important considerations in Gutenberg’s implementation of collaborative editing was determining whether any other compelling alternative solutions were available. Kevin states during our Tag1 Team Talks episode that CKEditor 5 is an exceptionally well-built tool for editing and seamlessly communicates with CKEditor’s back-end solution, CKSource, to make any content collaborative. CKEditor, in fact, leveraged the ideas behind collaborative editing much earlier, albeit with a centralized server approach. Unfortunately, however, one of the most significant issues inherent to CKEditor is that you can only use its proprietary CKSource server environment to enable shared editing. Nevertheless, CKSource washes away many of the concerns around shared editing, such as scalability.

Worse still, CKEditor and CKSource are not viable solutions for content management systems like Drupal and WordPress, because every Drupal or WordPress implementation would need to set up CKSource and pay for a subscription. Kevin also highlights the issue of data ownership and asks whether collaborators can truly own their content if it is always sent to a separate central server. If the server goes offline or the content is lost, there is no way to restore it, particularly from a local copy.

As an example, many companies employ intranets to host content. Consider a company that routinely works with important government documents and secret information, all of which is kept secure through various means. If you want this content to be collaboratively edited, but the content comprises highly privileged information, you need either to host it on a third-party server or to pay a platform to host it on-premises. This makes maintainability, security, and software upgrades a potentially intractable challenge.

Michael also cites the fact that CKSource was implemented first and foremost for CKEditor, and unless CKEditor is available as a default text editor in a content management system, as it is in Drupal, there is no way to couple shared editing with CKSource for these open-source projects. Because Yjs is not only network-agnostic but also editor-agnostic, the same technology can transform any aspect of your application into something collaborative. Whereas Yjs supports collaborative use cases beyond text, such as layout building and drawing, CKSource makes this much more difficult.

Nonetheless, Kevin states during our show that CKSource and CKEditor together have led the market and offer the most robust end-to-end solution for collaborative text editing in systems leveraging CKEditor. This exemplifies one of the key advantages of Yjs: not only can Yjs enable collaboration beyond text with use cases like drawing or even three-dimensional modeling, its agnosticism also means that integrations with a rich variety of editors are not only possible but easy to implement.

Conclusion

The concept of collaborative editing is nothing new to content editors around the world, but it is certainly a new idea for content management systems like Drupal and WordPress, which have never offered shared editing out of the box. Nonetheless, the motivations are clear, particularly in this modern age when offline editing and decentralized applications are in vogue. Thanks to the open-source real-time collaboration framework Yjs, however, Drupal and WordPress may soon have new approaches available to support shared editing natively. With the help of Yjs’ creator Kevin Jahns, Tag1 has enabled collaborative editing to become a reality in Gutenberg, one of the most important projects ever to emerge on the WordPress landscape.

In this third installment of this multi-part blog series on shared editing in Gutenberg, we examined some of the challenges facing a similar approach in Drupal. Nonetheless, Drupal and WordPress, along with Yjs, benefit from rich ecosystems that could enable the sort of exciting innovation that comes with the prospect of shared editing. In the next and final installment of this series, we discuss the future of collaborative editing in Gutenberg, the reaction of the Gutenberg team, and how you yourself can get involved.

Special thanks to Kevin Jahns and Michael Meyers for their feedback during the writing process.

Photo by CoWomen on Unsplash

Mar 11 2020

Table of Contents

Accessibility in Claro
--Color
--Size and spacing
--Enabling better accessibility reviews
Scope
What makes Claro unique
--PostCSS
Conclusion

Throughout Drupal's existence, no other changes have made as much of an impression on users as refreshes of the administrative interface. Content editors and site builders have a long list of expectations for the editorial experience they wish to use, and surveys and tests had shown that Seven was showing its age. Now, thanks to Claro, the new administration theme for Drupal 8, user interfaces for all editors in Drupal are optimized for accessibility and usability and backed by a realistic roadmap for the future.

Recently, Cristina Chumillas (Claro maintainer and Front-End Developer at Lullabot), Fabian Franz (Senior Technical Architect and Performance Lead at Tag1) and Michael Meyers (Managing Director at Tag1) sat down with me (Preston So, Editor in Chief at Tag1 and author of Decoupled Drupal in Practice) for a Tag1 Team Talks episode about the Claro administration theme and its bright future. In this multi-part blog series, we track the journey of Claro from beginnings to its current state. In this third installment, we uncover some of the ways in which Claro has improved outcomes for accessibility and stayed innovative in a fast-changing front-end landscape.

Accessibility in Claro

If you haven't yet read the first and second installments of this blog series, it's a good idea to go back to better understand where Claro came from and how it came to be. In this section, we examine some of the ways in which Claro prizes accessibility and ensures that Drupal users with disabilities can easily operate Drupal.

Color

Claro: Drupal Core Color Module

During our conversation, Fabian asked Cristina: "Will there be a dark mode (in Claro)?" Though a dark mode toggle isn't part of the minimum viable product (MVP) for Claro, Cristina agrees that it could be a valuable addition to the theme, initially as part of the contributed theme ecosystem and eventually as part of core. The notion of a dark mode brings up one of the central considerations for accessibility: the use of color.

Currently, an issue in the Drupal Ideas queue proposes removing the Color module from Drupal core; the module has been a mainstay of Drupal for many years and allows the customization of Drupal themes through color variables. The proposal sparked a wide-ranging discussion about customizing the colors of administration interfaces for each customer. Though this is undeniably an interesting feature, the issue with offering color customization is that it grants far too much control to users, who may configure color combinations that are inaccessible.

Size and spacing

Another piece of feedback that Claro received concerned what some perceived as too much additional space between form elements. Commenters wanted not only to see the elements displayed in a more compact manner but also to be able to customize the extent to which elements were spaced from one another, in addition to font sizes across the theme.

All of this raised the important question of user customization of elements like size, spacing, and color, and whether this sort of customization should happen at the theme level instead of in the user interface. Accessibility, after all, is hampered if someone sets a color scheme with insufficient contrast for certain users. Claro also increased the font size across the board in Drupal's administrative interface, as smaller font sizes are less accessible.

Enabling better accessibility reviews

A few months into development, the Claro team worked with Andrew Macpherson, one of Drupal's accessibility maintainers, to review the designs for the administration theme and found that important changes were necessary for elements like text fields. An important discovery the Claro team made was that providing a PNG or PDF file to design and accessibility reviewers is far less useful than a working implementation, because static files do not allow for interaction testing.

Scope

Another key finding from Claro's development revolves around scope. In open source, one of the fundamental constraints on project scope is the number of contributors available to implement a given design. A tight release date also encourages a tighter scope, and the team decided that having Claro ready for Drupal 8.8 was of paramount importance, as otherwise Claro might not have been eligible to become a stable theme in Drupal 8.8 or 8.9.

In the end, the team created a list of components that would need to migrate from one theme to the other, and though some of these continue to rely on Seven styles, they remain usable in Claro. By drawing a line in the sand and focusing first and foremost on a revamp of the design, without the risk incurred by pursuing overly ambitious ideas like wildly new interfaces and layouts, Claro successfully entered Drupal core thanks in part to the constraints imposed by a narrow scope.

What makes Claro unique

Claro has been pushing the envelope when it comes to Drupal administration themes. Many common patterns now found on the web today are employed in Claro in lieu of the more "classic" components that Seven provided. Cristina says that Claro was "a chance to modernize the interface and get in a lot of patterns common around the web that make lives easier for users and I would say also for front-end developers."

PostCSS

Claro: Drupal Core PostCSS

One of the key ways in which Claro demonstrates a high degree of innovation in front-end development is its use of PostCSS for the administration theme. The Claro team explicitly decided not to use Sass as a Cascading Style Sheets (CSS) preprocessor, as is common in other Drupal themes. Instead, Claro uses PostCSS, a postprocessor for CSS, to provide features such as cross-browser support for CSS variables. Because Claro leverages PostCSS to process standard CSS, it will remain compatible well into the future.

Although Sass offered many features that plain CSS lacked several years ago, many of those features have since been integrated into CSS itself, and modern browsers now receive regular updates and adopt new specifications much more rapidly than before. This means that many capabilities Sass once provided, including variables, are already available natively in browsers — admittedly without some of Sass's unique conveniences, but also without the need for JavaScript to run a preprocessor.

Postprocessors, meanwhile, allow developers to write modern CSS and then emit all of the vendor prefixes needed to support the browsers in use. They turn CSS variables into functional values where necessary and convert the CSS you write into styles that work in all browsers, including Internet Explorer. In Fabian's opinion, PostCSS can be considered a bridge from the old to the new, much like a JavaScript shim in which one can use new language features while generating older JavaScript for compatibility.
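
As a rough illustration of that workflow, the sketch below runs plain CSS through PostCSS's JavaScript API with two widely used plugins. The plugin selection and the variable names are illustrative assumptions, not necessarily Claro's actual build configuration.

```typescript
import postcss from 'postcss';
import autoprefixer from 'autoprefixer';
import postcssCustomProperties from 'postcss-custom-properties';

// Plain, modern CSS as input; the variable and selector are illustrative.
const input = `
:root { --color-primary: #003cc5; }
.button { background: var(--color-primary); user-select: none; }
`;

postcss([postcssCustomProperties({ preserve: true }), autoprefixer])
  .process(input, { from: undefined })
  .then((result) => {
    // The output keeps var() for modern browsers, adds a resolved fallback
    // value for older ones, and emits vendor prefixes such as
    // -webkit-user-select where the configured target browsers need them.
    console.log(result.css);
  });
```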

Conclusion

Claro: Drupal Available Updates

Claro is the new administration theme for Drupal 8, and it looks to be one of the most exciting developments for content editors and site builders in recent years, thanks to the design and development team's emphasis on accessibility, usability, and a narrow scope focused on a moderate redesign rather than a fundamental rework. Claro also demonstrates some of the key ways in which Drupal themes can be both accessible and innovative. Thanks to Claro's focus on accessible defaults, all users can successfully make use of the administration theme. In addition, with modern approaches like PostCSS, a postprocessor that offers features like vendor prefixes for all modern browsers, Claro shows that an innovative front end is possible for Drupal's administration layer.

In this multi-part blog series about Claro for Drupal 8, we have explored the history and origins of the Claro theme, some of the ways in which Claro addressed usability concerns, and how Claro has exemplified accessibility and front-end innovation. In this third installment, we spotlighted some of the ways in which Claro has allowed Drupal's administrative interfaces to become more accessible, including through font size, spacing between elements, and color contrast. We also inspected some of the additional features the Claro theme has added such as PostCSS, which obviates some of the need for CSS preprocessors thanks to the availability of features like CSS variables in modern browsers, and provides postprocessing to ensure vendor prefixes are present. In the following installment of this Claro blog series, we look ahead to possible improvements to Claro and Cristina's dream vision for Drupal's new flagship administration theme.

Special thanks to Cristina Chumillas, Fabian Franz, and Michael Meyers for their feedback during the writing process.

Photo by Hulki Okan Tabak on Unsplash

Mar 04 2020

Table of Contents

Inspirations for Claro
--Finding inspiration in many places
--Honoring Seven's legacy
Claro's current state
--A new future for Views UI?
The origins of Claro's design
--Keeping Claro's redesign lean
Conclusion

Across Drupal's storied history, perhaps the most noticeable — and visible — changes in the CMS's evolution have come from redesigns of the administrative interface that underpins every user's interaction with the Drupal editorial experience. Now, with the release of the Claro administration theme, which replaces the Seven theme and focuses on content author and site builder use cases, Drupal has reached another important inflection point in the history of its user experience. Claro is a theme that prizes accessibility, usability, and Drupal's legacy of extensibility.

This author (Preston So, Editor in Chief at Tag1 and author of Decoupled Drupal in Practice) had the privilege of moderating a wide-ranging conversation about Claro on a Tag1 Team Talks episode with Cristina Chumillas (Claro maintainer and Front-End Developer at Lullabot) and colleagues Fabian Franz (Senior Technical Architect and Performance Lead at Tag1) and Michael Meyers (Managing Director at Tag1). In this multi-part blog series, we traverse the trajectory of Claro from its origins to its current state and dig into its design and development processes to learn more about how this administration theme came to be. In this second installment, we focus on the inspiration behind and execution of Claro's design.

Inspirations for Claro

If you are unfamiliar with the Claro administration theme for Drupal 8, I highly recommend returning to the first installment in this blog series to learn more about Claro itself, its origins, and its goals to better understand the rest of this blog post. In this section, we explore some of the sources of inspiration for Claro and how modern interaction patterns underlined the urgency of a refresh for Drupal's vaunted administration interface.

Finding inspiration in many places

Roughly one year ago, during the beginning stages of the Claro development process, the team expressed an ambitious vision. At the time, Claro's architects were primarily focused on the content author persona. Cristina Chumillas recalls during our conversation that "ideally for a content author, you should have a drag-and-drop interface" or something snazzy that would not only match modern expectations but also convey a sense of futuristic panache. The team worked towards an interface that approximated some of the patterns found in WordPress Gutenberg, as they were finding considerable inspiration in that project.

However, the obstacle Drupal presents is that delivering the impressive transitions and rich interactivity users now expect requires considerable JavaScript. In addition, replacing Drupal's entire administration interface with a dynamic JavaScript application was not feasible. As a result, the team took a step back to reconsider the technical options available to them — at the same time that Gutenberg's accessibility fiasco occurred — and determined that a better choice was to release something sooner, so that users could experiment with it rather than wait for foundations that might never stabilize.

Honoring Seven's legacy

Another important consideration that lent urgency to the endeavor was the need to refresh at least some of the color scheme used in Drupal's user interface, as this was the source of some comments about the aesthetic outdatedness of the Seven theme. In the end, Claro is what Cristina calls a "grown-up" version of Seven with color changes, made possible over the course of many months with the help of many volunteers.

All in all, steering Claro's vision away from reinventing the wheel for Drupal's front end allowed a more realistic and achievable end result to emerge and replace the Drupal user interface in a more sustainable fashion. Because Claro is ultimately based on the same stylesheet as Seven, the key advantage Claro has over a solution such as a JavaScript application is that stability is not a concern.

Claro's current state

Claro is a beta theme in Drupal 8.8 and is slated to become a stable core theme within several releases. As Cristina states in our Tag1 Team Talks episode, Claro is perfectly usable at this very moment, and several Drupal sites in production are already making use of it on their own administration layers. The good news about the widening adoption of and interest in the Claro administration theme is that there is growing momentum for larger-scale modifications to many more interfaces than were touched initially.

A new future for Views UI?

One of the chief examples of this momentum is strong interest in updating Views UI, which remains one of the interfaces Drupal users come into contact with most frequently. But as with all projects, Claro had constraints of time and scope, and the team was loath to deem the stability of a redesigned Views UI a showstopper for a stable release of Claro. That, Cristina argues, would have led to a never-ending initiative without a finish line in sight.

Claro: Existing Views UI

Cristina states during our webinar that the eventual vision is to reach the point where all of Drupal's user interfaces have been redesigned from their previous Seven incarnations, at which point several other initiatives will kick off. But Cristina also rightfully adds that an initiative responsible for refreshing an administration interface cannot place every conceivable user interface under its umbrella if there is any hope of releasing a finished product within five years.

During our discussion, Fabian joined Cristina in recommending iterative initiatives that regularly introduce small amounts of new functionality as the most sustainable form of initiative coordination. After all, at some point we must all decide that we are finished and there is no more to be done. In fact, the Claro team originally envisioned adding a new user role and even a second dashboard for content authors with draggable blocks, but questions quickly spiraled out of control, with suggestions for modifying the entire layout of Drupal's administration interface. Cristina admits that many portions of Drupal's administration layer will require initiatives in their own right to handle the inevitable demands that a redesign of a complex user interface brings.

The origins of Claro's design

The initial ideas for the design of the Claro administration theme emerged almost from the beginning, as Cristina recalls during our Tag1 Team Talks episode. During those first discussions, it became readily apparent that any new administration theme had to look sufficiently different from Seven that the vast majority of Drupal users would immediately spot the difference. But Claro's redesign had many limitations, including Drupal's own brand: changing the color scheme to one involving red and green that bore no relation to Drupal's established palette was considered anathema to Drupal's brand identity.

Keeping Claro's redesign lean

Claro: Manage form display

While the Claro team sought lighter and more vivid colors for Claro and looked for inspiration in sources like Material Design and other design systems available online, they also understood that they could not reinvent the wheel with a revolutionary design, due not only to technical considerations but also to aesthetic concerns. After all, administration interfaces are not intended to be the center of attention; Cristina efficiently defines the optimal administration interface as "something that works and that people understand."

Because of this sentiment across the team, Claro's architects decided to eschew fancy buttons and fieldsets, such as one particular interaction pattern in which a text field would change its outline color and display an underline to add dynamism to the page. During the first accessibility review, however, the consensus was that an overly "fancy" design was not appropriate for all users and that sticking to basic design principles was a better approach. Trying to keep things focused and tight, the Claro team sought a brighter look, initially employing a neon blue that was eventually darkened due to accessibility concerns. In the end, most of Claro's design elements came together based on inspiration from other designs already in the wild, and the team did their utmost to adhere to known patterns.

Conclusion

With the new Claro administration theme, Drupal 8 joins the growing list of content management systems that have revamped their look and feel for their most important users: the site builders and content authors that use the CMS on a daily basis. Claro demonstrates the sort of scrutiny on personas that also suggests substantial attention paid to the needs of all users, whether that entails accessibility, usability, or extensibility. And thanks to the give-and-take that characterizes accessibility reviews and design sprints, Claro emerged from its development with an efficient design that recalls many known patterns and is therefore a familiar and eminently learnable interface.

In this blog post, we discovered some of the inspiration that Claro built upon to produce its subtle and modern design. We also discussed some of the motivations behind Claro's design decisions, including the choice to eschew a neon blue color and to simplify interactions with text fields in order to limit unnecessary distractions. In the third installment of this multi-part blog series about Claro, we spotlight some of the accessibility considerations that went into Claro's design and development, among other topics.

Special thanks to Cristina Chumillas, Fabian Franz, and Michael Meyers for their feedback during the writing process.

Photo by Aaron Greenwood on Unsplash

Feb 27 2020

Table of Contents

Anxieties and aspirations for decoupled Drupal
--Overly different front-end worlds
--Web Components for better encapsulation
--The age-old debate: Decouple Drupal itself?
What’s next for decoupled Drupal?
--Security remains a competitive advantage
--An approach for dynamic components
--Further in the future: A more real-time Drupal
Conclusion

As another decade comes to a close in Drupal’s long and storied history, one important trend has the potential to present amazing — or unsettling, depending on how you look at it — opportunities and shifts in the Drupal ecosystem. That trend is decoupled Drupal. Nowadays, there is no limit to the content prolifically produced by the community on the topic, whether in the form of a comprehensive book on the subject or a conference focused on the theme. Today, decoupled Drupal implementations seem, at best, a door opening onto new approaches and ideas and, at worst, a trend here to stay for good without proper consideration of its potential externalities.

On one of the latest Tag1 Team Talks episodes, we gathered four insider experts on decoupled Drupal, including yours truly (Preston So, Editor in Chief at Tag1 and author of Decoupled Drupal in Practice), Sebastian Siemssen (Senior Architect and Lead React Developer at Tag1 and maintainer of the GraphQL module), Fabian Franz (Senior Technical Architect and Performance Lead at Tag1), and Michael Meyers (Managing Director at Tag1). Over the course of the wide-ranging discussion, we revealed some of our reminiscences, misgivings, and hopes about decoupled Drupal in the years to come.

What trajectory will decoupled Drupal take over the coming decade? In the second part of this two-part blog series, we take a closer look at how decoupled Drupal has already revolutionized how we think about key features in Drupal, with Fabian and Sebastian looking to reshape how Drupal renders markup and dynamic components, and all of us highlighting security as one of decoupled Drupal’s competitive advantages.

Anxieties and aspirations for decoupled Drupal

One of the subjects we focused our attention on was our feelings on the current state of decoupled Drupal and how it is impacting the way we implement Drupal architectures today. Fabian, in particular, has mixed feelings about decoupled Drupal at present. While decoupled Drupal confers significant advantages, as we saw in the previous blog post, it also introduces disadvantages that invite skepticism. I wrote about the risks and rewards of decoupled Drupal back in 2016, and many of those pros and cons still resonate deeply today.

The loss of administrative features that are present in Drupal’s presentation layer but absent from decoupled front ends is an important rationale for many who believe monolithic architectures remain wholly viable. For instance, Fabian cited as evidence a recent Tag1 project that leveraged a decoupled Drupal architecture but also required some reinvention of the wheel to reimplement contextual links, a key front-end Drupal feature, in the decoupled presentation layer. Nonetheless, decoupled Drupal remains an excellent choice for mobile applications, digital signage, and even conversational interfaces.

Overly different front-end worlds

One of the issues with decoupled Drupal is the fact that developers are attempting to solve problems that, according to Fabian, should be solved within Drupal itself.

As Fabian argues in our recent webinar, “One of the mistakes we made in core is that we are mixing data and presentation within our render trees and entities.” The fact that the experience of React developers differs so substantially from Drupal theme developers further highlights this issue.

Web Components for better encapsulation

At DrupalCon Amsterdam, Fabian presented a session about Web Components that proposed a path forward to narrow the gap between front-end developers who are pursuing new paradigms such as React and Drupal developers who wish to implement components in Drupal front ends. Web Components are compelling because they provide for better-encapsulated code but still allow for data to come from any source, including Drupal.
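
As a minimal sketch of that encapsulation, the custom element below keeps its markup inside a shadow root while fetching its data from any JSON endpoint. The element name and the endpoint are hypothetical, invented for illustration rather than drawn from Fabian’s session.

```typescript
// A small custom element: presentation is encapsulated in a shadow root,
// while the data can come from any source, including a Drupal site.
class ArticleTeaser extends HTMLElement {
  async connectedCallback(): Promise<void> {
    const shadow = this.attachShadow({ mode: 'open' });
    // The default endpoint is illustrative; any JSON-emitting source works.
    const url = this.getAttribute('src') ?? '/jsonapi/node/article';
    const response = await fetch(url);
    const { data } = await response.json();
    shadow.innerHTML = `<h2>${data[0]?.attributes?.title ?? 'Untitled'}</h2>`;
  }
}

customElements.define('article-teaser', ArticleTeaser);

// Usage in any markup, whether rendered by Drupal or not:
// <article-teaser src="https://example.com/jsonapi/node/article"></article-teaser>
```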

In Fabian’s view, Drupal needs to do much more to bridge the gap between new front-end ecosystems and Drupal, together with a finely tuned developer experience that doesn’t require significant new learning but does allow for a tighter relationship between the front end and the back end. For Fabian, it is essential that the Drupal community scrutinize some of the approaches to componentization found in JavaScript ecosystems.

The age-old debate: Decouple Drupal itself?

Sebastian argues that splitting the back end and front end of Drupal at the kernel level is the right approach. There is significant weight in Drupal’s codebase that presents challenges not only for the administrative interface that many Drupal users work with on a daily basis but also for the API layer within Drupal.

In fact, one of the original goals of the Web Services and Context Core Initiative (WSCCI), whose trajectory I cover extensively in Decoupled Drupal in Practice, was to recast core as a slim kernel that provides low-level functionality completely isolated from user interface components.

What’s next for decoupled Drupal?

So what’s next for decoupled Drupal? How will we as practitioners modify the approaches we take and the paradigms we utilize as decoupled Drupal continues to gain steam across the community? For this, I requested help from Fabian and Sebastian to offer their perspectives on some of the possible directions that decoupled Drupal might take as we traverse new territory.

Security remains a competitive advantage

One of the more enticing elements of decoupled Drupal is the fact that it can facilitate favorable security outcomes. Thanks to the rapidly evolving advantages offered by new JAMstack (JAM stands for JavaScript, APIs, and markup) and serverless approaches, security can become much less of a concern in decoupled architectures.

With a content server like the one Drupal provides, argues Fabian, we can feasibly implement JAMstack architectures that yield static HTML pages as the end result rather than dynamic applications with potential security vulnerabilities — or, worse yet, a monolithic architecture in which an attacker could gain access to the entire Drupal system from end to end by exploiting a front-end weakness. After all, a static HTML page carries no executable server-side code to attack. For Fabian, this is one of the most important and often overlooked advantages of the decoupled Drupal approach.

An approach for dynamic components

These innovations in the front-end development landscape are among the reasons Fabian cites for taking a closer look at the JavaScript community and how it handles encapsulated components. The crucial missing piece in any JavaScript implementation is sourcing data — and this is precisely the gap Drupal can fill.

Sebastian agrees that decoupling data from internal Drupal objects like fields and entities could resolve many issues in Drupal, but he also argues that one of the key disadvantages of Drupal in its current conception is its push-based model. As an illustration, when Drupal fulfills a request, it runs a massive processing engine that pulls data from its sources and pushes it into a rendering engine.

He goes on to argue that reversing Drupal’s render pipeline approach and pulling data from graphs (e.g. through GraphQL) as the template engine needs it, just in time, would allow Drupal to evolve to a more modern, pull-based approach. To Sebastian, this pull-based mechanism is precisely what an API-first architecture promises, and he cites recent innovations surrounding GraphQL in Twig as the initial steps toward this exciting vision.

Further in the future: A more real-time Drupal

One of the areas that has long fascinated me is the potential for Drupal to evolve into a more real-time system, which would confer significant advantages when it comes to dynamic components. Fabian argues that Web Components are a key step on the road to real-time Drupal, and a “components everywhere” approach could allow for this to occur. With a component-based approach, the only additional piece necessary would be a Node.js server that updates components whenever the underlying data is updated.

For all of Drupal’s benefits, real-time capabilities remain elusive. As Sebastian rightly states, real-time approaches are largely infeasible in PHP. After all, real-time data is something that Drupal is ill-equipped to handle itself; any real-time functionality needs to be handled on top of Drupal in a non-PHP layer. One option that Sebastian suggests is the combination of a Node.js server, Drupal’s PHP environment, and Redis, which would facilitate a push-based approach. Saving in Drupal could send a notification to a PubSub environment such as Redis or Kafka, and a Node.js microservice could update the client through a WebSocket.

Though this would mean forwarding Drupal’s APIs to a Node.js environment, there are fortunately several ways to approach this that keep Drupal’s benefits, such as the granular caching in Drupal 8 provided through the cache tag system. And the closer the data source is to an API like one built in GraphQL, the more can be done to optimize database queries and caching. Fortunately, GraphQL comes with WebSocket and subscription support, and it would be a compelling choice given the ongoing work in the GraphQL specification to send GraphQL queries and subscriptions in the same response.
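
A minimal sketch of the notification path Sebastian describes might look like the following, assuming the ioredis and ws packages and an illustrative channel name; a production setup would also need authentication and awareness of Drupal’s cache tags.

```typescript
import Redis from 'ioredis';
import { WebSocketServer, WebSocket } from 'ws';

// Drupal publishes to this channel on save; the channel name is illustrative.
const CHANNEL = 'content-updates';

const subscriber = new Redis();
const wss = new WebSocketServer({ port: 8080 });

subscriber.subscribe(CHANNEL);

// Fan each published message out to every connected browser over WebSockets.
subscriber.on('message', (_channel, payload) => {
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(payload);
    }
  }
});
```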

Conclusion

Thanks to decoupled Drupal, the Drupal community is in the midst of a renaissance that is revolutionizing the way we think about the front end of Drupal while simultaneously challenging our long-held assumptions about how Drupal should look and operate. While this is a testament to Drupal’s considerable longevity, it also presents obstacles if we wish to guarantee the future of Drupal for the coming decade. Fortunately, owing to the vast amount of resources surrounding decoupled Drupal, including the book Decoupled Drupal in Practice and even a conference dedicated to the subject, more and more discussion around the topic will yield significant fruit for the promising future of Drupal.

While no one can prognosticate how decoupled Drupal will look in 2030, at the end of the current decade we can certainly see that the story of decoupled Drupal is only just getting started. I strongly encourage you to check out our recent Tag1 Team Talks episode to learn more about what could substantially influence Drupal’s trajectory in the coming years. I, for one, believe that though we have come a long way, we still have much distance to cover and much thinking to do in order to prepare for a more decoupled Drupal. Nevertheless, the solutions recommended by Fabian and Sebastian excite me, and I hope they inspire our readers too.

Special thanks to Fabian Franz, Michael Meyers, and Sebastian Siemssen for their feedback during the writing process.

Photo by Jesse Bowser on Unsplash

Feb 25 2020

Table of Contents

Defining decoupled Drupal
--The pros and cons remain largely unchanged
--A competitive advantage for commercial Drupal
Aside: Our stories in decoupled Drupal
GraphQL as a game-changer
--GraphQL queries at scale
--What about JSON:API vs. GraphQL?
--GraphQL v4 and custom schemas
Conclusion

Over the last five years, decoupled Drupal has grown from a fringe topic among front-end enthusiasts in the Drupal community to something of a phenomenon when it comes to coverage in blog posts, tutorials, conference sessions, and marketing collateral. There is now even a well-received book by this author and a yearly conference dedicated to the topic. For many Drupal developers working today, not a day goes by without some mention of decoupled architectures that pair Drupal with other technologies. While Drupal’s robust capabilities for integration are nothing new, there have been comparatively few retrospectives on how far we’ve come on the decoupled Drupal journey.

Recently, your correspondent (Preston So, Editor in Chief at Tag1 and author of Decoupled Drupal in Practice) sat down for a no-holds-barred discussion about decoupled Drupal on Tag1 Team Talks with three other early insiders in the decoupled Drupal mindspace: Sebastian Siemssen (Senior Architect and Lead React Developer at Tag1 and maintainer of the GraphQL module), Fabian Franz (Senior Technical Architect and Performance Lead at Tag1), and Michael Meyers (Managing Director at Tag1).

During the conversation, we spoke about the unexpected ways in which decoupled Drupal has evolved and where it could go in the near and far future. In this two-part blog series, we look back and peer forward into decoupled Drupal’s trailblazing and disruptive trajectory in the Drupal ecosystem, starting with Sebastian, who created Drupal’s GraphQL implementation, and Michael, who has witnessed firsthand the paradigm shift in Drupal’s business landscape thanks to the benefits decoupled Drupal confers.

Defining decoupled Drupal

As it turns out, defining decoupled Drupal can be tricky, given the diverse range of architectural approaches available (though these are covered comprehensively in Preston’s book). In its simplest form, decoupled Drupal centers on employing Drupal as a provider of data for consumption by other applications. In short, the “decoupling” of Drupal occurs when developers opt to consume Drupal data by issuing requests to APIs that emit JSON (or XML) rather than going through Drupal’s native presentation layer.
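
For instance, a consumer of Drupal’s JSON:API module (served from its default /jsonapi path) might look like the hedged sketch below; the article content type and the base URL are illustrative.

```typescript
// Fetch article titles from a Drupal site over JSON:API rather than
// rendering them through Drupal's native presentation layer.
async function fetchArticleTitles(baseUrl: string): Promise<string[]> {
  const response = await fetch(`${baseUrl}/jsonapi/node/article`);
  if (!response.ok) {
    throw new Error(`JSON:API request failed: ${response.status}`);
  }
  // JSON:API resource objects expose entity fields under `attributes`.
  const { data } = await response.json();
  return data.map((node: { attributes: { title: string } }) => node.attributes.title);
}

fetchArticleTitles('https://example.com').then(console.log);
```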

As Fabian notes in the webinar, there are multiple “flavors” of decoupled Drupal that have emerged. Fully decoupled Drupal implementations aim to replace the entire presentation layer of Drupal with a distinct application. Progressively decoupled Drupal, an idea first promulgated by Drupal project lead Dries Buytaert in a 2015 blog post, recommends leveraging API calls to provide for components in the existing Drupal front end rather than an entirely separate presentation layer.

The pros and cons remain largely unchanged

Decoupled Drupal presents both advantages and disadvantages that are worth considering for any implementation, and these are also covered in Decoupled Drupal in Practice. Fully decoupled Drupal, for instance, means jettisoning a great deal of what makes Drupal such an effective web framework by limiting its feature set substantially to solely the APIs for retrieving data and an administrative interface for content management.

Though decoupling Drupal inevitably means losing certain functionality, such as contextual links, in-place editing, and layout management, it does confer the opportunity to create something entirely custom atop the data model that Drupal provides, as Sebastian states in our Tag1 Team Talks episode. Nonetheless, while decoupled Drupal introduces unprecedented flexibility, it also goes beyond a simple separation of the presentation layer from the data layer by rendering Drupal itself more replaceable. The notion of replaceability betrays an undercurrent of anxiety surrounding decoupled Drupal, and it presents risks for Drupal’s position in the CMS market.

A competitive advantage for commercial Drupal

Nonetheless, from the commercial standpoint and the customer perspective, there is no question that decoupled Drupal is a boon for ambitious digital experiences. For instance, during our conversation, Michael argues that decoupled Drupal actually represents a competitive advantage for Drupal in the commercial space rather than an erosion of its value. The narrative of integration in which Drupal has always trafficked has long been influential in swaying skeptical stakeholders. In short, in this view, Drupal’s core REST API, JSON:API, and GraphQL only strengthen this flexibility to integrate.

As a matter of fact, Tag1 Consulting’s yearslong work with Symantec supports this notion. Together with Tag1, Symantec embarked on a migration from version to version of Drupal in a nontraditional way that was only made possible thanks to decoupled Drupal technologies. By providing multiple front ends across Drupal versions, Symantec and Tag1 succeeded in both accelerating and effectively managing the migration to Drupal 8. Michael notes that from a client standpoint at Tag1, decoupled Drupal has been a valuable and sought-after asset; if anything, it has only increased evaluators’ interest in the platform.

Aside: Our stories in decoupled Drupal

Sebastian, Fabian, and I also reminisced about our respective stories in the decoupled Drupal landscape and how it has impacted the trajectory of our work even today. Arguably among the first in the Drupal community to leverage a decoupled architecture, Fabian admits building a decoupled site that consumed unstyled HTML snippets as far back as 2009, with other data coming from multiple sources. We can certainly forgive the fact that he opted to use Flash for much of the front end in that build.

Early in 2015, Sebastian found that Facebook’s release of GraphQL as a new specification made everything “click” for him, as it filled one of the crucial gaps in Drupal’s APIs. That same year, Sebastian began working on GraphQL for PHP, basing many of his ideas on the recently open-sourced GraphQL JavaScript library. Thanks to Sebastian’s tireless work, the GraphQL module for Drupal first appeared as a codebase on Drupal.org in March 2015 and has since skyrocketed in interest and popularity.

My own journey is much less glamorous, as I assisted Drupal project lead Dries Buytaert with his initial thought leadership on the subject of decoupled Drupal back in 2015. At the time there was considerable hand-wringing about what disruptive advances in the front-end development world could mean for Drupal. Over the course of time, my perspectives have evolved considerably as well, and I believe there are many excellent use cases for monolithic implementations, something I stress in my book about decoupled Drupal. Today, I help run the biggest and oldest conference on decoupled and headless CMS architectures, Decoupled Days, which is now entering its fourth year.

GraphQL as a game-changer

One of the key questions for decoupled Drupal practitioners has been how to leverage the best that Drupal has to offer while also accessing some of the best ideas emerging from the API and front-end development spaces. For instance, among Drupal’s most notable selling points is the fact that not only does Drupal benefit from an excellent administrative interface; it also offers a robust customizable data model that can be crafted and fine-tuned. Entity API and Field API, for instance, have long been some of the most vaunted components of the back-end Drupal developer experience.

Tag1 recently saw this firsthand on a project that employs Laravel, a more lightweight PHP framework for websites that is far less opinionated than Drupal. The Laravel project required light layers of entities and fields, and Fabian remarks in our recent webinar that these are particularly straightforward to implement and validate in Drupal. GraphQL adds another fascinating dimension to this by allowing Drupal to handle the heavy lifting of important features like field validation while permitting client-tailored queries.

GraphQL queries at scale

Sebastian describes GraphQL as a reversal of the traditional relationship between the server and client. Rather than the client being forced to adhere to what the server provides, the server declares the data possibilities that it is capable of fulfilling, and based on these catalogued possibilities, the client defines exactly what it needs on a per-request basis. The client sends the query to the server in a particular format, which Sebastian characterizes as much like the “JSON you want returned from the API but only the keys.” Thereafter, the GraphQL API inserts the values and returns a response with precisely the same shape.
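
To make that shape-matching concrete, here is a minimal sketch of a GraphQL exchange. The article type and its fields are hypothetical and not drawn from any particular Drupal schema.

```graphql
# The client asks for exactly the fields it needs, and nothing more.
{
  article(id: 42) {
    title
    author {
      name
    }
  }
}
```

The server then fills in the values and returns a response with precisely the same shape:

```json
{
  "data": {
    "article": {
      "title": "Decoupled Drupal in Practice",
      "author": {
        "name": "Preston So"
      }
    }
  }
}
```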

This brings us to one of the most fascinating aspects of GraphQL within the Drupal CMS, which often utilizes deeply nested relational data structures that are uniquely well-suited for modeling into a graph — which is the very core of what GraphQL does. One of the best aspects of GraphQL, moreover, is its capabilities for schema and query validation, with which we can prevent distributed denial-of-service (DDoS) attacks that attempt to overload queries with many nested layers by checking and analyzing the complexity of queries ahead of time, a task that is much easier thanks to the predictability of GraphQL schemas.

What about JSON:API vs. GraphQL?

While the story of GraphQL has been long-running in the Drupal ecosystem, it’s important to note the other big player in Drupal’s web services: JSON:API, which was just this year introduced into Drupal core. One of the biggest reasons why JSON:API has gained prominence is its relative stability and its extensive documentation. Sebastian argues, however, that developers coming from the React ecosystem are more likely to be familiar with GraphQL already, which also helps elevate its status within the decoupled Drupal ecosystem.

GraphQL v4 and custom schemas

One of the most anticipated releases of the GraphQL module is GraphQL v4 for Drupal, which means several significant changes for the ever-evolving GraphQL module. In the latest version of the GraphQL module, the GraphQL schema is fully in the control of Drupal developers, which is a substantial change from previous releases. After all, one of the best selling points for using GraphQL in the first place is schema customizability.

According to Sebastian, this means that you can decouple Drupal on the API access and contract level rather than foisting the data model and data internals of Drupal on the resulting GraphQL API, which may cause confusion among developers of consumer applications like React implementations.
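
As a rough illustration of that contract-first approach, here is a hypothetical schema a developer might hand-craft in GraphQL’s schema definition language, with resolvers wired up separately on the Drupal side; none of these type or field names come from the module itself.

```graphql
# A consumer-friendly contract that hides Drupal's internal naming
# (e.g., field_author) behind clean, stable names.
type Article {
  id: Int!
  title: String!
  summary: String
  author: String
}

type Query {
  article(id: Int!): Article
  articles(offset: Int = 0, limit: Int = 10): [Article!]!
}
```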

Conclusion

Perhaps the most intriguing development in the long-running saga of decoupled Drupal is the diversification and proliferation of a wide range of approaches, like GraphQL, that improve the developer experience in Drupal significantly. Besides its obvious benefits of greater flexibility on the side of developers, moreover, Drupal users and agency customers are also discovering the advantages of decoupled Drupal for a variety of use cases.

As we recently discussed during our conversation, which mixed nostalgia with forward thinking, decoupled Drupal is here to stay. Whereas in years past this statement may have caused considerable anxiety in the Drupal community, today it is emblematic of the ongoing explosion of options and capabilities that decoupled Drupal engenders for Drupal developers. In the second installment in this two-part series, Fabian, Sebastian, and I discuss some of the anxieties and worries we share about decoupled Drupal, before attempting to predict what could come next for this fast-evolving paradigm.

Special thanks to Fabian Franz, Michael Meyers, and Sebastian Siemssen for their feedback during the writing process.

Photo by Jesse Bowser on Unsplash

Feb 19 2020
Feb 19

Content collaboration has long been table stakes for content management systems like WordPress and Drupal, but what about real-time peer-to-peer collaboration between editors who need direct interaction to work on their content? The WordPress Gutenberg team has been working with Tag1 Consulting and the community of Yjs, an open-source real-time collaboration framework, to enable collaborative editing on the Gutenberg editor. Currently an experimental feature that is available in a Gutenberg pull request, shared editing in Gutenberg portends an exciting future for editing use cases beyond just textual content.

Yjs is both network-agnostic and editor-agnostic, which means it can integrate with a variety of editors like ProseMirror, CodeMirror, Quill, and others. This represents substantial flexibility when it comes to the goals of WordPress to support collaborative editing and the potential for other CMSs like Drupal to begin exploring the prospect of shared editing out of the box. Though challenges remain before bona fide shared editing is available off the shelf in WordPress and Drupal installations, Gutenberg is brimming with possibility as the collaboration with Tag1 continues to bear significant fruit.

In this Tag1 Team Talks episode that undertakes a technical deep dive into how the WordPress community and Tag1 enabled collaborative editing in the Gutenberg editor, join Kevin Jahns (creator of Yjs and Real-Time Collaboration Systems Lead at Tag1), Michael Meyers (Managing Director at Tag1), and your host Preston So (Editor in Chief at Tag1 and author of Decoupled Drupal in Practice) for an exploration of how CMSs around the landscape can learn from Gutenberg's work to empower editors to collaborate in real time in one of the most exciting new editorial experiences in the CMS world.

[embedded content]

Feb 18 2020
Feb 18

In the first part of our two-part blog series on Drush 10, we covered the fascinating history of Drush and how it grew into one of the most successful projects in the Drupal ecosystem. After all, many of us know the most common Drush commands by heart, and it’s difficult to imagine Drupal’s developer experience without Drush. Coming on the heels of Drupal 8.8, Drush 10 introduces a variety of new questions about the future of Drush, even as it extends Drush’s robustness many years into the future.

Your correspondent (Preston So, Editor in Chief at Tag1 and author of Decoupled Drupal in Practice) had the unique opportunity to discuss Drush’s past, present, and future with Drush maintainer Moshe Weitzman (Senior Technical Architect at Tag1), Fabian Franz (Senior Technical Architect and Performance Lead at Tag1), and Michael Meyers (Managing Director at Tag1), as part of the Tag1 Team Talks series at Tag1 Consulting, our biweekly webinar and podcast series. In the conclusion to this two-part blog series, we dig into what’s new in Drush 10, what you should consider if you’re making a choice between Drush and Drupal Console, and what the future for Drush might hold in store for Drupal’s first CLI.

What’s new in Drush 10

Drush 10 is the version of Drush optimized for use with Drupal 8.8. It embraces certain new configuration features available as part of the upcoming minor release of Drupal, including the Exclude and Transform APIs as well as config-split in core. Nevertheless, the maintainers emphasize that the focus of Drush 10 was never on new additive features; instead they endeavored to remove a decade’s worth of code from Drush and prepare it for many years to come.

To illustrate this fact, consider that Drush 9 was a combination of both old APIs from prior versions of Drush and all-new APIs that Drush’s maintainers implemented to modernize Drush’s commands. Therefore, while Drush 9 commands generally make use of the newly available APIs, if you call a site with Drush 9 installed from Drush 8, it will traverse all of the old APIs. This was a deliberate decision by Drush’s maintainers in order to allow users a year to upgrade their commands and to continue to interoperate with older versions. As a result of the removals of these older approaches, Drush 10 is extremely lean and extremely clean, and it interoperates with sites having Drush 9 but not those with earlier versions.

How should developers in the Drupal community adopt Drush 10? Moshe recommends that users upgrade at their earliest convenience through Composer, as Drush’s maintainers will be able to offer the best support to those on Drush 10.
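
For a Composer-managed project, that upgrade is typically a one-liner; the version constraint here is illustrative.

```bash
# Update the project's Drush requirement to the Drush 10 series.
composer require drush/drush:^10
```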

Why Drush over Drupal Console?

One key question that surfaces frequently concerning Drupal’s command-line ecosystem is the distinction between Drush and a similar project, Drupal Console, and when to use one over the other. Though Drush and Drupal Console accomplish a similar set of tasks and share similar architectures because they both depend on Symfony Console, there are still quite a few salient differences that many developers will wish to take into account as they select a command-line interface to use with Drupal.

Commands, for instance, are one area where Drush and Drupal Console diverge. Command authors will find that commands are written quite differently. Drush leverages an annotated command layer on top of Symfony Console where developers employ annotations to write new commands. Drupal Console instead utilizes Symfony Console’s approach directly, with a few methods attached to each command. However, this is a minor consideration, as there is little to no difference in the CLI’s functionality, and it is merely a stylistic preference.
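
To illustrate the stylistic difference, here is a minimal sketch of an annotated Drush command; the module namespace, command name, and greeting logic are hypothetical, not taken from any real module.

```php
<?php

namespace Drupal\example\Commands;

use Drush\Commands\DrushCommands;

/**
 * Drush command file for a hypothetical "example" module.
 */
class ExampleCommands extends DrushCommands {

  /**
   * Greets the given name.
   *
   * @command example:hello
   * @param string $name
   *   The name to greet.
   * @aliases ex-hello
   * @usage drush example:hello world
   *   Prints "Hello, world!".
   */
  public function hello($name) {
    // Annotations above describe the command; plain PHP does the work.
    $this->output()->writeln("Hello, $name!");
  }

}
```

A Drupal Console command, by contrast, implements configure() and execute() methods directly on a Symfony Console command class, which is the difference in style the maintainers describe.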

Drush and Drupal Console also differ significantly in their approaches to testing. Whereas Drupal Console performs unit testing, Drush prefers functional testing, with a full copy of both Drupal and Drush in their test suite. All Drush CLI commands are run on a real, fully functional Drupal site, whereas Drupal Console opts to leverage more mocking. There are admittedly many advantages to both approaches. But perhaps the most important distinction is of a less technical variety: Drupal Console has seen a bit less contribution activity as of late than Drush, which is an important factor to consider when choosing a CLI.

The future of Drush: Drush in core?

Though Moshe and Greg have committed themselves to maintaining and supporting Drush in the future, there are doubtlessly many questions about Drush’s roadmap that will influence decision-making around Drupal.

Drush’s inclusion in core has long been a key talking point with a variety of potential resolutions. Drupal core already includes two CLI commands unrelated to Drush, namely the site-install and quick-start commands, which are seldom used because they cover few key use cases. For instance, site-install only installs Drupal successfully on SQLite databases and lacks consideration for configuration. Drush’s maintainers are keen on considering a version of Drush in core, and an active discussion is ongoing.

Moreover, now that the starter template for Drupal projects is deprecated in favor of core-recommended, there is an opportunity for Drush 10 to serve as a key dependency in those starter templates, initially as a suggested dependency and eventually as a required one. Some of the key commands that a hypothetical Drush in core would encompass include enabling and uninstalling modules as well as clearing caches and logging in as a user. In the not-too-distant future, a Drupal user could start a Drupal project and immediately have Drush and all its commands available from the very outset.

Conclusion

Drush 10 is an inflection point not only in the history of Drupal but in how Drupal developers interact with Drupal on a daily basis. Thanks to its leaner, faster state, Drush 10 marks a new era for remote interactions with Drupal. Because Drush 10 has tracked closely to the Drupal 8 development cycle, many of the core changes present in Drupal 8.8 are reflected in Drush 10, and the ongoing discussion surrounding the potential of Drush in core will doubtlessly continue apace.

For many of us in the Drupal community, Drush is more than a cherished tool; it is one of the primary entry points into Drupal development. With the help of your contributions, Drush can reach even greater heights. Moshe recommends that new contributors get started by improving Drush’s documentation and creating content about Drush, whether blog posts or step-by-step tutorials that make learners’ experiences much better. The Drush maintainers are always happy to link to compelling content about Drush, to address bugs and issues in Drush’s issue queue, and to offer co-maintainership to prolific contributors.

While this was an exhaustive look at Drush 10, it by no means includes all of the insights we gathered from Moshe, and we at Tag1 Consulting encourage you to check out our recent Tag1 Team Talk about Drush 10 to learn even more about Drush’s past, present, and future.

Special thanks to Fabian Franz, Michael Meyers, and Moshe Weitzman for their feedback during the writing process.

Photo by Bill Oxford on Unsplash

Feb 13 2020
Feb 13

Table of contents

What is Tag1 Quo?
How does Tag1 Quo work?
What makes Tag1 Quo unique
--Immediate notice of vulnerabilities
--Backports of LTS patches
--Automated QA testing for Drupal 7 LTS
--Customer-driven product development
Conclusion

One of the challenges of securing any Drupal site is the often wide range of modules to track, security advisories to follow, and updates to implement. When it comes to Drupal security, particularly older versions of Drupal such as Drupal 6 and Drupal 7, even a slight delay in patching security vulnerabilities can jeopardize mission-critical sites. Now that Drupal 7 and Drupal 8 are fast approaching their end of life (EOL) in November 2021 (Drupal 6 reached end of life on February 24, 2016), the time is now to prepare your Drupal sites for a secure future, regardless of what version you are using.

Fortunately, Tag1 Consulting, the leading Drupal performance and security consultancy, is here for you. We’ve just redesigned Tag1 Quo, the enterprise security monitoring services trusted by large Drupal users around the world, from the ground up, with an all-new interface and capabilities for multiple Drupal versions from Drupal 6 to Drupal 8. Paired with the Tag1 Quo module, available for download on Drupal.org, and Tag1 Quo’s services, you can ensure the security of your site with full peace of mind. In this blog post, we’ll cover some of the core features of Tag1 Quo and discuss why it is essential for your sites’ security.

What is Tag1 Quo?

Tag1 Quo is a software-as-a-service (SaaS) security monitoring and alerting service for Drupal 6, Drupal 7, and Drupal 8. In addition, it includes long-term support (LTS) for Drupal 6 and is slated to commence backporting security patches for both Drupal 7 and Drupal 8 when both major versions no longer have community-supported backports. The centerpiece of Tag1 Quo integration with Drupal is the Tag1 Quo module, which is installed on your servers and communicates securely with our servers.

In addition, we can provide a self-hosted version of Tag1 Quo for sites hosted on-premise. This option involves setup costs and higher per-site licensing fees, so we encourage you to reach out to us directly if you’re interested in pursuing it.

How does Tag1 Quo work?

When a new module update is released on Drupal.org, or when a security advisory is announced that directly impacts your Drupal codebases, the Tag1 Quo system alerts you immediately and provides all of the necessary updates required to mitigate the vulnerability, with a direct link to the code you need to install to address the issue. Not only are these alerts sent over e-mail by default; they can also flow directly into your internal project workflows, including issue tracking and ticketing systems.

Tag1 Quo doesn’t stop there. As part of our long-term support (LTS) offering, when security releases and critical updates emerge, or when new security vulnerabilities are announced for community-supported Drupal versions, Tag1 audits these and determines whether the identified vulnerability also impacts end-of-life (EOL) versions of Drupal such as Drupal 6 and, in November 2021, Drupal 7. If those EOL versions are also susceptible to the vulnerabilities, we backport and test all patches to secure the EOL versions as well and distribute them to you through the Tag1 alert system.

Moreover, when a new security vulnerability is discovered in an EOL version of Drupal without an equivalent issue in a currently supported version, Tag1 creates a patch to rectify the problem and collaborates with the Drupal Security Team (several of whom are part of the Tag1 team) to determine whether the vulnerability also applies in the other direction, to currently supported versions of Drupal, so that they can be patched too. In short, no matter where a vulnerability occurs across Drupal’s versions, you can rest easy with Tag1 Quo’s guarantees.

What makes Tag1 Quo unique

Tag1 Quo features a centralized dashboard with an at-a-glance view of all of your Drupal sites and their current status, regardless of where each one is hosted. After all, most enterprise organizations juggle perhaps dozens of websites that need to remain secure. Such a perspective at an organizational level is essential to maintain the security of all of your websites. But the Tag1 Quo dashboard is only one among a range of capabilities unique to the service.

Immediate notice of vulnerabilities

Although several members of the Tag1 team are also part of the Drupal Security Team, and are aware of vulnerabilities as soon as they are reported, the Drupal Security Team’s policy is to collaborate privately to address each issue before revealing its nature publicly. This coordinated disclosure, in which public advisories and patches are issued before nefarious actors can successfully attack Drupal sites, exists for your safety and for the viability of released patches.

Thanks to our deep knowledge of the projects our clients’ websites rely on and of the security advisories that affect them, Tag1 has the distinction of being among the very first to notify Tag1 Quo customers as soon as an official announcement is released. Immediately afterwards, Tag1 Quo prepares you to apply the updates as quickly as possible to ensure your web properties’ continued safety.

Backports of LTS patches

If a fix for a vulnerability is reported for currently supported versions of Drupal but also applies to EOL versions, the patch must be backported for all Drupal sites to benefit from the patch. Unfortunately, this process can be complex and require considerable planning and analysis of the problem across multiple versions—and it can sometimes only occur after the patch targeting supported versions has been architected or completed. This means it may take more time to develop patches for LTS versions of Drupal.

Luckily, we have a head-start in developing LTS patches thanks to our advance notice of vulnerabilities in currently supported versions of Drupal. Despite the fact that we cannot guarantee that LTS updates will be consistently released simultaneously with those targeting supported versions, Tag1 has an admirable track record in releasing critical LTS updates at the same time as or within hours of the issuance of patches for supported Drupal versions.

Automated QA testing for Drupal 7 LTS

Throughout Drupal’s history, the community encouraged contributors to write tests alongside code as a best practice, but this rarely happened until it became an official requirement for all core contributions beginning with the Drupal 7 development cycle in 2007. Tag1 team members were instrumental in tests becoming a core code requirement, and we created the first automated quality assurance (QA) testing systems distributed with Drupal. In fact, Tag1 maintains the current Drupal CI (continuous integration) systems, which, by running concurrently, perform the equivalent of more than a decade of testing within a single calendar year.

Because the Drupal Association has ended support for Drupal 7 tests and decommissioned those capabilities on Drupal.org, Tag1 is offering the Tag1 Quo Automated QA Testing platform as a part of Tag1 Quo for Drupal 7 LTS. The service will run all tests for Drupal 7 core and any contributed module tests that are available. Where feasible and appropriate, Tag1 will also create new tests for Drupal 7’s LTS releases. Therefore, when you are notified of LTS updates, you can rest assured that they have been tested robustly against core and focus your attention on integration testing with your custom code instead, all the while rolling out updates with the highest possible confidence.

Customer-driven product development

Last but certainly not least, Tag1 Quo is focused on your requirements. We encourage our customers to request development in order for us to make Tag1 Quo the optimal solution for your organization. By working closely with you to determine the scope of your feature requests, we can provide estimates for the work and an implementation timeline. While such custom development is outside the scope of Tag1 Quo’s licensing fees, we allot unused Tag1 Quo consulting and support hours to minor modifications on a monthly basis.

Examples of features we can provide for custom code in your codebases include ensuring your internal repositories rely on the latest versions of dependencies and providing insights into your custom code through site status views on your Tag1 Quo dashboard. We can even add custom alerts to notify the specific teams and users responsible for these sites and route those alerts into support queues or other ticketing systems. Please get in touch with us for more information about these services.

Conclusion

The new and improved Tag1 Quo promises you peace of mind and renewed focus for your organization on building business value and adding new features. Gone are the days of worrying about security vulnerabilities and anxiety-inducing weekends spent applying updates. Thanks to Tag1 Quo, regardless of whether your site is on Drupal 6, Drupal 7, or Drupal 8, you can rest assured that your sites will remain secure and monitored for future potential vulnerabilities. With a redesigned interface and feature improvements, there is perhaps no other Drupal security monitoring service better tuned to your needs.

Special thanks to Jeremy Andrews and Michael Meyers for their feedback during the writing process.

Photo by Ian Schneider on Unsplash

Feb 12 2020
Feb 12

An effective administrative interface is table stakes for any content management system that wishes to make a mark with users. Claro is a new administration theme now available in Drupal 8 core thanks to the Admin UI Modernization initiative. Intended to serve as a logical next step for Drupal's administration interface and the Seven theme, Claro was developed with a keen eye for modern design patterns, accessibility best practices, and careful analysis of usability studies and surveys conducted in the Drupal community.

Claro demonstrates several ideas that not only illustrate the successes of open-source innovation but also the limitations of overly ambitious ideas. By descoping some of the more unrealistic proposals early on and narrowing the focus of the Claro initiative on incremental improvements and facilitating the work of later initiatives, Claro is an exemplar of sustainable open-source development.

In this closer look at how Claro was made possible and what its future holds for Drupal administration, join Cristina Chumillas (Claro maintainer and Front-End Developer at Lullabot), Fabian Franz (Senior Technical Architect and Performance Lead at Tag1), Michael Meyers (Managing Director at Tag1), and Preston So (Editor in Chief at Tag1 and author of Decoupled Drupal in Practice) for a Tag1 Team Talks episode about the newest addition to Drupal's fast-evolving front end.

[embedded content]


Feb 11 2020
Feb 11

If you’ve touched a Drupal site at any point in the last ten years, it’s very likely you came into contact with Drush (a portmanteau of “Drupal shell”), the command-line interface (CLI) used by countless developers to work with Drupal without touching the administrative interface. Drush has a long and storied trajectory in the Drupal community. Though many other Drupal-associated projects have since been forgotten and relegated to the annals of Drupal history, Drush remains well-loved and leveraged by thousands of Drupal professionals. In fact, the newest and most powerful version of Drush, Drush 10, is being released jointly with Drupal 8.8.0.

As part of our ongoing Tag1 Team Talks at Tag1 Consulting, a fortnightly webinar and podcast series, yours truly (Preston So, Editor in Chief at Tag1 and author of Decoupled Drupal in Practice) had the opportunity to sit down with Drush maintainer Moshe Weitzman (Senior Technical Architect at Tag1) as well as Tag1 Team Talks mainstays Fabian Franz (Senior Technical Architect and Performance Lead at Tag1) and Michael Meyers (Managing Director at Tag1) for a wide-ranging and insightful conversation about how far Drush has come and where it will go in the future. In this two-part blog post series, we delve into some of the highlights from that chat and discuss what you need to know and how best to prepare for the best version of Drush yet.

What is Drush?

The simplest way to describe Drush, beyond its technical definition as a command-line interface for Drupal, is as an accelerator for Drupal development. Drush speeds up many development functions that are required in order to take care of Drupal websites. For instance, with Drush, developers can enable and uninstall modules, install a Drupal website, block or delete a user, change passwords for existing users, and update Drupal’s site search index, among many others — all without having to enter Drupal’s administrative interface.
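
A few representative invocations give a flavor of this; the module and account names here are placeholders, shown with the modern command names.

```bash
drush pm:enable admin_toolbar          # Enable a module without the UI.
drush pm:uninstall statistics          # Uninstall a module.
drush user:password admin "n3w-p4ss"   # Change an existing user's password.
drush user:block spammer               # Block a user account.
drush cache:rebuild                    # Rebuild caches (alias: drush cr).
```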

Because Drush employs Drupal’s APIs to execute actions like creating new users or disabling themes, it performs these tasks far more quickly than working through Drupal’s administrative interface, because there is no need to traverse Drupal’s render pipeline and theme layer. In fact, Drush is also among the most compelling real-world examples of headless Drupal (a topic on which this author has written a book), because the purest definition of headless software is an application that lacks a graphical user interface (GUI). Drush fits that bill.

The origins and history of Drush

Though many of us in the Drupal community have long used Drush since our earliest days in the Drupal ecosystem and building Drupal sites, it’s likely that few of us intimately know the history of Drush and how it came to be in the first place. For a piece of our development workflows that many of us can’t imagine living without, it is remarkable how little many of us truly understand about Drush’s humble origins.

Drush has been part of the Drupal fabric now for eleven years, and during our most recent installment of Tag1 Team Talks, we asked Moshe for a Drush history lesson.

Drush’s origins and initial years

Though Moshe has maintained Drush for over a decade to “scratch his own itch,” Drush was created by Arto Bendiken, a Drupal contributor from early versions of the CMS, and had its tenth anniversary roughly a year ago. Originally, Drush was a module available on Drupal.org, just like all of the modules we install and uninstall on a regular basis. Users of the inaugural version of Drush would install the module on their site to use Drush’s features at the time.

The Drupal community at the time responded with a hugely favorable reception and granted Drush the popularity that it still sees today. Nonetheless, as Drush expanded its user base, its maintainers began to realize that they were unable to fully realize the long list of additional actions that Drupal users might want, including starting a web server to quickstart a Drupal site and one of the most notable features of Drush today: installing a Drupal site on the command line. Because Drush was architected as a Drupal module, this remained an elusive objective.

Drush 2: Interacting with a remote Drupal site

Drush 2 was the first version of Drush to realize the idea of interacting with a remote Drupal website, thanks to the contributions of Adrian Rousseau, another early developer working on Drush. Today, one of the most visible features of Drush is the ability to define site aliases to target different Drupal sites as well as different environments.

Rousseau also implemented back-end functionality that allowed users to rsync the /files directory or sql-sync the database on one Drupal installation to another. With Drush 2, users could also run the drush uli command to log in as the root user (user 1 in Drupal) on a remote Drupal site. These new features engendered a significant boost in available functionality in Drush, with a substantial back-end API that was robust and worked gracefully over SSH. It wasn’t until Drush 9 that much of this code was rewritten.
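
Site aliases have since evolved into YAML files in modern Drush releases; a minimal sketch, with invented hostnames and paths, looks like this:

```yaml
# drush/sites/example.site.yml
dev:
  host: dev.example.com
  user: deploy
  root: /var/www/example/web
  uri: https://dev.example.com
prod:
  host: www.example.com
  user: deploy
  root: /var/www/example/web
  uri: https://www.example.com
```

```bash
# Pull the production database into dev, then obtain a one-time login link.
drush sql:sync @example.prod @example.dev
drush @example.dev user:login
```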

Drush 3: From module to separate project

During the development of Drush 3, Drush’s maintainers made the decision to switch from Drush’s status as a module to a project external to Drupal in order to enable use cases where no Drupal site would be available. It was a fundamental shift in how Drush interacted with the Drupal ecosystem from there onwards, and key maintainers such as Greg Anderson, who still maintains Drush today seven versions later, were instrumental in implementing the new approach. By moving off of Drupal.org, Drush was able to offer site installation through the command line as well as a Drupal quickstart and a slew of other useful commands.

Drush 5: Output formatters

Another significant step in the history of Drush came with Drush 5, in which maintainer Greg Anderson implemented output formatters, which allow users to rewrite certain responses from Drush into other formats. For instance, the drush pm-list command returns a list of installed modules on a Drupal site, including the category in which they fit, formatted as a human-readable table.

Thanks to output formatters, however, the same command could be extended to generate the same table in JSON or YAML formats, which for the first time opened the door to executable scripts using Drush. During the DevOps revolution that overturned developer workflows soon afterwards, output formatters turned out to be a prescient decision, as they are particularly useful for continuous integration (CI) and wiring successive scripts together.
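
For instance, the same module listing can be rendered for humans or for machines simply by changing the format flag:

```bash
drush pm-list                   # Human-readable table.
drush pm-list --format=json     # Machine-readable output for scripting.
drush pm-list --format=yaml     # YAML, e.g., for CI tooling.
```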

Drush 8: Early configuration support

Drush 8, the version of Drush released in preparation for use with Drupal 8 sites, was also a distinctly future-ready release due to its strong command-line support for the new configuration subsystem in Drupal 8. When Drupal 8 was released, core maintainer Alex Pott contributed essential configuration commands such as config-export, config-import, config-get, and config-set (with Moshe’s config-pull coming later), all of which are central to interacting with Drupal’s configuration.

Due to Drush 8’s early support for configuration in Drupal 8, Drush has been invaluable in realizing the potential of the configuration subsystem and is commonly utilized by innumerable developers to ensure shared configuration across Drupal environments. If you have pushed a Drupal 8 site from a development environment to a production environment, it is quite likely that there are Drush commands in the mix handling configuration synchronicity.
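
A typical deployment sketch using those commands, shown here with the Drush 8-era names (later releases also accept config:export and config:import):

```bash
# On the source environment: export active configuration to YAML files.
drush config-export -y

# After committing the YAML and deploying the code, on the target:
drush config-import -y
```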

Drush 9: A complete overhaul

About a year ago, Drush’s indefatigable maintainers opted to rewrite Drush from the ground up for the first time. Drush had not been substantially refactored since the early days in the Drush 3 era, when it was extracted out of the module ecosystem. In order to leverage the best of the Composer ecosystem, Drush’s maintainers rewrote it in a modular way with many Composer packages for users to leverage (under the consolidation organization on GitHub).

This also meant that Drush itself became smaller, because it modularized site-to-site communication in a tighter way. Declaring commands in Drush also underwent a significant simplification from the perspective of developer experience. Whereas Drush commands had previously been written as plain PHP functions, as was the case in Drush 8, developers could now write each command as a PHP method, with the Doxygen-style annotations above it housing the command’s name, parameters, and other details. The same release also introduced YAML as the default format for configuration and site aliases in Drush, along with Symfony Console as the runner of choice for commands.

Drush 9 introduced a diverse range of new commands, including config-split, which allows for different sets of modules to be installed and different sets of configuration to be in use on distinct Drupal environments (though as we will see shortly, it may no longer be necessary). Other conveniences that entered Drush included running commands from Drupal’s project root instead of the document root as well as the drush generate command, which allows developers to quickly scaffold plugins, services, modules, and other common directory structures required for modern Drupal sites. This latter scaffolding feature was borrowed from Drupal Console, which was the first to bring that feature to Drupal 8. Drush’s version leverages Drupal’s Code Generator to perform the scaffolding itself.
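
For example, scaffolding a module skeleton takes a single command; generator names vary somewhat between releases, so treat these invocations as illustrative.

```bash
drush generate          # List every available generator.
drush generate module   # Interactively scaffold a new module skeleton.
```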

Conclusion

As you can see, Drush has had an extensive and winding history that portends an incredible future for the once-humble command-line interface. From a pet project and a personal itch scratcher to one of the best-recognized and most commonly leveraged projects in the Drupal ecosystem, Drush has a unique place in the pantheon of Drupal history. In this blog post, we covered Drush’s formative years and its origins, a story seldom told among open-source projects.

In the second part of this two-part blog post series, we’ll dive straight into Drush 10, inspecting what all the excitement is about when it comes to the most powerful and feature-rich version of Drush yet. In the process, we’ll identify some of the key differences between Drush and Drupal Console, the future of Drush and its roadmap, and whether Drush has a future in Drupal core (spoiler: maybe!). In the meantime, don’t forget to check out our Tag1 Team Talk on Drush 10 and the story behind Drupal’s very own CLI.

Special thanks to Fabian Franz, Michael Meyers, and Moshe Weitzman for their feedback during the writing process.

Photo by Jukan Tateisi on Unsplash

Feb 05 2020
Feb 05

What happens when you have a connection that isn't working, but you have a mission-critical document that you need to collaborate on with others around the world? The problem of peer-to-peer collaboration in an offline environment is becoming an increasingly pressing issue for editorial organizations and enterprises. As we continue to work on documents together on flights, trains, and buses, offline-first shared editing is now a base-level requirement rather than a pipe dream.

Yjs, an open-source framework for real-time collaboration, integrates gracefully with IndexedDB, the local offline-first database available in browsers, to help developers easily implement offline shared editing for their organization's needs. Paired in turn with other technologies like WebRTC, a peer-to-peer communication protocol, and Yjs connectors, a graceful architecture is possible that not only enables offline shared editing for a variety of use cases beyond textual content but also makes the developer experience as straightforward as possible.

In this technical and topical deep dive into how Yjs and IndexedDB make offline shared editing possible, join Kevin Jahns (creator of Yjs and Real-Time Collaboration Systems Lead at Tag1), Fabian Franz (Senior Technical Architect and Performance Lead at Tag1), Michael Meyers (Managing Director at Tag1), and your host Preston So (Editor in Chief at Tag1 and author of Decoupled Drupal in Practice) for a Tag1 Team Talks episode you don't want to miss about how to enable offline shared editing for web applications and even CMSs like Drupal.

[embedded content]

Jan 29 2020
Jan 29

Preston is a product strategist, developer advocate, speaker, and author of Decoupled Drupal in Practice (Apress, 2018).

A globally recognized voice on decoupled Drupal and subject matter expert in the decentralized web and conversational design, Preston is Editor in Chief at Tag1 Consulting and Principal Product Manager at Gatsby, where he works on improving the Gatsby developer experience and driving product development.

Having spoken at over 50 conferences, Preston is a sought-after presenter with keynotes on five continents and in three languages.

Jan 28 2020
Jan 28

Testing has become an essential practice and toolkit for developers and development teams who seek to architect and implement successful, performant websites. Thanks to the unprecedented growth in automated testing tools and continuous integration (CI) solutions for all manner of web projects, testing is now table stakes for any implementation. That said, many developers find automated testing to be an altogether intimidating area of exploration. Fortunately, when paired with a development culture that values quality assurance (QA), testing lets you focus on adding business value instead of fixing issues day in and day out.

Three years ago, Yuriy Gerasimov (Senior Back-End Engineer at Tag1 Consulting) gave a talk at DrupalCon New Orleans about some of the key ideas that Drupal developers need to understand in order to implement robust testing infrastructures and to foster a testing-oriented development culture that yields unforeseen business dividends across a range of projects. In this four-part blog series, we summarize some of the most important conclusions from Yuriy’s talk. And in this third installment, we’ll take a closer look at two of the most essential parts of any testing toolkit: unit testing and functional testing.

Unit testing

Unit testing is a particularly fascinating topic, not only because many developers already know what unit testing entails, but also because it enjoys a range of readily available technologies that development teams can easily leverage. After all, unit testing is a commonly taught concept in universities. In its most reduced form, unit testing refers to verifying a feature’s robustness by feeding it a variety of arguments, all of which test the limits of what is possible within the feature. Drupal developers, for instance, have access to a variety of unit tests for both core and contributed modules. And the best part of unit testing is that you can test functions and classes in isolation rather than evaluating large parts of the code.

The best candidates for unit tests are functions responsible for calculating a result. For instance, if you have a function that receives a set of arguments and performs a series of calculations based on them, you can feed unit tests arguments that evaluate whether the function works for a variety of possible inputs.
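
As a minimal sketch of this style of test, consider the following PHPUnit example; the DiscountCalculator class and its expected values are invented for illustration, not taken from any real project.

```php
<?php

use PHPUnit\Framework\TestCase;

/**
 * A hypothetical pure function worth unit testing.
 */
class DiscountCalculator {

  public static function apply(float $price, int $percent): float {
    if ($percent < 0 || $percent > 100) {
      throw new \InvalidArgumentException('Percent must be between 0 and 100.');
    }
    return $price * (100 - $percent) / 100;
  }

}

class DiscountCalculatorTest extends TestCase {

  /**
   * Feeds the function a range of inputs, including the boundaries.
   *
   * @dataProvider discountProvider
   */
  public function testApply(float $price, int $percent, float $expected): void {
    $this->assertSame($expected, DiscountCalculator::apply($price, $percent));
  }

  public function discountProvider(): array {
    return [
      'no discount' => [100.0, 0, 100.0],
      'half off' => [100.0, 50, 50.0],
      'free' => [100.0, 100, 0.0],
    ];
  }

  public function testRejectsImpossibleDiscount(): void {
    $this->expectException(\InvalidArgumentException::class);
    DiscountCalculator::apply(100.0, 150);
  }

}
```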

Unit tests as documentation

This also reveals one of the most interesting characteristics of unit tests. Because unit tests are focused on stretching the outer limits of what functions are capable of, they are also a convenient source of robust documentation. Yuriy admits that when he downloads external libraries for the first time in order to integrate them with his existing code, he often scrutinizes the unit tests, because they indicate what the code expects as input. When documentation is challenging to read or nonexistent, unit tests can be an ideal replacement for developers seeking to comprehend the purpose of certain code, because the unit tests are where developers ensure that logic operates correctly.

As an illustration of how unit tests can yield outsized benefits when it comes to well-documented code, consider the case of Drupal’s regular expressions. In Drupal 7, the regular expressions responsible for parsing .info files, which are used across both modules and themes, are extremely lengthy owing to the myriad demands on the files. Regular expressions, after all, are easy to write but difficult to understand. Though we are privileged as Drupal developers in that Drupal breaks regular expressions into separate, assiduously commented lines, many developers in the contributed ecosystem will avoid taking this additional step. For this reason, Yuriy recommends that all developers write unit tests for regular expressions that parse critical input.
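
In that spirit, here is a compact, hypothetical PHPUnit test for a deliberately simplified "key = value" pattern reminiscent of Drupal 7 .info files; the real core regex is considerably more involved.

```php
<?php

use PHPUnit\Framework\TestCase;

class InfoLineParserTest extends TestCase {

  // A deliberately simplified stand-in for Drupal's real .info parsing regex.
  private const PATTERN = '/^\s*([a-z0-9_\[\]]+)\s*=\s*"?([^"\r\n]+)"?\s*$/i';

  public function testParsesSimpleAssignment(): void {
    $this->assertSame(1, preg_match(self::PATTERN, 'name = Example Module', $m));
    $this->assertSame('name', $m[1]);
    $this->assertSame('Example Module', $m[2]);
  }

  public function testParsesArraySyntax(): void {
    $this->assertSame(1, preg_match(self::PATTERN, 'dependencies[] = node', $m));
    $this->assertSame('dependencies[]', $m[1]);
    $this->assertSame('node', $m[2]);
  }

  public function testRejectsLineWithoutAssignment(): void {
    $this->assertSame(0, preg_match(self::PATTERN, '; this is a comment'));
  }

}
```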

Writing testable code is difficult but useful

Unit tests help developers think in a very different way about how to write software in a legible and understandable way. In Drupal 8, for instance, the new dependency injection container is particularly well-suited for unit tests, because we can swap services in and out of it. If you work with a database as an external service, for example, you can easily mock objects and ensure that your services work for those mock objects successfully.

To illustrate this, Yuriy cites the real-world example of the Services module in Drupal 7, which originally consisted of classical Drupal code but was subsequently coupled with unit tests. By adding unit tests, Yuriy was able to parse different arguments entering certain Services functionality and inspect how routing functioned. With unit tests, ensuring the security of functions is much easier, because they help you determine all of the arguments the code needs, so that calls to globals or static variables are rendered unnecessary. Though unit testing requires considerable effort, Drupal 8 makes the process much easier for developers.

And with the introduction of PHPUnit into Drupal testing infrastructures, it is now even easier to test your code. The biggest addition is the capability to use mock objects, which are presently the industry standard for unit testing.
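
A brief sketch of that mocking capability follows, with a hypothetical storage interface standing in for a real Drupal service so the logic can be verified without a database.

```php
<?php

use PHPUnit\Framework\TestCase;

interface UserStorageInterface {
  public function loadName(int $uid): ?string;
}

/**
 * A small service that depends only on the injected interface.
 */
class Greeter {

  private UserStorageInterface $storage;

  public function __construct(UserStorageInterface $storage) {
    $this->storage = $storage;
  }

  public function greet(int $uid): string {
    $name = $this->storage->loadName($uid) ?? 'anonymous';
    return "Hello, $name!";
  }

}

class GreeterTest extends TestCase {

  public function testGreetUsesInjectedStorage(): void {
    // No database required: the mock stands in for the real service.
    $storage = $this->createMock(UserStorageInterface::class);
    $storage->method('loadName')->with(42)->willReturn('alice');

    $this->assertSame('Hello, alice!', (new Greeter($storage))->greet(42));
  }

}
```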

Functional testing

As the name suggests, functional testing analyzes how actual users will interact with your software from a concrete rather than abstract standpoint. If you had an unlimited budget for your project, you could test every last page on your website, but this is seldom, if ever, feasible from the perspective of project budgets. Whereas unit testing only requires perhaps ten to twenty percent of your development time, functional tests will require several days to write and noticeably more effort to implement.

Selling functional testing

In many of Yuriy’s previous projects, he was able to sell functional testing to customers by justifying the need to ensure that the software would be functional irrespective of the time of day or night. His teams had particular success selling functional testing in commerce projects, because stakeholders in commerce implementations are strongly invested in consumers successfully checking out at all times.

Often, the most challenging aspect of functional testing in commerce projects is the credit card details that customers must inevitably provide to the commerce platform. If you have a testing environment with all possible payment providers, however, customers no longer need to check to make sure that the checkout functions properly while they are at home or at the office. Your clients can simply click a single button in the continuous integration (CI) server and witness for themselves that the user flow is functional or configure notifications so that they are issued solely when that process breaks.

Maintenance costs of functional testing

Functional testing requires considerable maintenance, because it is primarily based on the document object model (DOM) of your website rather than abstract code. If you modify something on the website, you will need to revise your functional tests accordingly, including the selectors they target. Yuriy warns that many operate under the misconception that functional tests require a mere twelve hours of implementation, but due to the unique attributes of each project, they may take several days.

As such, automating the process of functional testing is of paramount importance. Yuriy suggests that running functional tests should be made easy for developers and especially for project managers who are concerned about project quality. And as with other types of tests, if functional tests are not run on a regular basis, they are of limited usefulness.

Tools for functional testing

Luckily, there are many tools available for functional testing, the most notable among them being Behat. The Behat ecosystem makes a variety of extensions available, but it is by no means the only functional testing solution in the landscape. Other options include Selenium, which records clicks and provides a more visual representation of functional tests; for this reason, Yuriy recommends Selenium for junior developers with less exposure to functional testing.
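
To give a flavor of what such a test reads like, here is a hypothetical Behat scenario built on the standard Mink step definitions; the paths, labels, and messages are invented.

```gherkin
Feature: Checkout
  In order to purchase products
  As a customer
  I need to be able to complete the checkout flow

  Scenario: Anonymous customer completes a purchase
    Given I am on "/product/1"
    When I press "Add to cart"
    And I go to "/checkout"
    And I fill in "Email" with "customer@example.com"
    And I press "Complete purchase"
    Then I should see "Thank you for your order"
```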

Nevertheless, functional testing can fall victim to the vagaries of project budgets, and prospective expense remains among the most important considerations for customers interested in functional testing. To account for this, it is essential to consider which user flows are the most critical to the success of your implementation and to provide the appropriate level of coverage for those components. It is also a good idea to discuss at length with your customer what outcomes they would like to see and to ensure that the user flows they consider most crucial enjoy adequate coverage.

Conclusion

Thanks to the dual innovations engendered by unit testing and functional testing, you can have both an abstract and concrete view into how your code is performing in terms of quality with little to no overhead. With unit testing, which ensures that each designated function works properly with a variety of inputs that stretch the limits of what it can do, you can protect yourself from potential security vulnerabilities such as a lack of input sanitization. Functional testing, meanwhile, allows you to perform automated trials of your implementation to guarantee that your users are traversing the experience you intend.

In this blog post, we explored the boundaries of what unit testing and functional testing can offer you when it comes to a modern testing infrastructure and test-oriented development culture. In the fourth and final installment of this four-part blog series, we turn our attention to two of the areas of testing experiencing some of the most innovation as of late: visual regression testing and performance testing. As demands on our implementations shift, it is increasingly more important that we understand not only when changes impact user experiences but also how they perform under realistic loads.

Special thanks to Yuriy Gerasimov and Michael Meyers for their feedback during the writing process.

Please check out Part 1 and Part 2!

Jan 23 2020
Jan 23

Over the course of Drupal’s lengthy history, one of the most common feature requests has been automatic updates. A common complaint of Drupal site administrators, especially those with smaller sites updated less often, is the frequently complex and drawn-out process required to update a Drupal site from one minor version to another. Updates can involve a difficult set of highly specific steps that challenge even the most patient among us. Indeed, many in the Drupal community simply choose to ignore the automatic e-mails generated by Drupal.org indicating the availability of a new version, and waiting can lead to compounding security vulnerabilities.

Fortunately, the era of frustration when it comes to automatic updates in Drupal is now over. As one of the roughly dozen Drupal Core Strategic Initiatives, Drupal automatic updates are a key feature that will offer Drupal users better peace of mind when minor releases occur. Over the last several years, Tag1 Consulting, well-known as leading performance and scalability experts in the Drupal community, has worked closely with the Drupal Association, MTech, and the Free and Open Source Software Auditing (FOSSA) program at the European Commission to make automatic updates in Drupal a reality.

Recently, I (Preston So, Editor in Chief at Tag1 and author of Decoupled Drupal in Practice) sat down with Lucas Hedding (Senior Architect and Data and Application Migration Expert at Tag1), Fabian Franz (Senior Technical Architect and Performance Lead at Tag1), Tim Lehnen (CTO at the Drupal Association), and Michael Meyers (Managing Director at Tag1) to host a Tag1 Team Talks episode about the story of Tag1’s involvement in the automatic updates strategic initiative. In this blog post, we dive into some of the fascinating background and compelling features in Drupal’s new automatic updates, as well as how this revolutionary feature will evolve in the future.

What are automatic updates in Drupal?

Listed as one of the Drupal Core Strategic Initiatives for Drupal 9, Drupal’s automatic updates are intended to resolve some of the most intractable usability issues in maintaining Drupal sites. Updating Drupal sites can be a challenging, tedious, and costly process. Building an automatic updater for Drupal is a similarly difficult problem, with a variety of potential security risks, but it’s a problem that other ecosystems have solved successfully. Following Dries’ announcement of automatic updates as a strategic priority, several early architectural discussions took place, especially at Midwest Drupal Summit 2018 in Ann Arbor.

Automatic updates in Drupal provide certain key benefits for users of all shapes and sizes who leverage Drupal today, including individual end users, small- to medium-sized agencies, and large enterprises. Advantages that apply to all users across the spectrum include a reduction in the total cost of ownership (TCO) for Drupal sites and especially a decrease in maintenance costs.

As for small- and medium-sized agencies and individual site owners, it can be difficult—and deeply disruptive and anxiety-inducing—to mobilize sufficient resources in a brisk timeframe to prepare for Drupal security releases that typically occur on Wednesdays. For many end users and small consultancies who lack experience with keeping their Drupal sites up to date, high-alert periods on Wednesday can be deeply stressful. And for enterprise users, how to incorporate security updates becomes a more complex discussion: Should we integrate manual updates into our security reviews or keep adhering to continuous integration and continuous deployment (CI/CD) processes already in place?

Where are Drupal’s automatic updates today?

The full roadmap for Drupal’s automatic updates is available on Drupal.org for anyone to weigh in on, but in this blog post we focus on its current state and long-term future. Automatic updates in Drupal include updates on production sites as well as on development and staging environments, although some integration with existing CI/CD processes may be required. In addition, automatic updates support both Drupal 7 and Drupal 8 sites.

Because of the ambitious nature of the automatic updates initiative, as well as the desire by the module’s maintainers to undertake a progressive approach from an initial module in the contributed ecosystem to a full experimental module in Drupal core, the development process has been phased from initial architecture to present. Currently, a stable release is available that includes features like public safety alerts and readiness checks.

As for other developments within the scope of available funding from the European Commission, in-place automatic updates have also arrived. If a critical security release is launched, and your site has the automatic updates module installed, you’ll receive an e-mail notification stating that an update is forthcoming in the next several days. Once the update is available, the module will then automatically execute the in-place automatic update if all readiness checks show as green on the Drupal user interface, meaning that no additional action is required on the user’s part.

Key features of Drupal automatic updates

Together with MTech, the Drupal Association, and the European Commission, Tag1 has been heavily involved in architecting the best and most graceful approach, particularly in such a way that it can be generalized for and leveraged by other open-source software projects in the PHP landscape. This includes approaches seen in other ecosystems such as readiness checking, file downloading, and signature verification that generates “quasi-patches” as well as inspiration from the WordPress community. One of the team’s major concerns in particular is ensuring the continuous integrity of update packages such that users can be confident that such packages are installed from a trusted source.

There are three key features available as part of automatic updates in Drupal that will be part of the initial release of the module, and we discuss each of these in turn here.

Public safety messaging

After the noted security vulnerability in 2014 commonly known as “Drupalgeddon,” a notice was posted indicating that a critical release was forthcoming. When it comes to automatic updates, a similar process occurs: several days before a critical security release for Drupal core or for a contributed project, a notice is posted and made available on every Drupal site.

This sort of public safety messaging allows for an additional communication mechanism before a key update so that site owners can ensure they are ready for an update to land. On Drupal sites, the feed of alerts originates from the same security advisories (SAs) that the Drupal Security Team and Drupal’s release managers issue.

Readiness or “preflight” checks

Every Drupal site with automatic updates installed will also have readiness checks, also known as “preflight” checks, which run every six hours through Drupal’s cron and inform site owners whether their site is prepared for an automatic update. Readiness checks are essential to ensure that sites are not functionally compromised after an automatic update.

For instance, if Drupal core has been hacked by a developer, if a site is running on a read-only filesystem, or if there are pending database updates that need to be run, readiness checks will indicate that these issues need to be resolved before a site can automatically update. There are eight or nine readiness checks available currently; some are simple warnings to aid the user (e.g. in situations where cron runs too infrequently to update the site automatically in a timely fashion), while others are errors (e.g. the filesystem is read-only and cannot be written to). Whereas warnings will not impede the commencement of an automatic update, errors will.

In-place updates

The final crucial component of automatic updates is in-place updates, the centerpiece of this new functionality. The in-place updates feature downloads a signed ZIP archive from Drupal.org and then, using the libsodium library, verifies the archive’s signature to confirm that it matches Drupal.org’s official release.
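
The module itself lives in Drupal’s PHP codebase, but the heart of this step is an ordinary detached-signature check. As a purely illustrative sketch using the JavaScript libsodium bindings (the file paths and key handling are hypothetical placeholders, not the module’s actual API), verification looks roughly like this:

```js
const fs = require('fs');
const sodium = require('libsodium-wrappers');

// Returns true only if the archive is byte-for-byte identical to what the
// holder of the signing key (e.g. Drupal.org's packaging pipeline) signed.
async function verifyArchive(archivePath, signaturePath, publicKeyHex) {
  await sodium.ready;
  const archive = fs.readFileSync(archivePath);
  const signature = fs.readFileSync(signaturePath);
  const publicKey = sodium.from_hex(publicKeyHex);
  return sodium.crypto_sign_verify_detached(signature, archive, publicKey);
}

// Hypothetical usage; the paths and the hex-encoded key are placeholders.
verifyArchive('drupal-8.8.1.zip', 'drupal-8.8.1.zip.sig', process.env.RELEASE_KEY_HEX)
  .then((ok) => {
    if (!ok) throw new Error('Signature mismatch: refusing to apply update.');
  });
```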

Thereafter, in-place updates will back up all files that are slated for update and then update the indicated files. If the process concludes successfully, the user is notified that the site has been upgraded. If something fails during the procedure, in-place updates will restore the backup.

Common questions about automatic updates

On the recent Tag1 Team Talks episode about automatic updates in Drupal, contributors from Tag1 and the European Commission answered some of the most important questions on every Drupal user’s mind as the initiative continues to roll out automatic updates features.

What about using Composer versus tarballs?

One of the key differences between Drupal implementations today is the use of the Composer command-line interface to manage Drupal’s installed modules in lieu of managing module installations through tarballs. Composer use is widening in the Drupal community, and any site that has updated to Drupal 8.8.0 or later is using Composer. As long as the two key Composer-related files in a Drupal codebase (namely composer.json and composer.lock) are not modified, automatic updates will continue to function properly. However, for sites leveraging Composer and subsequently modifying the /vendor directory, this question becomes more complicated.

At present, the automatic updates team will release early versions supporting all scenarios for Drupal sites, short of those sites that have modified composer.json and composer.lock directly. By observing users as they gradually adopt automatic updates, the team plans to learn much about how users juggle Drupal dependencies in order to release improved update methods that accommodate Composer much more gracefully.

Are automatic updates part of Drupal core?

As of now, automatic updates are not part of a vanilla Drupal installation, but all major components of the module will be incorporated into Drupal core in due course. The in-place updates feature presents the most overt difficulties.

Before in-place updates can land in core, the automatic updates team plans to implement an A/B front-end controller that is capable of swapping between two full codebases and toggling back to the backed-up, out-of-date codebase if the update exposes issues mid-flight.

What is the future of automatic updates?

While the European Commission has funded the first twelve months of work over the course of 2019, there is much more work to do. The initial European Commission funding accounts for the three aforementioned key features, namely readiness checking, the delivery of update “quasi-patches,” and a robust package signing system, all focused on security updates, which are the most pressing. However, the current year of development excludes better support for Composer and contributed projects.

The longer-term roadmap for automatic updates includes the A/B front-end controller mentioned in the previous section, more robust support for Composer-powered sites, and other types of updates. These include updates for contributed modules and themes as well as batched successive updates for sites that have fallen particularly far behind.

Conclusion

Automatic updates will reinvent how we maintain and upgrade Drupal sites, particularly in the realm of security. Because it allows novice and experienced Drupal users alike to save time without worrying about how they will implement updates, this strategic initiative lowers the total cost of ownership for Drupal users of all sizes and backgrounds.

No account of the extraordinary initiative that is Drupal’s automatic updates would be complete without appreciation for the sponsors of the developers involved, especially from the Drupal Association, MTech, Tag1 Consulting, and the European Commission’s FOSSA program. Organizations and individuals alike have sponsored automatic updates in Drupal to widen awareness of their brands, to showcase their skills as developers, and to attract other Drupal contributors and resource Drupal teams.

To support the continued success of Drupal’s automatic updates, please consider sponsoring development by contacting the Drupal Association. And for more insight into automatic updates directly from the module’s creators, check out our recent Tag1 Team Talks episode on the topic for information we were unable to fit into this blog post.

Special thanks to Fabian Franz, Lucas Hedding, Tim Lehnen, and Michael Meyers for their feedback during the writing process.

Click the following link for our Tag1 Team Talk on Drupal Automatic Updates!

Jan 22 2020
Jan 22

WebRTC, a protocol that facilitates peer-to-peer communication between two clients via the browser, is now supported by all modern browsers. Since its introduction it has mainly been used for web conferencing solutions, but WebRTC is ideal for a variety of other use cases as well. Because of its wide platform support, creating peer-to-peer applications for the web is now more straightforward than ever. But how do you manage many people working together at the same time on the same data? After all, conflict resolution for peer-to-peer applications remains a challenging problem. Fortunately, with Yjs, an open-source framework for real-time collaboration, developers can now combine WebRTC and Yjs to open the floodgates to a range of future-ready collaborative use cases.

Thanks to WebRTC and Yjs, anyone can build collaborative editing into their web application, and this includes more than just text: Yjs enables collaborative drawing, drafting, and other innovative use cases. The advantage of such a peer-to-peer model (in lieu of a client–server model) in the CMS world is that collaborative editing can be added to any editorial interface without significant overhead or a central server handling conflict resolution. By integrating with y-webrtc, the Yjs connector for WebRTC, CMS communities can easily implement collaborative editing and make it natively available to all users, whether on shared hosting or in the enterprise. The future of Drupal, WordPress, and other CMSs is collaborative, and, together, WebRTC and Yjs enable collaborative editing out of the box.
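
To make that concrete, here is a minimal sketch of wiring a shared Yjs document to the y-webrtc provider; the room and field names are arbitrary placeholders:

```js
import * as Y from 'yjs';
import { WebrtcProvider } from 'y-webrtc';

// Peers that join the same room name discover each other through a
// signaling server and then exchange document updates directly over WebRTC.
const ydoc = new Y.Doc();
const provider = new WebrtcProvider('my-article-room', ydoc);

// A shared text type: concurrent edits from every peer merge automatically.
const ytext = ydoc.getText('article-body');
ytext.observe(() => {
  console.log('Document is now:', ytext.toString());
});
ytext.insert(0, 'Hello from this peer!');
```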

In this deep dive into how Yjs enables peer-to-peer collaboration, join Kevin Jahns (Real-Time Collaboration Systems Lead at Tag1 and creator of Yjs), Fabian Franz (Senior Technical Architect and Performance Lead at Tag1), Michael Meyers (Managing Director at Tag1), and Preston So (Editor in Chief at Tag1 and author of Decoupled Drupal in Practice) for a closer look at how you too can build peer-to-peer collaboration into your decentralized application.

[embedded content]
Jan 22 2020
Jan 22

Automated tests are rapidly becoming a prerequisite for successful web projects, owing to the proliferation of automated testing tools and an explosion of continuous integration (CI) services that ensure the success of web implementations. Nonetheless, for many developers who are new to the space, automated testing can be an intimidating and altogether novel area that causes more than a few groans at weekly meetings. Luckily, with the right development culture and testing infrastructure in place, your team can focus on implementing new features rather than worrying about the quality of their code.

Yuriy Gerasimov (Senior Back-End Engineer at Tag1 Consulting) delivered a presentation at DrupalCon New Orleans about automated testing and its advantages for web projects of all shapes and sizes. In this four-part blog series, we explore some of the essentials that all developers should be aware of as they explore automated testing as well as the key fundamentals you need to know to incorporate automated testing into your daily workflows. In this second installment, we inspect how to implement a robust testing infrastructure and how to cultivate a development culture favorable to automated testing with the help of code checks.

Key elements of automated testing

Over the course of his career, Yuriy has interacted with a variety of consultancies to gather anecdotal evidence of how automated testing has worked across a range of scenarios. What he discovered is that most organizations tend to implement testing infrastructures and automated testing on a project-by-project basis rather than incorporating them as part of their regular process for every project. This betrays a fundamental disconnect between the value that automated testing can provide and the inevitable inefficiencies that arise when automated testing is added to individual projects on an ad-hoc basis.

Implementing robust testing infrastructures

One of the classic problems that arise from such a situation is the notion of development teams only implementing a continuous integration (CI) server when customers are able to provide for it in the project budget. In other words, either you can build a server that extends a development culture centered around automated testing, or you risk a scenario in which shared characteristics and shared testing components are absent across projects, requiring you to bootstrap a new infrastructure every time.

Even after a robust testing infrastructure improves the quality of your projects, code reviews remain essential: they elevate the interactions among your team’s developers to a higher caliber. Unfortunately, however, developers tend not to share substantial knowledge after they complete the features they are tasked with. If a developer sees a colleague not following best practices, code reviews can foster improvements in the code thanks to the knowledge that all parties gain in the process. Because of this, Yuriy suggests that development teams leverage a source control provider like GitHub, Bitbucket, or GitLab that incorporates built-in peer review functionality.

Fostering a development culture conducive to testing

Development culture is also essential to ensure the success of automated testing. This means that all developers should understand how the testing infrastructure functions in order to guard against regressions. When deployments are not tied to individual team members, for instance, this means that all members of the team understand how deployment occurs and are thus able to implement improvements themselves. For this reason, we discourage blocking deployments on a single function or individual contributor.

The optimal situation is one in which even a project manager who does not write code is capable of initializing deployments and kicking off a series of automated tests. When deployment is automated in this way to the point where even team members uninvolved in development can understand how quality is assessed across the project, this can level up the skill sets of the entire team.

For example, Yuriy recommends that every new developer on a team conduct a deployment themselves in isolation from the rest of the team. By doing so, the least experienced individual contributor may encounter inefficiencies theretofore unaccounted for by other team members and catalyze improvements in the quality of the code. When collaborators who are not yet on-boarded are able to foster advancement in the testing infrastructure across the team, the benefits can not only enrich the automated tests themselves but also cultivate a highly improved development culture across the board.

Considering maintenance costs

Nonetheless, maintenance costs are an important facet of automated testing to consider for any implementation, large or small, because they may be sufficiently exorbitant to encourage recalibration in the testing infrastructure. Some of the key questions to ask include: Do you have enough people to maintain the system from end to end? Do you have a dedicated systems administrator or DevOps specialist who can correct issues when discovered?

After all, testing infrastructures tend to be the components of projects that are scrutinized the least once they are complete—this is part of the blessing and curse of the notion of a “one and done” mindset. In the end, every project has different requirements, and other upcoming projects may demand different systems or other approaches to the same problem. When selecting automated testing systems, therefore, it is essential to consider their impact on the maintenance costs that your team will inevitably incur.

Code checks

Among the simplest kinds of automated testing to implement, code checks are static analyses that are not only educational about the code’s quality itself but also run very quickly, unless your codebase includes hundreds of megabytes of code; in that case, as Yuriy notes, you have other problems to solve first. For many Drupal development teams, code checks for adherence to Drupal coding standards are the first line of defense against potential regressions.

By the same token, security checks, which evaluate the risk of potential vulnerabilities, are also critical. Security checks are capable of verifying that certain best practices are followed when it comes to possible attack vectors, such as the use of globals or session variables in key functions or the allowance of input into deeper recesses of Drupal without proper sanitization. These checks are also convenient in that in many cases, less experienced developers can run security checks and understand the implications of the results without consulting a more senior developer. Along these same lines, linters, which check for syntax errors and the like, can be hugely beneficial for front-end developers.

Complexity metrics and copy-paste detection

Another fascinating selling point for code quality is complexity metrics, which assess how complex the code is. Among the most important of these is cyclomatic complexity. Consider a scenario in which you have written a function that contains a foreach loop with multiple control structures (many if-statements) nested within it. If your function has many levels of nesting, this presents problems not only for code readability and the likelihood of introducing bugs but also for maintenance. Code checks that analyze cyclomatic complexity can help you uncover situations in which others would have a horrible experience reading your code by limiting the number of levels that code can be nested (e.g. no more than five levels), as in the sketch below. Such complexity metrics will aid you in isolating certain logic into other functions or exiting early from your loops to help your code become more legible.
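
Here is a hypothetical before-and-after, written in JavaScript for brevity, of the kind of refactor a cyclomatic complexity check nudges you toward: guard clauses and early exits flatten the nesting without changing behavior.

```js
// Before: three levels of nesting inside the loop; complexity checks flag this.
function collectEmailsBefore(users) {
  const emails = [];
  for (const user of users) {
    if (user.active) {
      if (user.roles.includes('editor')) {
        if (!user.optedOut) {
          emails.push(user.email);
        }
      }
    }
  }
  return emails;
}

// After: early "continue" statements keep the happy path at a single level.
function collectEmails(users) {
  const emails = [];
  for (const user of users) {
    if (!user.active) continue;
    if (!user.roles.includes('editor')) continue;
    if (user.optedOut) continue;
    emails.push(user.email);
  }
  return emails;
}
```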

Finally, copy-paste detection is another hugely useful element of code checking that allows you to uncover inefficiencies in code. Some developers, for better or worse, copy and paste code from examples or Stack Overflow responses without necessarily considering how it can best be incorporated into the existing codebase. Copy-paste detection inspects the codebase for fragments pasted in multiple places; if you use the same piece of code in multiple locations, it may be best to abstract it into a shared function instead.

Conclusion

All told, code checks are often so fast that they take mere fractions of a second. For this reason, they are a ubiquitous element of automated testing and can allow developers to become more productive in a short period of time. In this way, your team can not only create a robust underlying testing infrastructure well-suited to repeatability but also ensure the longevity of a testing-oriented development culture that values consistent code quality.

In this blog post, we covered some of the most crucial elements of any automated testing approach for projects both large and small, namely a robust testing infrastructure and a focused development culture for automated testing. In turn, we looked at the outsized benefits that code checks and security checks can have on your codebase. In the next installment of this four-part blog series, we will devote our attention to some of the other essentials in the testing toolbox: unit testing and functional testing.

Special thanks to Yuriy Gerasimov, Jeremy Andrews, and Michael Meyers for their feedback during the writing process.


Jan 17 2020
Jan 17

With the release of Drupal 8.8, Drush is also due for an upgrade — to Drush 10. For this venerable command-line interface that many Drupal developers know intimately well, what does the present and future look like? What considerations should we keep in mind when selecting Drupal Console or Drush? What new features are available in Drush 10 that characterize the new CI/CD approaches we see expanding in the Drupal community?

In this Tag1 Team Talk, join Moshe Weitzman (creator and maintainer of Drush and Senior Technical Architect at Tag1), Fabian Franz (Senior Technical Architect and Performance Lead at Tag1), Preston So (Editor in Chief at Tag1), and Michael Meyers (Managing Director at Tag1) for a journey through Drush’s history and promising future. We take a deep look at what made Drush what it is today, the most compelling features in Drush 10, and how a hypothetical Drush in core could look.

[embedded content]
Jan 15 2020
Jan 15

Testing has become an important topic in recent years thanks to the explosion of testing technologies and continuous integration (CI) approaches but also due to the need for an ever-widening range of tests for a variety of use cases. For many developers, understanding how to incorporate testing into their development workflows can be daunting due to the many terms involved and, worse yet, the many tools available both in software-as-a-service (SaaS) companies and in open-source ecosystems like Drupal.

Yuriy Gerasimov (Senior Back-End Engineer at Tag1 Consulting) presented a session at DrupalCon New Orleans about modern testing approaches and how to decide on the correct suite of tests for your software development workflows. In this four-part blog series, we analyze the concepts in contemporary testing approaches that you need to know in your day-to-day and why they can not only protect but also accelerate your project progress. In this first installment, we take a look at how to sell testing as an important component of client (your stakeholders) projects, as well as why automated testing is an essential component of any web implementation.

Why testing?

Many people around the web development landscape have heard of testing, but when you ask about real-world examples on real-life projects, the same developers admit that their testing infrastructures are sorely lacking. The most important reason for this is that while testing is a compelling value-add for developers, it can be difficult to incorporate testing as a required line item in projects, especially when business stakeholders are looking to save as much money as possible. How to sell testing to clients continues to be a challenging problem, especially because requesting that a client pay for an additional 100 hours on top of the 500 already incurred can be anathema to their sense of frugality.

After all, many customers will respond by arguing that by choosing you as an agency or developer, they already trust you to foster a high-quality outcome. As such, many clients will ask, “Why do we need to spend extra money on testing?” Without the overt benefit that project components like architectural workshops and actual implementation provide, testing is often the forgotten and most easily abandoned stage of a build.

A real-world example of the need for testing

In his talk at DrupalCon New Orleans, Yuriy describes a large project (prior to his time at Tag1) on which he collaborated with many other people on a development team tasked with finishing the implementation in six months. The project was for a local municipality, with many integration points and features. Every feature needed to work perfectly, including critical features for civic life such as applying for permits, and tolerance for dysfunction was low.

By the end of the project, originally slated for six months, Yuriy’s development team ultimately spent six months developing and an additional six months fixing issues and testing functionality. Fortunately for his team, the municipality had already been through a project whose timeline ballooned out of control, and the team was able to deliver the project within a year as opposed to the previous partner, who spent two years on the same project.

One of the most alarming aspects of the project at the time was that all of the testing the team had done until that moment consisted of manual testing sessions. A meeting was convened, and every developer stood up to explain the rationale for each feature they had built and to demonstrate it. Every team member would then test each constituent feature and fix issues on the spot.

Learning from past mistakes

As one can imagine, this manual testing approach is highly unsustainable for projects that require tight timelines with a high degree of confidence. Yuriy learned many lessons from the project, and in a subsequent implementation six months later, in which he and his collaborators built an application for people with hearing and speech difficulties, he made considerable changes. The project was complex, with several servers communicating with the application through REST APIs and a high expectation for a user experience that would allow users to click icons representing phrases to be spoken aloud on their behalf.

From the beginning, Yuriy and his team baked in automated testing up front to test communication with the REST APIs and ensure all requests were functioning properly. They built the project to be scalable because they knew that many users would be using the application simultaneously. In the end, the quality assurance (QA) overhead was minimal, because developers on the team could simply run automated tests and show the result to the client. Even though the size of the project was roughly the same, having built-in automated testing with acceptance criteria was a benefit difficult to overstate.

Defending quality: Selling testing to customers

When testing aficionados attempt to sell testing to customers, they must frame the investment in terms of quality and long-term versus short-term costs: failing to deal with quality in the short term will actually cost more in the long term. However, it is admittedly difficult to sell something whose success cannot be measured. After all, from the client perspective, a buyer selects a vendor based on the quality with which they implement projects, yet there are only anecdotal metrics indicating whether one organization delivers higher-quality projects than another. For this reason, it is essential that developers interested in selling testing as part of their contracts offer metrics that are comprehensible to the customer.

In the end, the sole concern of customers is that software is delivered without bugs. While ease of maintenance is also important, this is generally considered table-stakes among stakeholders (or a problem for the future). In order to provide a high degree of confidence for issue-free builds, we need metrics for traits like performance and code quality (like adherence to Drupal’s coding standards). Thus, when a customer asks about the justification of a metric such as code quality, we can show the results of tools like code audits, which in Drupal consist of a single Drush command that generates a full report. By performing a code audit on a codebase written by a less experienced team, for example, clients can be sold immediately on the value of your team by seeing the results of a highly critical code audit—and will seldom be opposed to your team winning the contract.

Automated testing

For many developers who are new to the concept of automated testing, the term can unleash a torrent of anxiety and concern about the massive amount of work required. This is why Yuriy recommends, first and foremost, building a testing infrastructure and workflow that demands minimum effort while yielding maximum dividends. Nonetheless, successful automated testing requires a robust testing infrastructure and a supportive development culture. Without these elements, success is seldom guaranteed.

Fortunately, the up-front cost of automated testing is low owing to the “one and done” nature of automated testing. Though it’s likely you’ll spend a few weeks building out the infrastructure, there is no need to repeat the same process over and over again. Nevertheless, Yuriy recommends exploring the approaches that other software companies and industries undertake to understand how they tackle similar requirements. For example, automated testing for the C language has been around for many years. Moreover, there is no need to write our own continuous integration (CI) server, thanks to the wide variety of services available on the market, including software-as-a-service (SaaS) solutions that charge as little as $50 per month.

Even if you have written a large number of tests for your project, one of the most important aspects of automated testing may seem particularly obvious. If you don’t run automated tests regularly, you won’t receive any benefits. For instance, it is certainly not adequate to inform your customer that you have implemented automated testing unless you are running said tests weekly or monthly based on the project requirements. Otherwise, the value of the time you have spent implementing automated tests becomes questionable.

Conclusion

As you can see, testing is frequently the most easily forgotten component of web projects due to the extent to which clients question its value. However, armed with the right approaches to selling tests, you too can cultivate a culture of quality assurance (QA) not only within your development team but also for your business’s customers. With the help of automated testing, you can reduce the headaches for your team down the road and justify additional time that means extra money in your pocket.

In this blog post, we covered some of the important aspects of modern testing approaches and why customers are beginning to take a second look at the importance of quality in their projects. In the second installment of this four-part blog series, we’ll turn our attention to implementing testing infrastructures and fostering a development culture favorable to testing within your existing projects. We’ll discuss some of the maintenance costs associated with implementing automated testing and begin to look at two of the most prominent areas of testing: code checks and unit testing.

Special thanks to Yuriy Gerasimov, Jeremy Andrews, and Michael Meyers for their feedback during the writing process.

Nov 11 2019
Nov 11

Table of Contents

What makes a collaborative editing solution robust?
Decentralized vs. Centralized Architectures in Collaborative Editing
Operational Transformation and Commutative Replicated Data Types (CRDT)
Why Tag1 Selected Yjs
Conclusion

In today’s editorial landscape, content creators can expect not only to touch a document countless times to revise and update content, but also to work with other writers from around the world, often on distributed teams, to finalize a document collaboratively and in real time. For this reason, collaborative editing, or shared editing, has become among the most essential and commonly requested features for any content management solution straddling a large organization.

Collaborative editing has long existed as a concept outside the content management system (CMS). Consider, for example, Google Docs, a service that many content creators use to write content together before copy-and-pasting the text into form fields in a waiting CMS. But in today’s highly demanding CMS landscape, shouldn’t collaborative editing be a core feature of all CMSs out of the box? Tag1 Consulting agreed, and the team decided to continue its rich legacy in CMS innovation by making collaborative editing a reality.

Recently, the team at Tag1 Consulting worked with the technical leadership at a top Fortune 50 company to evaluate solutions and ultimately implement Yjs as the collaborative editing solution that would successfully govern content updates across not only tens of thousands of concurrent users but also countless modifications that need to be managed and merged so that content remains up to date in the content management system (CMS). This process was the subject of our inaugural Tag1 Team Talk, and in this blog post, we’ll dive into some of the common and unexpected requirements of collaborative editing solutions, especially for an organization operating at a large scale with equally large editorial teams with diverse needs.

Collaborative editing, simply put, is the ability for multiple users to edit a single document simultaneously without the possibility of conflicts arising due to concurrent actions—multiple people writing and editing at the same time can’t lead to a jumbled mess. At minimum, all robust collaborative editing solutions need to be able to merge actions together such that every user ends up with the same version of the document, with all changes merged appropriately.

Collaborative editing requires a balancing act between clients (content editors), communication (whether between client and server or peer-to-peer), and concurrency (resolving multiple people’s simultaneous actions). But there are other obstacles that have only emerged with the hyperconnectivity of today’s global economy: The ability to edit content offline or on slow connections, for instance, as well as the ability to resynchronize said content, is a baseline requirement for many distributed teams.

The provision of a robust edit history is also uniquely difficult in collaborative editing. Understanding what occurs when an “Undo” or “Redo” button is clicked in a single-user editor without real-time collaboration is a relatively trivial question. However, in collaborative editors, where synchronization across multiple users’ changes and batch updates from offline editing sessions need to be reflected in all users’ content, the definition of undo and redo actions becomes all the more challenging.

Moreover, real-time collaborative editing solutions also need to emphasize the collaboration element itself and afford users the ability to understand where other users’ cursors are located in documents. Two of the most fundamental features of any collaborative editing solution in today’s landscape are indications of presence and remote cursors, both characteristics of free-to-use collaborative editing solutions such as Google Docs.

Presence indications allow for users in documents to see who else is currently actively working on the document, similar to the user thumbnails in the upper-right corner of a typical Google Docs document. Remote cursors, meanwhile, indicate the content a user currently has selected or the cursor location at which they last viewed or edited text.

During Tag1’s evaluation of the collaborative editing landscape, the team narrowed the field of potential solutions down to these four: Yjs, ShareDB, CKEditor, and Collab. See below for a comparison matrix of where these real-time collaborative editing solutions stand, with further explanation later in the post.

|  | Yjs | ShareDB | CKEditor | Collab |
| --- | --- | --- | --- | --- |
| License | MIT | MIT | Proprietary (On-Prem Hosted) | MIT |
| Offline editing | ✔ |  |  |  |
| Decentralized | ✔ |  |  |  |
| Network-agnostic | ✔ |  |  |  |
| Shared cursors | ✔ |  | ✔ |  |
| Presence (list of active users) | ✔ |  | ✔ |  |
| Commenting | ✔ |  | ✔ |  |
| Sync after server data loss | ✔ | ✖ (sync error) |  | ✖ (Unsaved changes are lost) |
| Can implement other collaborative elements (e.g., drawing) | ✔ |  |  |  |
| Scalable | ✔ (Many servers can handle the same document) | ✔ (Locking via underlying DB) | ✔ (Hosted) | (Needs central source of truth: a single host for each doc, which puts additional constraints on how doc updates are propagated to “the right server”) |
| Currently supported editors | ProseMirror, Quill, Monaco, CodeMirror, Ace | Quill, CodeMirror, Ace | CKEditor | ProseMirror |
| Implementation | CRDT | OT | OT | Reconciliation |
| Demos | Editing, Drawing, 3D model shared state | Sync | Editing | Editing, Editing in Tip Tap |

While the features within a collaborative editor are of paramount importance to its users, the underlying architecture also plays a key role in determining a solution’s robustness. For instance, many long-standing solutions require that all document operations ultimately occur on a central server instance, particularly in the case of ShareDB and Collab.

While a centralized server does confer substantial advantages as a single source of truth for content state, it is also a central source of failure. If the server fails, the most up-to-date state of the content is no longer accessible, and all versions of the content will become stale. For mission-critical content needs where staleness is unacceptable, centralized servers are recipes for potential disaster.

Furthermore, centralized systems are generally much more difficult to scale, which is an understandably critical requirement for a large organization operating at considerable scale. Google Docs, for example, has an upper limit on users who can actively collaborate. With an increasing number of users, the centralized system will start to break down, and this can only be solved with progressively more complex resource allocation techniques.

For these reasons, Tag1 further narrowed the focus to decentralized approaches that allow for peer-to-peer interactions, namely Yjs, which ensures that documents always remain in sync irrespective of server availability, as document copies live on each user’s own instance rather than on a failure-prone central server. This means users can always refer to someone else’s instance in lieu of a single authoritative source that may not be available. Resource allocation is also much easier with Yjs because many servers can store and update the same document. It is significantly easier to scale insofar as there is essentially no limit on the number of users who can work together.

The majority of real-time collaborative editors, such as Google Docs, EtherPad, and CKEditor, use a strategy known as operational transformation (OT) to realize concurrent editing and real-time collaboration. In short, OT facilitates consistency maintenance and concurrency control for plain text documents, including features such as undo/redo, conflict resolution, and tree-structured document editing. Today, it is used to power collaboration features in Google Docs and Apache Wave.

Nonetheless, OT comes with certain disadvantages, namely the fact that existing OT frameworks are very tailored to the specific requirements of a certain application (e.g. rich text editing) whereas Yjs does not assume anything about the communication protocol on which it is implemented and works with a diverse array of applications. Yjs leverages commutative replicated data types (CRDT), used by popular tools like Apple Notes, Facebook’s Apollo, and Redis, among others. As Joseph Gentle, a former engineer on the Google Wave product and creator of ShareDB, once wrote:

“Unfortunately, implementing OT sucks. There’s a million algorithms with different tradeoffs, mostly trapped in academic papers. The algorithms are really hard and time consuming to implement correctly. […] Wave took 2 years to write and if we rewrote it today, it would take almost as long to write a second time.”

The key distinction between OT and CRDT is as follows: Consider an edit operation in which a user inserts a word at character position 5 in the document. In operational transformation, if another user adds 5 characters to the start of the document, the insertion is moved to position 10. While this is highly effective for simple plain text documents, complex hierarchical trees such as the document object model (DOM) present significant challenges. CRDT, meanwhile, assigns a unique identifier to every character, and all state transformations are applied relative to objects in the distributed system. Rather than identifying the place of insertion based on character count, the character at that place of insertion retains the same identifier regardless of where it is relocated to within the document. As one benefit, this process simplifies resynchronization after offline editing.
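
The practical payoff of the CRDT approach is convergence: peers can apply one another’s updates in any order and still end up with identical documents. A minimal sketch using Yjs’s update API (document and field names are placeholders):

```js
import * as Y from 'yjs';

const alice = new Y.Doc();
const bob = new Y.Doc();

// Establish a shared starting state.
alice.getText('body').insert(0, 'hello');
Y.applyUpdate(bob, Y.encodeStateAsUpdate(alice));

// Concurrent edits made while the peers are disconnected.
alice.getText('body').insert(0, 'Alice: ');
bob.getText('body').insert(5, ', world');

// Each peer sends the other only the updates it is missing; order is irrelevant.
Y.applyUpdate(bob, Y.encodeStateAsUpdate(alice, Y.encodeStateVector(bob)));
Y.applyUpdate(alice, Y.encodeStateAsUpdate(bob, Y.encodeStateVector(alice)));

// Both replicas converge to the same text.
console.log(alice.getText('body').toString() === bob.getText('body').toString()); // true
```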

If you want to dive deeper, the Conclave real-time editor (which is no longer maintained and therefore was not considered in our analysis) has another great high-level writeup explaining OT and CRDT. Additionally, you can watch or listen to our deep dive on OT vs. CRDT as part of our recent Tag1 Team Talk, “A Deep Dive into Yjs - Part 1”.

While other solutions such as ShareDB, CKEditor, and ProseMirror Collab are well-supported and very capable solutions in their own right, these technologies didn’t satisfy the specific requirements of our client’s project. For instance, ShareDB relies on the same approach as Google Docs, operational transformation (OT), rather than relying on the comparatively more robust CRDT (at least for our requirements). CKEditor, one of the most capable solutions available today, relies on closed-source and proprietary dependencies. Leveraging an open-source solution was strongly preferred by our client for many reasons, foremost among them to meet any potential need by enhancing the software themselves, and they didn’t want to be tied to a single vendor for what they saw as a core technology to their application. Finally, ProseMirror’s Collab module does not guarantee conflict resolution, which can lead to merge conflicts in documents.

Ultimately, the Tag1 team opted to select Yjs, an implementation of commutative replicated data types (CRDT), due to its network agnosticism and conflict resolution guarantees. Not only can Yjs support offline and low-connectivity editing, it can also store documents in local databases on user devices (such as through IndexedDB) to ensure full availability without a stable internet connection. Because Yjs facilitates concurrent editing on tree structures, not just text, it integrates well with view libraries such as React. Also compelling is its support for use cases beyond simple text editing, including collaborative drawing and state-sharing for 3D models. Going beyond text editing to implement other collaborative features is an important future goal for the project.
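
The offline capability mentioned above is handled by Yjs’s IndexedDB adapter, which mirrors every update into the browser’s local database. A minimal sketch, where the document name is a placeholder:

```js
import * as Y from 'yjs';
import { IndexeddbPersistence } from 'y-indexeddb';

const ydoc = new Y.Doc();
// Updates are persisted locally under this name, so the document loads
// instantly on revisit and edits made offline survive until the next sync.
const persistence = new IndexeddbPersistence('article-draft', ydoc);
persistence.whenSynced.then(() => {
  console.log('Loaded local copy from IndexedDB');
});
```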

Furthermore, because Yjs performs transactions on objects across a distributed system rather than on a centralized server, the problem of a single point of failure is avoided, and it is extremely scalable, with no limitations on the number of concurrent collaborators. Moreover, Yjs is one of the only stable and fully tested implementations of CRDT available, while many of its counterparts leverage OT instead.

Finally, because Yjs focuses on providing decentralized servers and connector technology rather than prescribing the front-end editor, there is no dependency on a particular rich text editor, and organizations can opt to swap out the editor in the future with minimal impact on other components in the architecture. It also makes it easy to use multiple editors. For instance, our project uses ProseMirror for collaborative rich text editing and CodeMirror for collaborative Markdown editing (and other text formats can be added easily).
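
As a sketch of how little glue this editor-agnostic design requires, here is the shape of a ProseMirror binding using the y-prosemirror plugins; the room name and mount point are placeholders:

```js
import * as Y from 'yjs';
import { WebrtcProvider } from 'y-webrtc';
import { ySyncPlugin, yCursorPlugin, yUndoPlugin } from 'y-prosemirror';
import { EditorState } from 'prosemirror-state';
import { EditorView } from 'prosemirror-view';
import { schema } from 'prosemirror-schema-basic';

const ydoc = new Y.Doc();
const provider = new WebrtcProvider('prosemirror-demo-room', ydoc);

// The shared Yjs fragment is the source of truth; ProseMirror is bound to
// it as a view, which is what keeps the editor itself swappable.
const yXmlFragment = ydoc.getXmlFragment('prosemirror');

const view = new EditorView(document.querySelector('#editor'), {
  state: EditorState.create({
    schema,
    plugins: [
      ySyncPlugin(yXmlFragment),         // two-way binding to the Yjs document
      yCursorPlugin(provider.awareness), // remote cursors and presence
      yUndoPlugin(),                     // collaboration-aware undo/redo
    ],
  }),
});
```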

Real-time collaborative editing surfaces unique difficulties for any organization seeking to implement content workflows at a large scale. Over the course of the past decade, many new solutions have emerged to challenge the prevailing approaches dependent on operational transformation. Today, for instance, offline editing and effective conflict resolution on slow connections are of paramount importance to content editors and stakeholders alike. These key requirements have led to an embrace of decentralized, peer-to-peer approaches to collaborative editing rather than a failure-prone central server.

Tag1 undertook a wide-ranging evaluation of available solutions for collaborative editing, including Yjs, ProseMirror’s Collab module, ShareDB, and CKEditor. In the end, Yjs emerged as the winner due to its implementation of CRDT, as well as its scalability and emphasis on network agnosticism and conflict resolution, both areas where the other solutions sometimes fell short. While any robust evaluation of these solutions takes ample time, it’s our hope at Tag1 that our own assessment guides your own thinking as you delve into real-time collaborative editing for your own organization.

Special thanks to Fabian Franz, Kevin Jahns, Michael Meyers, and Jeffrey Gilbert for their feedback during the writing process.

Nov 11 2019
Nov 11

Table of Contents

What is a Rich Text Editor?
The Modern Rich Text Editor and Emerging Challenges
How we Evaluated Rich Text Editors
Why Tag1 Selected ProseMirror
Conclusion

Among all of the components commonly found in content management systems (CMSs) and typical editorial workflows, the rich text editor is perhaps the one that occupies the least amount of space but presents the most headaches due to its unique place in content architectures. From humble beginnings in discussion forums and the early days of the web and word processing, the rich text editor has since evolved into a diverse range of technologies that support a lengthening list of features and increasingly rich integrations.

Recently, Tag1 embarked on an exploration of rich text editors to evaluate solutions for a Fortune 50 company with demanding requirements. In this blog post, we’ll take a look at what impact the choice of a rich text editor can have down the line, some characteristics of the modern rich text editor, and Tag1’s own evaluation process. In the end, we discuss some of the rationales behind Tag1’s choice of ProseMirror as a rich text editor and some of the requirements leading to a decision that can serve as inspiration for any organization.

At its core, a rich text editor enables content editors not only to insert and modify content but also to format text and insert assets that add to the content in question. They are the toolbars that line every body field in CMSs, allowing for a rich array of functionality also found in word processors and services like Google Docs. Most content editors are deeply familiar with basic formatting features like boldfacing, italicization, underlining, strikethrough, text color, font selection, and bulleted and numbered lists.

There are other features that are considered table-stakes for rich text editors, especially for large organizations with a high threshold for formatting needs. These can include indentation (and outdent availability), codeblocks with syntax highlighting (particularly for knowledge bases and documentation websites for developers), quotations, collapsible sections of text, embeddable images, and last but not least, tables.

While these features comprise the most visible upper layer of rich text editors, the underlying architecture and data handling can be some of the most challenging elements to implement. All rich text editors have varying degrees of customizability and extensibility, and all editors similarly have different demands and expectations when it comes to how they manage the underlying data that ultimately permits rich formatting. In the case of Tag1’s top Fortune 50 customer, for example, the ability to insert React-controlled views and embedded videos into content ended up becoming an essential requirement.
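As an illustration of how such an embed can work, ProseMirror’s node views let the editor delegate rendering of a node to arbitrary code. The sketch below (the reactEmbed node type and VideoCard component are hypothetical) hands a node’s DOM over to React:

```typescript
import type { Node } from 'prosemirror-model'
import React from 'react'
import ReactDOM from 'react-dom'
import { VideoCard } from './VideoCard' // hypothetical React component

const nodeViews = {
  // Invoked whenever a "reactEmbed" node is drawn in the document.
  reactEmbed(node: Node) {
    const dom = document.createElement('div')
    // ProseMirror manages the placeholder element; React renders inside it.
    ReactDOM.render(React.createElement(VideoCard, node.attrs), dom)
    return { dom, destroy: () => ReactDOM.unmountComponentAtNode(dom) }
  },
}
// Passed when constructing the view: new EditorView(place, { state, nodeViews })
```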

Whereas most rich text editors available in the late 1990s and early 2000s trafficked primarily in basic formatting, larger editorial organizations have much higher expectations for the modern rich text editor. For instance, while many rich text editors historically focused solely on manipulating HTML, the majority of new rich text editors emerging today manipulate structured data in the form of JSON, presenting unique migration challenges for those still relying on older editors.
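For teams facing that migration, the first step typically looks something like the following sketch, which parses stored HTML into ProseMirror’s JSON document model (using the stock basic schema purely for illustration):

```typescript
import { DOMParser as PMDOMParser } from 'prosemirror-model'
import { schema } from 'prosemirror-schema-basic'

// Parse legacy HTML markup into a structured ProseMirror document.
const container = document.createElement('div')
container.innerHTML = '<p>Hello <strong>world</strong></p>'
const doc = PMDOMParser.fromSchema(schema).parse(container)

// The result serializes to JSON rather than an HTML string.
console.log(JSON.stringify(doc.toJSON(), null, 2))
// => { "type": "doc", "content": [{ "type": "paragraph", "content": [...] }] }
```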

Today, few if any robust rich text editors support swapping documents among HTML, WYSIWYG, Markdown, and other common formats. Any conversion among these formats results in some information loss due to differences in the formatting options each supports. As an illustrative example, a WYSIWYG document can include formatting features that are unsupported in Markdown, such as additional style information or even visually encoded traits like the width of a table column. Converting from one document format to another preserves the majority of information, but data loss from unsupported features is inevitable.
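A small sketch with prosemirror-markdown shows where the loss occurs: serialization can only emit what the target format can express, so anything outside Markdown’s vocabulary is dropped at that step.

```typescript
import { defaultMarkdownParser, defaultMarkdownSerializer } from 'prosemirror-markdown'

// Parse Markdown into a ProseMirror document, then serialize it back out.
const doc = defaultMarkdownParser.parse('A *formatted* paragraph')
const markdown = defaultMarkdownSerializer.serialize(doc)
// Round-tripping pure Markdown is lossless, but attributes with no Markdown
// equivalent (table column widths, text color, custom styles) would be
// silently dropped by this serialization step.
```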

Moreover, as rich text editors become commonplace and the expectations of content editors evolve, there is a growing desire for these technologies to be accessible for users of assistive technologies. This is especially true in large companies such as Tag1’s Fortune 50 client, which must provide for content editors living with disabilities. Rich text editors today frequently lack baseline accessibility features such as ARIA attributes for buttons in editorial interfaces, presenting formidable challenges for many users.
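Fixes at this level are often small. Here is a hedged sketch (the toolbar selector is hypothetical) of the kind of baseline markup many editors omit:

```typescript
// Give a formatting button an accessible name and toggle state so screen
// readers can announce more than an unlabeled icon.
const boldButton = document.querySelector<HTMLButtonElement>('.toolbar .bold')
if (boldButton) {
  boldButton.setAttribute('aria-label', 'Bold')
  boldButton.setAttribute('aria-pressed', String(boldButton.classList.contains('active')))
}
```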

Tag1 evaluated a range of rich text editors, including ProseMirror, Draft.js, CKEditor 5, Quill, Slate, and TipTap. Our mission was to find a solution that would be not only familiar to content editors accustomed to word processors and Google Docs but also customizable and robust in handling the underlying data. But there were other requirements as well that were particularly meaningful to the client for whom Tag1 performed this evaluation.

An important first requirement was the ability of the chosen rich text editor to integrate seamlessly with collaborative editing solutions like Yjs and Collab out of the box. Because of the wide-ranging use of open-source projects at the organization, a favorable license was also of great importance, allowing teams to leverage the project in various ways. Finally, other characteristics such as plugin availability, an active contributor community, and some support for accessibility were considered important during the evaluation.

As mentioned previously, other requirements were specific to the customer in question, including native mobile app support, which would allow for mobile editing of rich text, a feature otherwise common in many responsive-enabled CMSs; embedding of React view components, which would provide small but rich dynamic components within the body of a content item; and the ability to annotate content with comments and other notes of interest to content editors.

The table below displays the final results of the rich text editor evaluation and illustrates why Tag1 ultimately selected ProseMirror as its editor of choice for this project.

[Comparison table: ProseMirror, Draft.js, CKEditor 5, Quill, Slate, and TipTap graded against the requirements above.]

* Doesn’t support the feature yet, but it could be implemented (additional effort and cost)
** Comments are part of the document model (requirements dictate they not be)
*** Per CKEditor documentation; needs to be verified (see review process below)
⑅ In-depth accessibility reviews must be completed before we can grade

Ultimately, Tag1 chose ProseMirror as the rich text editor for its upcoming work with a top Fortune 50 company. Developed by Marijn Haverbeke, the author of CodeMirror, one of the most popular code editors for the web, ProseMirror is a richly customizable editor that also offers an exhaustive and well-documented API. In addition, Haverbeke is known for his commitment to his open-source projects and his responsiveness in the active and growing ProseMirror community. As those experienced in open source know well, a robust and passionate contributor community does wonders to lower implementation and support costs.

Out of the box, ProseMirror is neither opinionated about the aesthetics of its editor nor especially feature-rich. This is in fact a boon for extensibility, as each additive feature of ProseMirror is provided by a distinct module encapsulating that functionality. For instance, core features considered table-stakes among rich text editors today, such as basic support for tables and lists, are part of the core ProseMirror project, while others, like improved table support and codeblock formatting, are available only through community-contributed ProseMirror modules.
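A minimal assembly, assuming the stock modules, shows this à la carte model: the basic schema comes from one package, list nodes from another, and familiar menu and keymap behavior from a third.

```typescript
import { EditorState } from 'prosemirror-state'
import { EditorView } from 'prosemirror-view'
import { Schema } from 'prosemirror-model'
import { schema as basic } from 'prosemirror-schema-basic'
import { addListNodes } from 'prosemirror-schema-list'
import { exampleSetup } from 'prosemirror-example-setup'

// Extend the basic schema with list support drawn from a separate module.
const schema = new Schema({
  nodes: addListNodes(basic.spec.nodes, 'paragraph block*', 'block'),
  marks: basic.spec.marks,
})

// exampleSetup bundles keymaps, menus, and history as plugins.
new EditorView(document.querySelector('#editor')!, {
  state: EditorState.create({ schema, plugins: exampleSetup({ schema }) }),
})
```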

ProseMirror also counts among its many advocates large organizations and publishers that demand considerable heft from their rich text editing solutions. Newspapers like The New York Times and The Guardian, as well as well-known companies like Atlassian and Evernote, are leveraging ProseMirror and its modules. In fact, Atlassian has published the entirety of its ProseMirror modules under the highly permissive Apache 2.0 license.

Many open-source editors, such as Tiptap, the Fiduswriter editor, and CZI-ProseMirror, are themselves built on ProseMirror as a foundation, further evidence that it was a logical choice for Tag1 and part of Tag1’s commitment to enabling innovation in editorial workflows with a strong and stable foundation at their center. Through an integration between ProseMirror and Yjs, the subject of a previous Tag1 blog post on collaborative editing, all requirements requested by the top Fortune 50 company will be satisfied.

Choosing a rich text editor for your editorial workflows is a decision fraught with small differences that carry large implications. While basic features such as simple list and table formatting are now ubiquitous across rich text editors like ProseMirror, Draft.js, CKEditor, Quill, and Slate, the growing demands of our customers obligate us to consider ever more difficult requirements than before. At the request of a top Fortune 50 company, Tag1 embarked on a robust evaluation of rich text editors against unique requirements such as React component embeds, accessibility, and codeblocks with syntax highlighting.

In the end, the team opted to leverage ProseMirror due to its highly active and passionate community and the availability of distinguishing features such as content annotations, native mobile support, and accessibility support. Thanks to its large community and extensive plugin ecosystem, Tag1 and its client can work with a variety of available tools to craft a truly futuristic rich text editing experience for their content editors. As this evaluation indicates, it is always of utmost importance to focus not only on the use cases for required features but also on the users who will rely on the product: the content editors and engineers who need to write, format, and manipulate rich text, day in and day out.

Special thanks to Fabian Franz, Kevin Jahns, Jeffrey Gilbert, and Michael Meyers for their feedback during the writing process.
