Nov 20 2019

We recently needed to import hundreds of projects, and hundreds of thousands of time cards, from Harvest into Drupal. I liked the idea of having a user interface for running the migrations, and was curious to see if the Entity Import module could do the trick.

TLDR: it worked great. Here’s the code. To learn how to write your own source plugin for Entity Import, read on.

Why Entity Import?

What I love about Entity Import is that it provides a UI for running migrations. Further, migrate source plugins for Entity Import have their own custom configuration forms, allowing site admins to configure options for each importer. I envisioned a setup that would let us easily add new Harvest importers through the admin UI, configuring basic parameters like date range and endpoint for each.

The Recipe

To import data from Harvest using Migrate and Entity Import, we needed two main ingredients:

  • A simple REST API client to make authenticated requests to Harvest.
  • An Entity Import / migrate source plugin to pull the data into Drupal.

REST API Client

I won’t go in-depth on the API client in this post, but here’s a great article from my coworker, Joel, that outlines an approach similar to the one I took: Building Flexible REST API Clients in Drupal 8. You can download the code for the simple Harvest API client on GitHub. The end result was a Drupal 8 service I can inject into my migrate plugin, making API requests with a single line of code.

Here are a couple examples:

// Get all active projects.
$data = $this->harvestApiService->get('projects', ['is_active' => TRUE]);

// Get all time cards entered since November 1.
$data = $this->harvestApiService->get('time_entries', ['from' => '2019-11-01']);

Migrate Source Plugin for Entity Import

There are tons of great resources about migrate source plugins on Drupal.org covering everything from using core source plugins (like CSV), to leveraging contributed plugins (see Migrate Plus), to instructions on how to create your own. Writing a source plugin specifically for Entity Import is almost the same process, plus a few key additions.

Choosing the Right Base Plugin

Entity Import has two different options for base plugins. You can extend either one to create your own new source plugin.

Use EntityImportSourceBase for Configurable Importers

The EntityImportSourceBase class extends the core SourcePluginBase. You get all the methods and properties available in core, plus a couple new features specific to Entity Import.

First, you can add your own configuration form with the buildConfigurationForm() method. For my Harvest example, this is the form admins can use to add Harvest’s authentication keys, specify which endpoint the importer will use, and add basic parameters to the request.

Entity Import module - configuration form
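To sketch what such a configuration form might look like in code: the following is a hedged illustration only, not the exact Harvest plugin. It assumes the method lives in a class extending EntityImportSourceBase, that the plugin exposes its saved settings via getConfiguration(), and the field names (account_id, token, endpoint) are made up for the example.

```php
// Sketch only: this method would live inside a custom source plugin class
// extending EntityImportSourceBase. Field names are illustrative.
public function buildConfigurationForm(array $form, FormStateInterface $form_state) {
  $form = parent::buildConfigurationForm($form, $form_state);
  $configuration = $this->getConfiguration();

  $form['account_id'] = [
    '#type' => 'textfield',
    '#title' => $this->t('Harvest account ID'),
    '#default_value' => $configuration['account_id'] ?? '',
  ];
  $form['token'] = [
    '#type' => 'textfield',
    '#title' => $this->t('Harvest personal access token'),
    '#default_value' => $configuration['token'] ?? '',
  ];
  $form['endpoint'] = [
    '#type' => 'textfield',
    '#title' => $this->t('API endpoint (e.g. projects or time_entries)'),
    '#default_value' => $configuration['endpoint'] ?? '',
  ];

  return $form;
}
```

Whatever admins enter here is then available to the plugin when it builds its API requests.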

Second, you can add your own importer form with the buildImportForm() method. This is the form admins will see when they are actually running the importer. For Travis’s original use case of importing CSVs, this form is where users actually upload the data. For my Harvest example, I used this form simply to show users some high-level data about what they are about to import.

Entity Import module importer form

And of course, you can load up the data from your source. This comes directly from the core SourcePluginBase and isn’t specific to the Entity Import classes. But it’s important enough that I had to mention it, since without source data we wouldn’t be importing anything. In my Harvest example, I extended the initializeIterator() method to load the data from Harvest’s API.

Alright, we’re ready to import data. Only problem is, there’s a lot of it. When I configured a Harvest importer for time entries, there were hundreds of thousands of items to import. That’s way too much to try to do in a single HTTP request through the browser. Fortunately, Entity Import has us covered.

Use EntityImportSourceLimitIteratorBase for Batch Processing

The EntityImportSourceLimitIteratorBase class extends EntityImportSourceBase and gives you everything we covered above, plus a very important capability: this class was designed to work with Drupal’s batch API. As the name implies, it uses a “Limit Iterator” to process one segment of the data at a time. Extend this class and voila, your importer will run in batches. In my Harvest example, I successfully imported hundreds of thousands of records into Drupal through the browser. The batch system gives feedback about how things are going and makes sure we’re not doing too much in a single HTTP request.

Using Paginated APIs

We’ve covered the basics of which class to extend to get the results you want. In my case, I extended the EntityImportSourceLimitIteratorBase class so my importer would run in batches. There’s a small problem with that approach, though. The Harvest API, like most other APIs, is paginated. Working with pages isn’t the same as working with limited iterators.

The limited iterator approach works perfectly for CSVs. You can grab the entire list of source row ids at once and then limit processing to one segment at a time, advancing through the entire dataset in batches.

With paginated APIs, you can’t grab the entire list of IDs at once. You are limited to requesting one page at a time. I needed the batch process to advance one page at a time through the entire set of data offered by the API.

While processing paginated APIs works differently than processing limited iterators, the internal methods and properties are similar enough that the EntityImportSourceLimitIteratorBase class still worked great for solving the problem. We just needed to treat the iterator a little differently than other non-paginated sources.

Here’s the initializeIterator() method from the base class. It returns one segment of a complete array, using PHP’s built-in LimitIterator:

  /**
   * {@inheritdoc}
   */
  public function initializeIterator() {
    return new \LimitIterator(
      $this->limitedIterator(), $this->getLimitOffset(), $this->getLimitCount()
    );
  }

And here is the Harvest API version that adds pagination, relying on the limit offset and count to request the right page:

  /**
   * Initialize the migrate source iterator with results from an API request.
   *
   * The request is paginated. We determine the current page based on the
   * limit count and offset. The limit count is stored in configuration.
   * The limit offset is set by the batch processor and increments for
   * each batch (see \Drupal\entity_import\Form\EntityImporterBatchProcess).
   */
  public function initializeIterator() {
    $this->currentPage = ($this->getLimitCount() + $this->getLimitOffset()) 
      / $this->getLimitCount();
    $results = $this->apiRequest($this->currentPage);
    $this->createIterator($results);
    return $this->iterator;
  }
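To make the page arithmetic concrete, here is that same calculation on its own, outside of Drupal. With a limit count of 100 per batch, offsets of 0, 100 and 200 map to pages 1, 2 and 3:

```php
<?php

// Standalone illustration of the page calculation used above.
function currentPage(int $limitCount, int $limitOffset): int {
  return ($limitCount + $limitOffset) / $limitCount;
}

echo currentPage(100, 0) . "\n";   // first batch requests page 1
echo currentPage(100, 100) . "\n"; // second batch requests page 2
echo currentPage(100, 200) . "\n"; // third batch requests page 3
```

Each batch run thus advances exactly one page through the API's result set.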

Wrapping it Up

To learn more about Entity Import and how to both configure and run Drupal migrations through the browser, check out this earlier post from Travis. Grab the latest, complete code for the Harvest API example on GitHub. We’d love to hear if (and how) you’re using Entity Import with custom source plugins. Leave us a comment or drop us a line on our contact form!

Nov 20 2019


Free time over the holiday period? Why not try these tips to improve your Drupal website:

  1. Ensure your Caching is turned on. Drupal comes with some basic tools to help your sites load faster. Find them in the Administration Menu under Configuration > Development.
  2. Make sure you deactivate the user accounts of anyone who leaves the organisation. Every ignored user account is another potential vulnerability.
  3. Check your Analytics every few months. Find out what pages people are visiting and more importantly, what pages people AREN’T visiting.
  4. If your website has a publicly available contact form, make sure you backup and clear the submissions every few months to keep your database size down.
  5. Check your Watchdog log often; it’s a great way of seeing if your site is being targeted by malicious activity. Check for repeat requests to certain URLs such as wp-login.php or install.php.
  6. Give your content editors a hand by configuring your content type form display. Display fields in a logical order, and use the Field Group module to group fields together.
  7. While you are at it, make sure that your WYSIWYG editor is as trimmed down as it can be. A simple editor interface makes everything easier.
  8. If you need to audit some content, don’t be afraid to create a View just for you. It’s an easy way of listing, filtering and exporting page content. Views aren’t just for listing the most recent blog posts.
  9. Check in every few months and make sure your backups are working. Don’t just make sure the files are there; try restoring one to a local instance to make sure everything will work if something goes wrong.
  10. Make sure your site is always patched and up to date. It’s not fun work, but it’s better than getting hacked.
     
Date posted: 20 November 2019

Authored by: Toby Wild

Nov 20 2019

Agiledrop is highlighting active Drupal community members through a series of interviews. Now you get a chance to learn more about the people behind Drupal projects.

In our latest Drupal Community Interview, we spoke with Alex Moreno López, Technical Architect at Acquia. Alex shared with us his thoughts on the future of Drupal and open source, what technologies he's most excited about, and who has inspired him the most in the tech industry. 

1. Please tell us a little about yourself. How do you participate in the Drupal community and what do you do professionally?

I am a Spanish expat living outside of London in a beautiful little town where I have the peace and calm I need to find myself productive, while I stay 20 minutes from London and the same from two main airports for wherever I need to travel for work (which is often nowadays). The last few years I’ve been in different development roles doing PHP, Symfony and Drupal, but also many other things around that ecosystem, like Ansible, Docker, ... and anything I find interesting really.

My recent interests are still around technology, but also about promoting open source and Drupal in circles where we are not normally going to promote it. For example right now I am sitting on a train back from Extremadura Digital Day, where I came to talk about Open Source business models and examples like Red Hat or the Acquia/Drupal environment itself.

I think we have to promote Drupal outside of our own circles, so I am trying to build a plan around that and see where I can get to.

2. When did you first come across Drupal? What convinced you to stay, the software or the community, and why?

Back when I was finishing university I researched a few CMSs. I built a PHP-Nuke site for the LUG (Linux User Group) that I founded (hey friends at GLUEM :-)). During that period it became clear to me that content writing and software engineering (my degree) were the two things I was most passionate about. Finishing university and starting my own business meant that I had to find ways of building sites and apps to fit different purposes in an efficient way. I had suffered some of the pain of JSP and Java building some things while finishing my degree, so I didn't want to make the same mistakes.

That was the point when I finally got in contact with Joomla, WordPress and Drupal. I built things in all of them, but Drupal especially got me hooked with the power and possibilities it was already offering. I think it was somewhere between Drupal 4 and 5. People were like “wow, why use Drupal when you have WordPress, so much simpler”. But that was not the point for me; the point was the potential I saw in there to scale and build so many more complex things than you could have thought of initially.

At some point I decided that I was not brilliant at doing business, I was just doing OK but not brilliant, so I decided to move to the UK to learn more about engineering in the real world, teamwork… and not just business. I've been extremely lucky and blessed to have crossed paths with incredibly talented colleagues with some of the biggest and most complex websites in the world, you all know who you are, thank you all. As they say, the rest is history... for me I guess.

3. What impact has Drupal made on you? Is there a particular moment you remember?

Drupal has allowed me to travel, to live abroad, but I could probably have done the same in any other technology, as some of my friends have. Who knows. The things I keep closer to my heart and mind are the friends that I’ve made and keep making during that trip along this thing called life.

What no other technology has is the sense of community and giving back that the Drupal community has. Open source is going through an important and critical moment. Add community, and together we'll be able to find a path through the problems we are facing across the whole open source community, like the sustainability of the ecosystem on its own.

4. How do you explain what Drupal is to other, non-Drupal people?

Funnily enough and as I was mentioning, I think we have to make a lot more effort to visit places where Drupal is not well known, marketeers, business conferences, tech festivals. Places like that where there are a lot of people that would love to see what you can do with Drupal.

How would I explain it to them? I would say that Drupal is a tool that allows you to materialize your digital dreams without necessarily needing coding knowledge.

This is going to be more and more true and relevant as Drupal 8 and things like Layout Builder mature, and no code or low code is going mainstream. In my opinion if you combine both, the future and possibilities are really exciting, but there is a lot of hard work to do ahead.

5. How did you see Drupal evolving over the years? What do you think the future will bring?

It's a complex time, with lots of strong competition inside and outside our open source world, but we are in a very desirable position. We have to remember that our competitors are not in our own open source world, but out there, and we should make more efforts to build bridges between the communities. That is already happening, with Drupal and Symfony being the clearest example, but this is just the beginning of what can be an unstoppable force, like Linux became at some point in the server market (where it now holds a share in the high nineties).

6. What are some of the contributions to open source code or to the community that you are most proud of?

I admire people not only exclusively in the open source world that have inspired me to be better in different aspects, like Robert Martin (unclebob), Martin Fowler or Kent Beck for how much they have contributed to software and the agile and craftsmanship movements, but also people like Richard Stallman or Miguel de Icaza in the Linux and open source communities.

Then in our little PHP/Drupal world, I am amazed by people like Jeff Geerling in terms of how much they can get done while life happens for other mortals, or the commitment and passion of people working on Drupal core like Gábor Hojtsy, Tim Plunkett, Wim Leers... There are a lot of people in the community who do not get proper recognition, though; to be fair, it would be impossible to name them all if you look at what makes Drupal great (see http://drupalcores.com/index.html).

More than 5500 contributors, some of them with a lot of work invested as I already mentioned, but lots more with small contributions. If you look at the numbers, those core contributors make this thing really work, but they contribute between 0.5% and 3% of the total commits. That’s open source in its purest state. And yes, I agree that the number of commits is not a very reliable measurement tool, but it gives us an idea of what is going on under the hood.

7. Is there an initiative or a project in Drupal space that you would like to promote or highlight?

There is a lot going on at the moment. Some people may already know about Acquia BLT, a tool that eases the pain of building artifacts, deploying to the cloud, automating tasks like testing, etc… I wish I had known about it much earlier, back when we created similar, much inferior custom solutions that became a pain to maintain.

The one I am particularly excited about is Acquia Dev Studio, which is in beta right now but is evolving very quickly. It promises to be the successor to the fantastic Dev Desktop that a lot of people in the community have used, taking that idea a bit further. People are excited about Gatsby or Nuxt.js because they wrap different tools in an opinionated way to save you time; Dev Studio does the same, wrapping local tools like Lando (and DDEV soon) but also everything you need to go from local to prod, like the CI/CD flow and git hooks to ensure the team follows Drupal best practices…

I also maintain the WebP module. If you are interested in performance and making your site faster (which we all should for our users' sake) then you should have a look at this module.

8. Is there anything else that excites you beyond Drupal? Either a new technology or a personal endeavor. 

Software craftsmanship, as I’ve already mentioned, has had a big influence on my way of approaching personal growth and learning new stuff. I am at a moment in my life where dev technology excites me like the first day, but so do all the other (soft?) skills required for developer relations, project management, public speaking, etc. It also helps me find a balance between having my head down in code for hours and learning complementary things, without my head exploding after too many hours focussed on one thing.

And of course, who is not interested in all this stuff going on regarding decoupled technologies, React/Angular, etc.? I just wish I had 40-hour days to be able to learn and do more, which I appreciate can be overwhelming at times (it is for me at least), and one also needs to find hobbies outside of technology: family, exercise, open air and nature, ...

Nov 19 2019

As one of the longest-running and largest Drupal events in the Asia-Pacific region, DrupalSouth is an opportunity for the greater community to come together and celebrate all things Drupal. In 2019, DrupalSouth will be making its way for the first time ever to Hobart, Tasmania, down under Down Under!

With our team distributed throughout Australia, PreviousNext use DrupalSouth as an opportunity to get everyone together in person each year, run a team off-site and dinner, and enjoy the conference together. We're also sponsoring and mentoring at the Sprint Day on Wednesday November 27, prior to the main conference commencing. PreviousNext staff are volunteering on the event organising team, and our sister company, Skpr, is sponsoring Michelle Mannering's keynote presentation on Thursday November 28.

The conference will feature both local and international speakers, including twelve sessions by PreviousNext’s team that were selected this year. The event program committee had no details on the applicants during the selection process, so all sessions were chosen on topic and content alone, with PreviousNext set to present or be involved in the following sessions:

We find DrupalSouth is a perfect opportunity to engage socially, network and mix with our region's active Drupal contributors and community members. The conference will run over two days at the Hotel Grand Chancellor in Hobart, from 28-29 November, preceded on Wednesday 27 Nov by a Code Sprint for those who are keen to join in.


Posted by Lucy Vernon
Administration

Dated 19 November 2019


Nov 19 2019

Heather Rocker, Drupal Association Executive Director, joins Mike Anello to talk about her background, her vision for the Drupal Association, and DrupalCon Minneapolis!

Subscribe

Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Nov 18 2019

Stop me if you've heard this one...

A developer is trying to connect Drupal to Gatsby. Drupal says, "I've got a Related Content field that allows SEVEN different entity types to be referenced, but only THREE slots for content. What do you think?"

Developer says, "Sure, no problem, Drupal!"

Gatsby jumps in and says, "You better make sure that all seven of those entity types are in those three slots."

Developer and Drupal look at each other in confusion. Developer says, "Gatsby, that's not GREAT."

** Laughs go here **

That's the joke meme

What happened?

It turns out that the way Gatsby builds what it calls "nodes" is by querying the data available. This usually works perfectly, but there are occasions where all the data might not be present, so Gatsby doesn't build the nodes, or it builds the nodes but doesn't create all of the fields. Any optional field can fall into this trap if there is no content in that field on a Drupal entity, and if you're building templates for your decoupled site, you might well want some content in that field.

NOTE: A Drupal "node" and a GraphQL "node" are not the same thing. This may take some time to sink in. It definitely did for me! For the remainder of this post I will refer to GraphQL nodes as "nodes" and Drupal nodes as "entities". Everybody got that? Good, moving on.

Because the nodes with your hypothetical content have not yet been created, or the fields on your entity don't have anything in them, Gatsby just passes right over them and acts like they don't even exist.

Commonly, you'll see something like this pop up during gatsby build or gatsby develop:

Generating development JavaScript bundle failed


/var/www/gatsby/src/components/templates/homepage/index.js
  56:9  error  Cannot query field "field_background_media" on type "node__homepageRelationships".

This can be frustrating. And when I say "frustrating" I mean that it's table-flipping, head slamming into desk, begging for help from strangers on the street frustrating. The quick fix for this one is simple: ADD SOMETHING TO THE FIELD. In fact, you don't even have to add it to every instance of that field. You can get away with only putting some dummy content in a single entity and you'll have access to that field in Gatsby.

You could also make it a required field so that there MUST be something in it or else the author can't save. This is valid, but in the event that your project's powers-that-be decide that it should be optional, you may need another alternative. This is where GraphQL schema customization comes in.

GraphQL schema and Gatsby

First, a little primer on GraphQL schema. Outside of Gatsby, a system using GraphQL usually needs to define the schema for GraphQL nodes. Gatsby takes this out of the developer's hands and builds the schema dynamically based on the content available to it from its source, in this case gatsby-source-drupal. However, empty data in the source results in no schema in Gatsby.

Let's look at the fix for the optional image above and why it works:

In gatsby-node.js we build our schema through createPages, createPage, and GraphQL queries. This is also where we're going to place our code to help GraphQL along a little bit. This will live in a function called createSchemaCustomization that we can use to customize our GraphQL schema to prevent empty field errors from ruining our day.

The Gatsby docs outline this in the Schema Customization section, but it didn't really click there for me because the examples are based on sourcing from Markdown as opposed to Drupal or another CMS. Things finally clicked a little when I found this issue on Github that showed how to work around a similar problem when sourcing from WordPress.

Three scenarios and three solutions

Scenario 1: Empty field

This is by far the simplest scenario, but that doesn't make it simple. If you have a basic text field on a Drupal entity that does not yet have any content or at some point may not have any content, you don't want your queries to break your site. You need to tell GraphQL that the field exists and it needs to create the matching schema for it.

Here's how it's done:

exports.createSchemaCustomization = ({ actions }) => {
  const { createTypes } = actions
  const typeDefs = `
    type node__homepage implements Node {
      field_optional_text: String
    }
  `
  createTypes(typeDefs)
}

Now, to walk through it a bit. We're using createSchemaCustomization here and passing in actions. The actions variable is an object that contains several functions so we destructure it to only use the one we need here: createTypes.

We define our types in a GraphQL query string that we lovingly call typeDefs. The first line of the query tells which type we're going to be modifying, in this case node__homepage, and then what interface we're implementing, in this case Node.

NOTE: If you're looking at GraphiQL to get your field and node names, it may be a bit misleading. For example, in GraphiQL and in my other queries node__homepage is queried as nodeHomepage. You can find out what the type name is by running a query in GraphiQL that looks like:

query MyQuery {
  nodeHomepage {
    internal {
      type
    }
  }
}

or by exploring your definitions in GraphiQL.

Next, we add the line field_optional_text: String which is where we define our schema. This is telling GraphQL that this field should exist and it should be of the type String. If there are additional fields, Gatsby and GraphQL are smart enough to infer them so you don't have to redefine every other field on your node.

Finally, we pass our customizations into createTypes and restart our development server. No more errors!
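With the customization in place, a query against the field resolves even when no entity has content in it; the field simply comes back as null instead of breaking the build. For example, using the same names as the snippet above:

```graphql
query {
  nodeHomepage {
    field_optional_text
  }
}
```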

Scenario 2: Entity reference field

Suppose we have a field for media that may not always have content. We still want our queries to work in Gatsby, but if there's only one Drupal entity of this content type and it happens to not have an image attached, then our queries are going to break, and so will our hearts. This requires a foreign key relationship because the entity itself is a node to GraphQL. 

Here's how it's done:

exports.createSchemaCustomization = ({ actions }) => {
  const { createTypes } = actions
  const typeDefs = `
    type node__homepage implements Node {
      relationships: node__homepageRelationships
    }
    type node__homepageRelationships {
      field_background_media: media__image @link(from: "field_background_media___NODE")
    }
  `
  createTypes(typeDefs)
}
Looks a little similar to the previous scenario, but there's a bit more to it. Let's look at the changes.

Instead of defining our custom schema under the type node__homepage implements Node { line, we reference our relationships field which is where all referenced nodes live. In this case, we tell GraphQL that the relationships field is of type node__homepageRelationships. This is another thing you can find using GraphiQL. 

The next section further defines our relationships field. We declare that node__homepageRelationships should have the field field_background_media of type media__image, but what's that next part? That's where our foreign key link is happening. Since field_background_media is an entity reference, it becomes a node. This line links the field to the node created from the field. If you don't include the @link(from: "field_background_media___NODE") in your type definition you'll get a deprecation notice:

warn Deprecation warning - adding inferred resolver for field node__homepageRelationships.field_background_media. 
  In Gatsby v3, only fields with an explicit directive/extension will get a resolver.

Edit: This post was edited to reflect the deprecation notice instead of indicating that @link is optional.

Scenario 3: Related content field with multiple content types

This is the one that had me crying. I mean scream-crying. This is the widowmaker, the noisy killer, the hill to die upon. At least it WAS, but I am triumphant!

Like my horrible joke from the beginning of this post, I have a related content field that is supposed to allow pretty much any content type from Drupal, but only has three slots. I need to have queries available for each content type, but if one doesn't exist on the field, then GraphQL makes a fuss. I can't say for sure, but I've got a feeling that it also subtly insulted my upbringing somewhere behind the scenes.

Now, I'm here to save you some time and potentially broken keyboards and/or monitors.

Here's how it's done:

exports.createSchemaCustomization = ({ actions }) => {
  const { createTypes } = actions
  const typeDefs = `
    union relatedContentUnion =
      node__article
      | node__basic_page
      | node__blog_post
      | node__cat_food_reviews
      | node__modern_art_showcase
      | node__simpsons_references

    type node__homepage implements Node {
      relationships: node__homepageRelationships
    }
    type node__homepageRelationships {
      field_related_content: [relatedContentUnion] @link(from: "field_related_content___NODE")
    }
  `
  createTypes(typeDefs)
}

The biggest changes here are the union and the brackets around the type. From the GraphQL docs:

Union types are very similar to interfaces, but they don't get to specify any common fields between the types.

This means that any type in the union can be returned and they don't need to have the same fields. Sounds like exactly what we need, right?

So we define our union of our six content types and create the relationship using a foreign key link. We use the node created from our field, field_related_content___NODE as our foreign reference, and we get results. However, the brackets around the union are there, and they must do something. In fact they do. They indicate that what will be returned is an array of content in this union, meaning that there will be multiple values instead of one single value returned.
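On the query side, fields of a union type are selected with inline fragments, one per member type you care about; this is standard GraphQL. A sketch, where the selected fields are illustrative and would depend on your content types:

```graphql
query {
  nodeHomepage {
    relationships {
      field_related_content {
        __typename
        ... on node__article {
          title
        }
        ... on node__blog_post {
          title
        }
      }
    }
  }
}
```

Each item in the returned array then carries only the fields of whichever type it actually is, and __typename tells you which fragment matched.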

Summary

This seriously took me much longer than it should have to figure out. Hopefully, you don't suffer my same fate and find this post useful. One thing to note that almost destroyed me is that there are THREE, count them 1, 2, 3, underscores between the field name and NODE in the link. _ _ _ NODE. I only put two when I was fighting this and it cost me more time than I'm willing to admit on a public forum.

Special thanks to Shane Thomas (AKA Codekarate) for helping me sort through this and being able to count to 3 better than me.

Nov 18 2019

In 1992, there was a little known new thing called the world wide web. By 1995, it was a "thing". Now, what exactly do those quotes do to the word "thing"? And what does this have to do with "entities"?  Cue my favorite programming joke.

Q: What are the two hardest problems in computer programming?

A: Naming things, garbage collection, and off-by-one calculations.

All of that joke is true, but it's the first problem that is the point of this blog post. Because "entity" is just a fancy word for "thing", i.e. coders have the same problem naming some "things" as everyone else does.

What is an Entity?

Just as in our vernacular usage, an "entity" is a "thing with something extra", and it's the "extra" that makes it an entity. To make it more concrete: lots of things can be entities, and in CiviCRM, all contacts, contributions, and events are entities. It's not too hard to think of a contact as a "thing", but in fact, a lot more things in CiviCRM are entities: even such abstract notions as relationships, memberships, and participants to an event. You might get carried away and guess that everything is an entity, and it's a reasonable guess, but not quite. For example, an option in a list of options is usually not an entity. You might also be getting confused at this stage about whether a specific contact is an entity or whether "Individuals" is an entity: the answer is that we use "Entity type" for "Individual", whereas a specific contact is an "entity" (of type "Individual").

Who Cares?

And here's why you should care, even if you're not a programmer. In CiviCRM, entities are a core part of the codebase, and that means that what you can do with one entity, you can often do with other entities. So understanding what is an entity, and what you can do with any entity, makes you a better user of CiviCRM.

Again, what is an entity?

If you look up the definition of entity on Wikipedia, there's a good chance you will be no further ahead than I was after reading it. That's because "entities" aren't a specific thing; they are an abstraction. Just like our vernacular use of "thing", a thing becomes a "thing" when it acquires special properties, and those special properties allow us to treat it with a very useful level of abstraction that doesn't need to worry about other details. And now you know why parents often haven't a clue what their children are talking about.

For example, when the web became a "thing", we no longer had to explain what it was; it was a self-explanatory word. Analogously, an entity is a "thing" with certain properties that allow us to write code about it without knowing the specifics of the entity - the entity is self-explanatory enough that we can handle it in an abstract manner.

Again, why should I care?

The best I can do is to provide some examples. Here are three that illustrate the power of the entity in CiviCRM.

1. Custom Fields

You probably know that you can add custom fields to contacts via the web interface in CiviCRM. But that same code works for any entity. So you can add custom fields to things like Relationships, or Recurring Contributions, or ... a surprising number of things you might not even be able to imagine uses for. Go ahead and look at the form where you add a new custom fieldset, and you'll see a long list of things that you can apply it to: those are CiviCRM's entities.

2. CSV entity importer 

Eileen's excellent extension allows you to import data into your CiviCRM for anything that is an entity. So for your highly customized CiviCRM for which you had to add all kinds of custom fields, you can import that content from your old CRM without even writing any custom code. My example of this, and why I'm such a big fan of this extension, was when our clients needed to import recurring contribution data from their old CRM.

3. The API and browser

CiviCRM has an API that leverages the properties of underlying entities. That means if your site has some kind of custom entity, it's automatically exposed via the API and the API browser, even if you didn't know about it, and even though that code was written before you invented your entity.
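As a hedged sketch of what that uniformity looks like in code (the entity and parameter names here are only examples), the same APIv3 call shape works for any entity type:

```php
<?php
// The first argument is the entity type; everything else is uniform.
// Swap 'Contact' for 'Relationship', 'Membership' — or your own
// custom entity — and the same code keeps working.
$result = civicrm_api3('Contact', 'get', [
  'contact_type' => 'Individual',
  'options' => ['limit' => 3],
]);

// APIv3 results always come back in the same envelope, with the
// matched entities keyed by id under 'values'.
foreach ($result['values'] as $contact) {
  print $contact['display_name'] . "\n";
}
```

This is exactly what the API browser generates for you interactively, for every entity it knows about.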

3. CiviCRM Entity in Drupal

The Drupal module "CiviCRM Entity" exposes CiviCRM's entities as Drupal entities. Drupal has its own concept of entity, and it's comparable in concept to a CiviCRM entity. So this module provides some magic glue allowing all of CiviCRM's entities to also be Drupal entities. And that means you get (for example) Drupal Views of any CiviCRM entity.

But we're still stuck with the hardest problems, including naming things. And off-by-one calculations.

Nov 18 2019
Nov 18

Conversations about web accessibility too often miss the bigger picture and the paramount importance of broadening our perspectives for the digital age. 

For this reason, it’s extremely important to understand, before getting into the weeds of compliance, that there are multiple facets of digital accessibility. Standard expectations, such as images with alternative text for those who are visually impaired or video captioning for those who are hearing impaired, account for only a small segment of what’s involved in making a site accessible. 

In fact, accommodations for cognitive disabilities, such as autism or attention deficit disorder, and relatively common conditions, such as dyslexia, represent a huge component of accessibility compliance. 

Disabilities covered by Web Content Accessibility Guidelines (WCAG) can be permanent, temporary or situational; digital accessibility is all encompassing, for all people, of all ages, and benefits everyone. 

A Closer Look

WCAG 2.1 encompasses 78 guidelines, and is built upon the following four principles: 

1. Perceivable

Information and user interface components must be presented to users in ways they can perceive. Information and components cannot be hidden from users. For example, a person who is blind will not be able to perceive the meaning of an image, but with the help of a screen reader that provides a description of the image via alt text, they will be able to perceive its meaning.
29 of the 78 WCAG 2.1 guidelines align with the “Perceivable” principle.
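As a small, hypothetical markup illustration of the alt-text case described above (the file path and wording are placeholders):

```html
<!-- Without alt text, a screen reader can only announce "image". -->
<img src="/sites/default/files/chart.png">

<!-- With alt text, the image's meaning is perceivable to everyone. -->
<img src="/sites/default/files/chart.png"
     alt="Bar chart showing donations rising steadily from 2015 to 2019.">
```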

2. Operable

User interface components and navigation must be operable. Components and navigation are available for all users to interact with. For example, a user who is unable to use a mouse will need to be able to navigate a menu using the keyboard on their computer. 
Another 29 of the guidelines align with the “Operable” principle.

3. Understandable

Information and the operation of the user interface must be understandable and logical to the user. One example of adhering to the Understandable principle is displaying a menu in the same location, in the same way, across a website.
17 of the guidelines align with the “Understandable” principle.

4. Robust

Content must be robust enough that it can be interpreted by a wide variety of user agents, including assistive technologies. Content should not change for users when technology changes. An example is code that is well written and properly structured: there are no duplicate attributes and all tags are properly closed.
The remaining 3 of the guidelines align with the “Robust” principle.
 

Relative Rankings

Some of the guidelines are specific to a particular issue, others are all-encompassing. The most distinct difference among the guidelines, however, concerns the level of importance that has been assigned to them. All guidelines are ranked as either Level A, Level AA, or Level AAA.

Within the spectrum of “must-haves” and “nice-to-haves”, 30 WCAG 2.1 guidelines are considered to be “must-haves” (Level A). An example of a Level A guideline is the prohibition against using color in and of itself (such as a green circle to indicate “good” and a red circle to convey “bad”) without accompanying text or imagery to convey meaning.
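A hypothetical sketch of remediating that green/red circle example: the colored indicators stay, but a text label carries the meaning as well.

```html
<!-- Fails Level A: color alone conveys the status. -->
<span class="status status--green"></span>
<span class="status status--red"></span>

<!-- Passes: each status is also conveyed as text. -->
<span class="status status--green"></span> Good
<span class="status status--red"></span> Bad
```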

For a website or application to be considered in conformance, it must meet all 30 Level A and all 20 Level AA guidelines. The remaining 28 Level AAA guidelines are considered to be helpful, but not essential for accessibility conformance. This is largely due to the depth of development complexity required, and in some cases, the potential workarounds to achieve the same objective.
 

Slow Compliance Climb

Despite clear guidelines and available expertise for achieving web accessibility, businesses have been slow to step up and demonstrate a commitment to digital inclusivity. A recent report analyzing the top 1 million homepages on the web revealed that 99 percent failed to follow even the most widely used accessibility standards. 

Some might have been holding off on moving forward with the expectation that the U.S. Supreme Court would take up the case of Robles v. Domino’s Pizza, and rule that business websites do not constitute standalone public accommodations, and as such, do not need to comply with Title III of the ADA. The high court opted last month, however, not to take up the case, and WCAG 2.1 remains the standard for avoiding a lawsuit based on an inaccessible site or application.
 

This decision needs to be viewed as a loud wake-up call for businesses that want to avoid legal action and unwanted attention to what their websites are lacking.

More so than ever before, businesses that delay in having their sites thoroughly evaluated for accessibility and remediated as necessary, risk being sued. 

While this might seem like an onerous burden, digital accessibility actually represents a significant opportunity. Remediating a site for compliance is about more than code. It involves a re-examination of the range of abilities, an overcoming of assumptions, and an acknowledgement that in our digital world, an inclusive marketplace is not just the right thing to do, it’s good for business. 

Interested in starting a conversation about getting your website and applications into compliance with WCAG 2.1?  We have a whole team of accessibility specialists who can help. Contact us today.

Nov 18 2019
Nov 18

In his State of Drupal keynote at DrupalCon Amsterdam, Dries Buytaert once again showed some tools to use to prepare for Drupal 9, including the Upgrade Status module. To me the process is even more interesting than the tools, because it is entirely different from the last upgrade. As I wrote last week, you now make a lot of incremental improvements on your existing Drupal 8 site that make the gap to Drupal 9 significantly smaller.

It is a new mindset to look at your Drupal 8 site for improvements to make in preparation for Drupal 9, and we have tools to identify those improvements. However, Dries also mentioned that we are not quite there yet in automating those improvements. Back in May, Dezső Biczó of Pronovix built out a proof-of-concept integration with Rector that implements a sample set of refactors, including changes for global constants and best-effort solutions to some EntityManager uses, a UrlGeneratorTrait rector and a drupal_set_message() rector. While the extent of the impact of the global constant rectors is not yet known, because our tools cannot yet find those usages, the rest of the implemented rectors definitely tap into the top list of deprecated APIs.

Unfortunately, shortly after he posted his blog post about the proof of concept, Dezső did not have time for this project anymore. I think this tool could be of huge help in removing deprecated API use in your projects, making them more modern Drupal 8 projects (and more, if not already entirely, compatible with Drupal 9). However, we need contributors to cover more of the deprecated APIs, especially around file_*() and db_*() functions. If we can figure out rectors for most of the top 15 errors (and Dezső already did some of them), we could cover over half of all deprecated API use in contributed projects:
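As a rough sketch of what the workflow looks like (the package name follows the drupal8-rector project; the exact commands and paths are assumptions that should be checked against its README):

```shell
# Install the proof-of-concept Rector integration as a dev dependency.
composer require --dev drupal8-rector/drupal8-rector

# Preview the suggested deprecation fixes without changing any files.
vendor/bin/rector process web/modules/custom/mymodule --dry-run

# Apply the fixes once the diff looks right.
vendor/bin/rector process web/modules/custom/mymodule
```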

Donut chart of top 15 usages of deprecated APIs on drupal.org

To top that off, I also think a simplytest.me style online service would be very useful to run drupal8-rector on your drupal.org project and suggest a patch to apply to remove deprecated APIs. Even better if it allows custom code to be uploaded so people can use it for that too. The more we can automate these tasks the easier the transition of Drupal 8 to 9 will be.

Submit issues and pull requests in the drupal8-rector project to improve API coverage. Look for me on Drupal slack if you get stuck and I'll try to help. I'd also love to talk to you if you want to set up an automated service. Let's do this!

Nov 17 2019
Nov 17

All Drupal core initiatives with leads attending DrupalCon Amsterdam took part in an experimental PechaKucha-style keynote format (up to 15 slides each, 20 seconds per slide):

Drupal 8’s continuous innovation cycle resulted in amazing improvements that made today’s Drupal stand out far above Drupal 8.0.0 as originally released. Drupal core initiatives played a huge role in making that transformation happen. In this keynote, various initiative leads will take turns to highlight new capabilities, challenges they faced on the way and other stories from core development.

I represented the API-First Initiative and chose to keep it as short and powerful as possible by only using four of my five allowed minutes. I focused on the human side, with cross-company collaboration across timezones to get JSON:API into Drupal 8.7 core, how we invited community feedback to make it even better in the upcoming Drupal 8.8 release, and how the ecosystem around JSON:API is growing quickly!

It was a very interesting experience to have ten people together on stage, with nearly zero room for improvisation. I think it worked pretty well — it definitely forces a much more focused delivery! Huge thanks to Gábor Hojtsy, who was not on stage but did all the behind-the-scenes coordination.

Conference: DrupalCon Amsterdam

Location: Amsterdam, Netherlands

Date: Oct 28 2019 - 13:30

Duration: 4 minutes

Nov 17 2019
Nov 17

It actually all started several weeks before DrupalCon. I was chatting on Slack with Aleksi Peeples, who, if you don't already know, is the author of the Data Transform Plugin and a huge contributor to the Drupal theming ecosystem. I had assumed that people like him always attended, but he explained that as a freelancer it was out of his budget. It’s my belief that it's extremely important for contributors like Aleksi to attend DrupalCons, since many important discussions and decisions are made at the events themselves. Additionally, being physically present at the contribution sprints makes it possible for contributors like Aleksi to sit down with, let’s say, Tim Plunkett, to discuss Layout Builder (which they did). But that wouldn’t have happened if I hadn’t spoken up! I explained all this to Baddy, who in turn talked to Ryan Szrama from Centarro. Several emails later, Aleksi had a ticket! Thank you, Ryan!! 

You can learn more about Aleksi and his contributions to Drupal on his blog: https://www.aleksip.net/

Nov 17 2019
Nov 17

In the past four years that I have been building component-based themes for Drupal projects I have never created a separate component library. And I have never used the same component templates anywhere other than in the Drupal theme they were originally developed for.

The projects I have worked on have been small, at least in the sense that in most of them I have been the only developer. My clients have never required building a design system, style guide, pattern library or a component-based theme.

So, despite all this, why have I wanted to use a component-based approach?

Prototyping and developing a Drupal theme without Drupal

The single most compelling reason for me to develop component-based themes has been the ability to prototype and develop most of the theme independently of Drupal, right from the beginning of a project.

Clients often do not have a clear idea of what they actually want, and static wireframes or Photoshop comps, even with variations for mobile and desktop, leave too much to the imagination. Prototyping and presenting designs in real browsers, in different viewport sizes, can be really helpful in understanding possibilities, limitations and requirements.

Using actual theme templates for prototyping means that a lot, if not most, of the work from this stage is usable for the final implementation.

I have also found that client requirements can change a lot at this stage, and that these changes can affect the back-end implementation. The ability to discuss and quickly modify a real web page prototype even before installing Drupal can save a lot of work on the back-end as well.

A shared way of thinking

Building something from components is essentially a very simple idea. Practically everyone knows Lego bricks and how you can build larger structures from different kinds of smaller bricks. Using Pattern Lab or similar tools it is possible to show theme component templates in isolation, something that is not trivial to do in Drupal itself. When components are identified, named and discussed together with the client, designing and building layouts for pages can become much easier.

Also, while it is not something that can be easily measured and compared, I do believe that starting from a more generic, component-based mindset instead of a Drupal-specific one can bring better results.

So what about all the extra work?

When the initial work has already been done in a component-based starter theme, there is very little work required to start prototyping component templates and full web pages in a tool like Pattern Lab.

Component-based Drupal themes are not so different from vanilla Drupal themes. The main difference is that currently in most, if not all, component-based themes regular Drupal templates contain a Twig include statement to include a namespaced component template, which is organized in a separate folder structure. I would say that this is more of an organizational difference, both mentally and in the file system, than anything else.
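For illustration, such an include might look like this (the component name and the variable mapping are hypothetical, and the `@components` namespace is assumed to be registered by a module such as Components):

```twig
{# node--article--teaser.html.twig (hypothetical): the Drupal template
   only maps Drupal variables to the component template's variables. #}
{% include "@components/card/card.twig" with {
  "title": label,
  "url": url,
  "image": content.field_image,
} only %}
```

The `only` keyword keeps the component isolated from the rest of the Drupal template context, which is what makes the same component template usable in Pattern Lab with demo data.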

I have found that most of the extra effort in component-based theming comes from creating demo data for an external tool like Pattern Lab. However, compared with the effort required to arrive at a shared vision based on iterations of static files, or prototyping with something else than the theme templates, creating demo data could actually take a lot less work.

A thing worth noting is that component-based theming and the use of external tools and demo data are not an all-or-nothing approach. It is possible to use both vanilla Drupal templates and component templates in the same theme, and to create as much or as little demo data as required.

Separate components and component libraries

Theme components can also be developed, packaged, distributed and installed separately from a Drupal theme, either individually or as a component library. Doing so opens up a lot of possibilities that might be especially interesting to large organizations and projects, and even more so if the components use only vanilla Twig.

Unfortunately common best practices for such technical implementations and related workflows do not yet exist. If you only need to build one specific component-based Drupal theme, separate components and component libraries are unlikely to be worth the extra effort.

Drupal themes and component libraries

https://www.aleksip.net/drupal-themes-and-component-libraries

Nov 16 2019
Nov 16

A few weeks ago we announced our diversity scholarship for DrupalSouth. Before announcing the winner I want to talk a bit about our experience doing this for the first time.

DrupalSouth is the largest Drupal event held in Oceania every year. It provides a great marketing opportunity for businesses wanting to promote their products and services to the Drupal community. Dave Hall Consulting planned to sponsor DrupalSouth to promote our new training business - Getting It Live training. By the time we got organised, all of the (affordable) sponsorship opportunities had gone. After considering various opportunities around the event, we felt the best way of investing a similar amount of money and giving something back to the community was through a diversity scholarship.

The community provided positive feedback about the initiative. However, despite the enthusiasm and working our networks to get a range of applicants, we only ended up with 7 applicants. They were all guys. One applicant was from Australia; the rest were from overseas. About half the applicants dropped out when contacted to confirm that they could cover their own travel and visa expenses.

We are likely to offer other scholarships in the future. We will start earlier and explore other channels for promoting the program.

The scholarship has been awarded to Yogesh Ingale, from Mumbai, India. Over the last 3 years Yogesh has been employed by Tata Consultancy Services’ digital operations team as a DevOps Engineer. During this time he has worked with Drupal, Cloud Computing, Python and Web Technologies. Yogesh is interested in automating processes. When he’s not working, Yogesh likes to travel, automate things and write blog posts. Disclaimer: I know Yogesh through my work with one of my clients. Sometimes the Drupal community feels pretty small.

Congratulations Yogesh! I am looking forward to seeing you in Hobart.

If you want to meet Yogesh before DrupalSouth, we still have some seats available for our 2 day git training course that’s running on 25-26 November. If you won’t be in Hobart, contact us to discuss your training needs.

Share this post

Nov 15 2019
Nov 15

1 minute read Published: 15 Nov, 2019 Author: Colan Schwartz
Drupal Planet , SaaS , OpenSaaS , DevOps , Aegir , OpenStack , Presentations

On Friday, June 14th, I presented this session at Drupal North 2019. That’s the annual gathering of the Drupal community in Ontario and Quebec, in Canada.

I realized I hadn’t posted this information yet, so I’m doing so now.

Session information:

Are you (considering) building a SaaS product on Drupal or running a Drupal hosting company? Have you done it already? Come share your experiences and learn from others.

Among other things, we’ll be discussing:

…and any other related topics that come up.

A video recording of my presentation is available on:

My slides (with clickable links) are available on our presentations site.

The article Drupal North 2019: Drupal SaaS: Building software as a service on Drupal first appeared on the Consensus Enterprises blog.

We've disabled blog comments to prevent spam, but if you have questions or comments about this post, get in touch!

Nov 15 2019
Nov 15

Our normally scheduled call to chat about all things Drupal and nonprofits will happen Thursday, November 21, at 1pm ET / 10am PT. (Convert to your local time zone.)

Feel free to share your thoughts and discussion points ahead of time in our collaborative Google doc: https://nten.org/drupal/notes

We have an hour to chat so bring your best Drupal topics and let's do this thing!

Some examples to get your mind firing: how do I recreate [feature] on my Drupal 7 site in Drupal 8? I need to explain [complicated thing] to a non-technical stakeholder -- any advice? How can I get Drupal and my CRM to play nicely?

This free call is sponsored by NTEN.org but open to everyone.

ATTENTION: New call-in information this month. We're moving to Zoom, y'all!

  • Join the call: https://zoom.us/j/308614035
    • Meeting ID: 308 614 035
    • One tap mobile
      • +16699006833,,308614035# US (San Jose)
      • +16465588656,,308614035# US (New York)
    • Dial by your location
      • +1 669 900 6833 US (San Jose)
      • +1 646 558 8656 US (New York)
  • Follow along on Google Docs: https://nten.org/drupal/notes
  • Follow along on Twitter: #npdrupal

View notes of previous months' calls.

Nov 15 2019
Nov 15

The “lucky dozen” is here for us — our amazing InternetDevels development company is turning twelve today! The 12 months since last year’s birthday have rushed by like the wind. But each month had something nice, useful, and outstanding to look back on. Let’s begin!

Some nice moments from these 12 months to remember

November: supporting awards for talented students

Every future web development Jedi begins their star journey somewhere! We want to support them in any way we can.

In late November 2018, InternetDevels together with its programming school OxIT sponsored the “Student of the Year” awards where gifted students from our region’s universities competed in several categories.

InternetDevels sponsors the Student of the Year award

December: creating the unique Drupal City map

In December, as the holidays were approaching, we were filled with a fairy-tale spirit. Together with our child company WishDesk, we created a fascinating map of Drupal City. It is made completely of real Drupal module, theme, or distribution names.

The map is not just for entertainment — our mission is also to help everyone learn Drupal modules and be inspired by Drupal’s greatness.

Drupal City Map made of Drupal module, theme, and distribution names

January: celebration with our long-term customer YaWave

Long-term customers are our great treasure. One of them is the revolutionary crowdfunding platform YaWave.

They celebrated the 6th anniversary of their amazing web development project. Our dedicated team is happy to be a vital part of it! So we enjoyed this anniversary celebration in the beautiful city of Lviv in January.

InternetDevels celebrates anniversary with a long-term customer YaWave

February: great results in Drupal Code Sprint

Drupal Global Contribution Weekend is always a chance to give your time and effort to Drupal. In early February, our developers participated in a large code sprint devoted to this annual event.

One of them was an official mentor, organizing the work of others, and sharing his skills. Another was especially active in creating patches and made 10+ of them on drupal.org within a day.

InternetDevels participates in Drupal Code Sprint

March: a journey to the Mountain Camp in Davos

Awakening spring, Swiss mountains, and expert Drupal talks make an amazing combination! In March, our representatives visited the Drupal Mountain Camp in Davos.

“Open source on top of the world” is the event’s official slogan. On top of the world is exactly how you feel when you are part of such a great event and listen to what’s hot in Drupal development.

InternetDevels at Drupal Mountain Camp in Davos

April: our child company Drudesk turns 4!

April is always a special month for us. Four years ago, all website owners across the globe received a real gift. This is because we established the website support service Drudesk.

Attentive, caring, and professional guys and girls from Drudesk can solve any issue with your website. Bug fixes, all kinds of audits and optimization, new functionality, and anything else you wish can simply be sent as a task.

Drudesk website support agency turned 4 years old

May: the breathtaking heights of the Himalayas

Mountains do not allow everyone to approach them. You need to have courage, patience, and respect. When the Himalayas open their heart to you, they teach you to be thankful for every moment and feel the joy of life.

Our senior developer Zemelia was full of impressions after he had reached the Everest Base Camp at 5,364 meters and climbed the 5,643-meter Kala Patthar.

InternetDevels developer travels to the Himalayas

June: interviews with amazing Drupal contributors

In June, along with our restless child company WishDesk, we prepared an outstanding blog post. Step by step, we interviewed Drupal contributors and experts, asking them 3 questions about Drupal.

We are very grateful to all the experts who took their time and shared their thoughts with our readers! Check out what Drupal contributors say about Drupal. This includes incredibly famous names in the Drupal world, as well as our company’s developers.

Interviews with outstanding Drupal contributors

July: skill-sharing at Drupal Cafe

Our senior developer from the JYSK team, Daylioti, gave a speech at Drupal Cafe Lutsk together with his Kyiv colleague. The guys chose a really interesting topic: “Cloud Native: Basics and Tools for Drupal.”

Daylioti paid special attention to describing the Docker commander tool that he created. It offers a user interface to run Docker commands and it is widely used on the project.

InternetDevels developer speaks at Drupal Cafe Lutsk

August: maintaining useful Drupal modules

Our developers never forget to contribute their modules to drupal.org. Among the ones actively maintained this year are:

  • UpTime Widget that lets you stay aware of your website’s uptime by connecting to the popular free uptime monitoring service
  • Registration Confirm Email Address that creates an additional field in the registration form to confirm your email and avoid mistakes

To make the modules more user-friendly and clear for everyone to use, the developers helped describe them in the blog posts and updated the drupal.org documentation.

Uptime Widget Drupal module

Registration Confirm Email Address Drupal module

September: training with our long-term customer JYSK

We want to mention again our long-term and respected customer — the international retail chain JYSK. At the end of September, key representatives of their team travelled from Denmark to Kyiv.

Their goal was an interesting training course on RTTM (Reduce Time To Market) and Scrum with us. Our guys and girls showed great results in the training games!

This year, JYSK reached a significant milestone — a record turnover of 1 billion Danish kroner a year. Properly built e-commerce websites help businesses flourish, and we are happy to be involved!

InternetDevels and customer JYSK hold a training together

October: preparing this year’s 7th study course

We have already mentioned our programming school OxIT, whose mission is to raise new web development talents. Since last year’s birthday in November, we have held 7 educational courses (in Drupal 8, quality assurance, and project management).

We prepared the 7th course for this year in October. We are happy to share skills, give chances, and invite the best candidates to proceed with an internship.

OxIT programming school by InternetDevels


Our child company WishDesk also prepared a cool animation featuring these vivid moments. So, despite the fact that the 12 months have rushed by like 12 hours around the clock face, they are impossible to forget!

Happy birthday to the InternetDevels team and the company’s founder Leviks! Let’s treasure every moment and believe the best ones are yet to come!

Nov 15 2019
Nov 15

The 3rd edition of the International Splash Awards, inaugurated at Drupal Europe 2018 as the European Splash Awards, was held in conjunction with the 2019 European DrupalCon in Amsterdam. We were happy to be able to help out with the event by being the “platinum-diamond-main-and-only sponsor”, as the organizers so eloquently put it in the post-event newsletter.

The Splash Awards are intended to recognize and reward companies for their groundbreaking projects in different categories, ranging from strictly digital ones such as design and e-commerce, to broader ones, such as healthcare and non-profit. This year again featured a diverse selection of innovative web projects, each outstanding in its own way and deserving recognition.

But, not everyone can take home the trophy every time, as each of the ten categories could have only one winner - no easy task for the jury to select their favorites, and we feel the votes must have definitely been super close.

Taking this into account, the winners, as well as all the nominees, deserve that much more praise and recognition for a job well done - huge congratulations to all, from us and (we’re sure) from the entire Drupal community! You can check out the winners of each category here.

We managed to speak with some of them about their winning projects. Here is what Ronald van Rooijen, Managing Partner at FRMWRK, had to say about earning first place in the Government and Public Services category with the SIM platform developed by their team:

And here is a quote by Synetic’s CTO Daniël Smidt commenting on winning the Tools & Apps category with the Digital Asset Management system that they’ve built for Bejo:

We’re glad to see the regional Splash Awards, which originally started in 2014 in the Netherlands, successfully transition to the international level. We hope this is only the beginning for their international edition, as they are an excellent way of showcasing and celebrating Drupal’s diverse capabilities and the companies that champion them.

If you’d like to support the International Splash Awards, financially or otherwise, and help secure their sustainability, reach out to the team through the Drupal Slack #splashawards channel to learn how you can get involved. 

Credit for the two winners photos, as well as the group cover photo, goes to Rachel Viersma.
 

Nov 15 2019
Nov 15

Recently we got an exciting task, to scrape job descriptions from various web pages. There’s no API, no RSS feed, nothing else, just a bunch of different websites, all using different backends (some of them run on Drupal). Is this task even time-boxable? Or just too risky? From the beginning, we declared that the scraping tool that we would provide works on a best-effort basis. It will fail eventually (network errors, website changes, application-level firewalls, and so on). How do you communicate that nicely to the client? That’s maybe another blog post, but here let’s discuss how we approached this problem to make sure that the scraping tool degrades gracefully, and that in case of an error, debugging is as simple and as quick as possible.

Architecture

The two most important factors in the architectural considerations were the error-prone nature of scraping and the time-intensive process of downloading something from an external source. At the highest level, we split the problem into two distinct parts. First, we process the external sources and create a coherent set of intermediate XMLs, one for each of the job sources. Then, we consume those XMLs and turn them into content in Drupal.

So from a bird's-eye view, it looks like this:

Phase I: scheduling and scraping

Phase II: content aggregation
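As an illustration, the intermediate XML that decouples the two phases could look roughly like this (the element names and values here are hypothetical, not the actual schema used in the project):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jobs source="example-job-board">
  <job>
    <guid>https://example.com/jobs/1234</guid>
    <title>Senior Drupal Developer</title>
    <location>Remote</location>
    <description><![CDATA[<p>Job details scraped from the source page.</p>]]></description>
  </job>
</jobs>
```

Having a stable, self-defined format like this means the consuming side never needs to know which scraping quirks produced it.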

That was our plan before the implementation started; let’s see how well it worked.

Implementation

Scraping

You just need to fetch some HTML and extract some data, right? Almost correct. Sometimes you need to use a fake user-agent. Sometimes you need to download JSON (that’s rendered on a website later). Sometimes, if you cannot pass cookies in a browser-like way, you’re out of luck. Sometimes you need to issue a POST to view a listing. Sometimes you have an RSS feed next to the job list (lucky you, but mind the pagination).
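As a sketch of the kind of glue involved, here is a minimal, hypothetical Python version of the "fake user-agent plus listing parser" combination. The user agent string, CSS class, and markup are illustrative only; our actual implementation used Drupal plugins in PHP.

```python
from html.parser import HTMLParser
from urllib.request import Request, urlopen

# A browser-like user agent, since some job boards reject the default
# Python one. The exact value is illustrative.
BROWSER_UA = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36"

def fetch(url: str) -> str:
    """Download a page, sending a fake user agent."""
    req = Request(url, headers={"User-Agent": BROWSER_UA})
    with urlopen(req, timeout=30) as resp:
        return resp.read().decode("utf-8", errors="replace")

class JobLinkParser(HTMLParser):
    """Collects the href of every <a class="job"> in a listing page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "job" in (attrs.get("class") or "").split():
            self.links.append(attrs.get("href"))

def extract_job_links(listing_html: str) -> list:
    """Parse a listing page and return the job detail links found in it."""
    parser = JobLinkParser()
    parser.feed(listing_html)
    return parser.links
```

For example, `extract_job_links('<a class="job" href="/jobs/1">Dev</a>')` returns `['/jobs/1']`.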

For the scraping, we used Drupal plugins, so all the different job sites are discoverable and manageable. That went well.

What did not go so well was that we originally planned to use custom traits, like JobScrapeHtml and JobScrapeRss, but with all the vague factors enumerated above, there’s a better alternative. Goutte is written to handle all the various gotchas related to scraping. It was a drop-in replacement for our custom traits, and it helped with sites that expected more browser-like behavior (i.e. no more strange error messages in response to the HTTP request).

Content synchronization

For this, we planned to use Feeds, as it’s a proven solution for this type of task. But wait, only an alpha version for Drupal 8? Don’t be scared: we didn’t even need to patch the module.

Feeds took care of the fetching part; using Feeds Ex, we could parse the XMLs with a bunch of XPath expressions; and Feeds Tamper handled the last data transformations we needed.

Feeds and its ecosystem are still the best way to handle recurring, non-migrate-like data transfer into Drupal. The only downside is the lack of extensive documentation, so check out our sample in case you would like to do something similar.

Operation

This system has been in production for six months already. The architecture proved to be suitable. We even got a request to include those same jobs on another Drupal site. It was a task with a small time box to implement, as we could just consume the same set of XMLs with an almost identical Feeds configuration. That was a big win!

Where we could have done a bit better was the logging. The first phase, where we scraped the sites and generated the XML, contained only minimal error handling and logging. Gradually, over the weeks, we added more and more calls to record entries in the Drupal log, as network issues can be environment-specific. It’s not always an option to simply replicate the environment on localhost and give it a go to debug.

Also, in such cases you should be the first one who knows about a failure, not the client, so centralized log handling (Loggly, Rollbar, or you name it) is vital. You can then configure various alerts for any failure related to your scraping process.

However, when we got a ticket that a job was missing from the system, the architecture again proved useful. Let’s check the XMLs first. If the job is in the XML, we know that it’s somehow a Feeds-related issue. If it’s not, let’s dive deep into the scraping.
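That triage step can even be automated. Here is a small sketch, assuming a hypothetical intermediate schema of `<jobs><job><title>…</title></job></jobs>`, that tells you which half of the pipeline to debug:

```python
import xml.etree.ElementTree as ET

def job_in_feed(xml_text: str, title: str) -> bool:
    """Return True if a <job> with the given <title> exists in the feed.

    If this returns True but the job is missing from the site, the
    problem is on the Feeds side; if it returns False, debug the scraper.
    """
    root = ET.fromstring(xml_text)
    return any(
        (job.findtext("title") or "").strip() == title
        for job in root.iter("job")
    )
```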

The code

In the days of APIs and the web of things, it’s sometimes still relevant to use the antique, not-so-robust method of scraping. The best thing that can happen is to re-publish the data in a fully machine-consumable form publicly (long live the open data movement).

The basic building blocks of our implementation (ready to spin up locally using DDEV) are available at https://github.com/Gizra/web-scraping; forks, pull requests and comments are welcome!

So start the container (cd server && ./install -y) and check for open positions at Gizra by invoking the cron (ddev . drush core-cron)!

Nov 14 2019
Nov 14


Images on websites can be a huge pain when you are optimizing a site. We want our images to render as crisply as possible, but we also want our sites to load as fast as possible. Content creators will often ask "what size image should I upload?" and, with the thought of some tiny image being rendered at full-screen width, pixelated out of control, we'll answer "as large as you've got". The content creator will then upload a 2 MB JPEG, and the load time and network request size will increase dramatically.

by Nick Fletcher / 15 November 2019

Responsive images can be a decent solution for this. A front-end developer can achieve this in many ways. A popular way is to use a <picture> element containing multiple <source> elements with srcset and `media` attributes, plus a default <img> tag.

I'll explain how we can do that in Drupal 8. 
The scenario I'm trying to set up in this example is a paragraph that references a media entity with an image field.
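For reference, the kind of markup we're aiming for looks roughly like this (the image style names and file paths are illustrative, not what Drupal will generate verbatim):

```html
<picture>
  <source media="(min-width: 1024px)"
          srcset="/files/styles/large_1x/profile.jpg 1x,
                  /files/styles/large_2x/profile.jpg 2x">
  <source media="(min-width: 768px)"
          srcset="/files/styles/medium_1x/profile.jpg 1x,
                  /files/styles/medium_2x/profile.jpg 2x">
  <img src="/files/styles/fallback/profile.jpg" alt="Profile image">
</picture>
```

The browser picks the first <source> whose media query matches (and the srcset candidate matching the display density), falling back to the <img> when nothing matches or <picture> is unsupported.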

Tutorial

Enable the responsive images module from Drupal core.

  1. To enable the responsive image module, go to Admin > Configuration.
  2. Click the checkbox next to Responsive Image.
  3. Click Install.

This module may already be installed on your project, so just head to Admin > Config and ensure that the Responsive Image module is enabled.

Responsive Image Styles Config

Add / Confirm breakpoints

The default theme will already have a breakpoints YAML file. If you're using a custom theme you'll need to make sure you have a breakpoints YAML file for it. This should exist at themes/{theme-name}/{theme-name}.breakpoints.yml, where {theme-name} is the name of your theme.
Create or open the file and configure your breakpoints. There should be a few breakpoints in there and they'll look something like this:

custom_theme.small:
  label: small
  mediaQuery: "(min-width: 0px)"
  weight: 1
  multipliers:
    - 1x
    - 2x
custom_theme.medium:
  label: medium
  mediaQuery: "(min-width: 768px)"
  weight: 2
  multipliers:
    - 1x
    - 2x
custom_theme.large:
  label: large
  mediaQuery: "(min-width: 1024px)"
  weight: 3
  multipliers:
    - 1x
    - 2x

You can add as many breakpoints as you need to suit your requirements. The weight should go from 0 for the smallest breakpoint to the highest number for the largest breakpoint. The multipliers are used to provide crisper images for HD and retina displays.

Configure Image Styles (sizes)

Head to Admin > Config > Media > Image Styles and create a size for each breakpoint.

Configuring Image Styles UI

  1. Click Add image style.
  2. Give it an Image style name and click Create new style (e.g. Desktop 1x, Desktop 2x, Tablet 1x etc...).

    Create Image Style UI

  3. Select an effect e.g. Scale or Scale and crop.

    Edit Image Style UI

    Edit Image Style Effect Options

  4. Set a width (height is calculated automatically) or width and height when cropping.

    Image Style Effect UI

  5. When creating multiple styles, just use the breadcrumbs to get back to Configure Image Styles.

    Image Styles Breadcrumb Item

When you have created all the sizes for your responsive format you can move on to the next step.

Create a responsive Image Style

Head to Admin > Config > Media > Responsive Image Styles

  1. Click Add responsive image style to create a new one.
  2. Give it a label (for example if it's for a paragraph type called profile_image then use that as the name)

    Add responsive image style UI

  3. Select your theme name from the Breakpoint Group

    Breakpoint Group Selection

  4. The breakpoints will load, open the breakpoints that this image style will use and check the radio next to Select a single image style or use multiple.

    Configuring the breakpoint image style

  5. Select the image style from the Image style dropdown (these are the styles we created in the previous step).

    Image style selection UI

  6. Set the Fallback image style (this will be used where the browser doesn't understand the <source> tags inside the <picture> element. It should be the most appropriate size to use if you could only pick one across all screen sizes)

    Fallback image style

Add a new view mode for media entities

Head to Admin > Structure > Display Modes > View Modes, click 'Add new view mode' and add your display mode. In this instance, we'll use 'Profile image' again.

Adding a view mode

Update the display of the image for the entity type

Head to Admin > Structure > Media Types > Image > Manage display

  1. On the default tab click on Custom display settings at the bottom and check the new 'Profile Image' view mode and then Click Save

    Custom display settings

  2. Click on the tab that matches your new display type (in my example it's Profile image)
  3. On the Image fields row change the Label to Hidden and the Format to Responsive Image.

    Configuring the image format

  4. Click on the cog at the end of the row.

    Row configuration cog

  5. Under Responsive image style select your style.

    Format Configuration

  6. Select where the file should link to (or Nothing), Click update
  7. Click Save

Update your Paragraph type to use the new display format

Go to Structure > Paragraph Types > {type} > Manage Display

  1. Find the row with the field displaying your media entity and change the format to Rendered Entity
  2. Click the gear icon to configure the view mode by selecting your view mode from the list (in this instance profile image)

    Paragraph type display format

  3. Click Save

Testing Our Work

At this point you should be all set.

  1. Create an example page
  2. Select your paragraph to insert into the page
  3. Add an image
  4. Save the page and view it on the front end
  5. Inspect the image element and ensure that a <picture> element is rendered with <source> elements and a default <img>, and that when you resize the browser you see what you expect.
    A profile Image
  6. To inspect further select the Network tab in your developer tools and filter by images. Resize the browser window and watch as new image sizes are loaded at your defined breakpoints.
     

Tagged

Responsive Design, Responsive Images, Media Entities, Drupal 8, Drupal Theming
Nov 14 2019
Nov 14

As part of being a distributed company, we've found that using Know Your Team on a regular basis helps us communicate outside of projects, tasks, and deadlines. Every Friday we respond to the question “What’s something you figured out this week?”, and here we’re collating the best of our answers.

NUMBER 2–OCTOBER 2019

Chris

I learned how to work with JSON arrays to store and manage data efficiently in a database field.

Jason

I learned about Event Subscribers and how to intercept every request and do stuff to change responses that are sent for certain requests.

Jill

CSS Scroll Snap controls the location on a page the user scrolls to. This is helpful if a page has defined sections. The ten7.com site is an example of a site containing pages with sections where scroll snap may improve the UI.

<div class="page">
  <div class="snap"></div>
  <div class="snap"></div>
</div>

.page {
  /* The container must itself scroll for snapping to apply. */
  height: 100vh;
  overflow-y: scroll;
  scroll-snap-type: y mandatory;
}
.snap {
  scroll-snap-align: start;
}

Les

When debugging email events in SendGrid [our cloud email provider], it's difficult to get the Activity log to show you exactly what happened when. It's easier to export the CSV version of the log and go through it that way. Also, SendGrid's knowledge base didn't tell me anything about cancelling deferred emails. But their chat support was prompt, fast, and helpful.

Lex

This week I figured out that Apache Solr configuration in Flight Deck was causing my local environment to be extremely slow. I had been dealing with the slowness for quite some time and thought it was just Docker on Mac, but it turns out we needed to make adjustments!

Nov 14 2019
Nov 14

Submitted by karthikkumardk on Thursday, 14 November 2019 - 11:14:32 IST

The SimpleTest module has been deprecated in Drupal 8, and will be removed in Drupal 9.

Production sites should never have SimpleTest module installed since it is a tool for development only.

A SimpleTest contributed module is now available for those who wish to continue using SimpleTest during the transition from Drupal 8 to 9.

However, developers are encouraged to write tests with PHPUnit, and to use the phpunit command line test runner: https://www.drupal.org/docs/8/phpunit/running-phpunit-tests

Note that the final decision to deprecate SimpleTest still has some dependencies on the in-progress Drupal 9 issue, so the exact details of the deprecation may be revised (#3075490: Move simpletest module to contrib).

Thanks :)

Nov 14 2019
Nov 14

by David Snopek on November 13, 2019 - 8:44pm

You may have noticed that today the Drupal Security Team marked 16 modules as unsupported, due to the module maintainer not fixing a reported security vulnerability after a sufficiently long period of time.

Among those modules were a few very popular ones, like Administration Views and Nodequeue, which report ~118k and ~40k sites using them, respectively.

Every time a popular module is unsupported, there's a certain amount of panic and uncertainty, so I wanted to address that in this article, both for the Drupal community at large and for our customers in particular, because we promise to deploy security updates the same day they are released.

Read more to see our perspective!

Why does this happen? Why all at once?

For modules supported by the Drupal Security Team, security vulnerabilities are reported to a private issue tracker.

The Security Team's job is to make sure that the process is followed, security advisories (SA's) get written, and ultimately publishing the SA's, releases and the various notifications.

It's actually the module maintainer's job to fix the vulnerability, write the first draft of the SA, and create the release.

If a module maintainer is unresponsive, the security team will tell the module maintainer that they have 2 weeks to post an update. Since this is an Open Source project, and most maintainers are volunteers doing this in their free time, they frequently get much more than 2 weeks.

When there aren't any other big security releases, and the security team has time to do it (the security team members are also volunteers!), they go through all the modules that got the 2-week warning and didn't respond, and mark all of them as unsupported. That's why these tend to come out in groups.

How should I react if I depend on one of these modules?

In an SA about a module becoming unsupported, there are two recommended actions:

  • Uninstall the module, or
  • Apply to become the maintainer of the module and fix the vulnerability

There's actually a 3rd possible action that can apply to very popular modules:

  • Wait for a new maintainer to be found and a fixed version to be released

When modules become unsupported, especially if it's a popular module, there can be some amount of panic. And in a way, that's good!

For popular modules, the security team will sometimes search for a new maintainer through private channels before marking a module as unsupported, but it's hard to do because the details need to remain confidential.

If that doesn't work out, marking a module as unsupported is a more public way to advertise for a new maintainer - and because of the added urgency/panic - it frequently works!

Now, if you depend on a module, and have the skill and time to become the maintainer, you should absolutely consider applying to maintain the module!

But if you don't, it can be reasonable to wait a few days to see if a new maintainer steps up, before taking the drastic step of uninstalling the module. Right when the SA is published, the details of the vulnerability are still private, and unless stated in the SA, there aren't any known instances of the vulnerability being exploited in the wild.

However, after a period of time, the security team may publish the details of the vulnerability publicly, or attackers may figure it out on their own, so you can't wait forever! But waiting a few days is not an unreasonable calculated risk.

How do we handle this with our customers?

It depends on the situation.

For a very popular module, where we're pretty certain a new maintainer will step up, we'll usually wait a few days. (We also have an idea of the security risk in the vulnerability, because we have a security team member on staff, so that's taken into account as well.)

In some cases, we may even offer to maintain the module ourselves.

And in other cases, like for really unpopular modules, we'll tell our customers that they should uninstall it and replace it with an alternative module or custom code. That's something we can help our customers do; however, it's only included in our support plans that have a bucket of developer hours. For other customers, we'll need to do a paid mini-project for a few hours.

For our Drupal 6 Long-Term Support (D6LTS) customers, we also have the option of waiting until the vulnerability is made public, and then we can make a D6LTS patch for it, even if the Drupal 7 or 8 module is still unsupported without an active maintainer.

But no matter the situation, we hope to help our customers be ready for whatever may happen. :-)

Nov 14 2019
Nov 14

PreviousNext builds open source digital platforms for large scale customers, primarily based on Drupal and hosted using Kubernetes, two of the world’s biggest open source projects. With our business reliant on the success of these open source projects, our company is committed to contributing where we can in relation to our relatively small size. We get a lot of questions about how we do this, so are happy to share our policies so that other organisations might adopt similar approaches.

We learned early on in the formation of PreviousNext that developers who are passionate and engaged in open source projects usually make great team members, so wanted to create a work environment where they could sustain this involvement. 

The first step was to determine how much billable work on client projects our developers needed to achieve in order for PreviousNext to be profitable and sustainable. The figure we settled on was 80%, or 32 billable hours out of a full-time week, as the baseline. Team members then self-manage their availability to fulfil their billable hours and can direct up to 20% of their remaining paid availability to code contribution or other community volunteering activities. 

From a project management perspective, our team members are not allowed to be scheduled on billable work more than 80% of their time, which is then factored into our Agile sprint planning and communicated to clients. If certain team members contribute more billable hours in a given week, this just accelerates how many tickets we can complete in a Sprint.

If individual team members aren’t involved or interested in contribution, we expect their billable hours rate to be higher in line with more traditional companies. We don’t mandate that team members use their 20% time for contribution, but find that the majority do due to the benefits it gives them outside their roles. 

These benefits include:

  • Learning and maintaining best-practice development skills based on peer review by other talented developers in the global community.
  • Developing leadership and communication skills with diverse and distributed co-contributors from many different cultures and backgrounds.
  • Staying close to and often being at the forefront of new initiatives in Drupal, whether it be as a core code contributor or maintaining key modules that get used by hundreds of thousands of people. For example, the Video Embed Field that Sam Becker co-maintains is used on 123,487 websites and has been downloaded a staggering 1,697,895 times at the time of publishing. That's some useful code!  
  • Developing close working relationships with many experienced and talented developers outside PreviousNext. In addition to providing mentoring and training for our team, these relationships pay dividends when we can open communication channels with people responsible for specific code within the Drupal ecosystem.
  • Building their own profiles within the community and being considered trusted developers in their own right by demonstrating a proven track record. After all, it's demonstrated work rather than the CV that matters most. This often leads to being selected to provide expert talks at conferences and obviously makes them highly desirable employees should they ever move on from PreviousNext.
  • If our team members do get selected as speakers at international Drupal events, PreviousNext funds their full attendance costs and treats their time away as normal paid hours.
  • Working on non-client work on issues that interest them, such as emerging technologies, proof of concepts, or just an itch they need to scratch. We never direct team members that they should be working on specific issues in their contribution time.

All of these individual benefits provide clear advantages to PreviousNext as a company, ensuring our team maintains an extremely high degree of experience and elevating our company’s profile through Drupal’s contribution credit system. This has resulted in PreviousNext being consistently ranked in the top 5 companies globally that contribute code to Drupal off the back of over 1,000 hours of annual code contribution.

In addition to this 20% contribution time, we also ensure that most new modules we author or patch during client projects are open sourced. Our clients are aware that billable time during sprints will go towards this and that they will also receive contribution credit on Drupal.org as the sponsor of the contributions. The benefits to clients of this approach include:

  • Open sourced modules they use and contribute to will be maintained by many other people in the Drupal community. This ensures a higher degree of code stability and security and means that if PreviousNext ceases to be engaged the modules can continue to be maintained either by a new vendor, their internal team or the community at large.
  • Clients can point to their own contribution credits as evidence of being committed Drupal community supporters in their own right. This can be used as a key element in recruitment if they start hiring their own internal Drupal developers.

Beyond code contributions, PreviousNext provides paid time to volunteer on organising Drupal events, sit on community committees, run free training sessions and organise code sprints. This is then backed by our financial contributions to sponsoring events and the Drupal Association itself.

None of this is rocket science, but as a company reliant on open source software we view these contribution policies and initiatives as a key pillar in maintaining PreviousNext's market profile and keeping the Drupal ecosystem in which our business operates healthy.

We're always happy to share insights into how your own organisation might adopt similar approaches, so please get in touch if you'd like to know more.

Comments

This is a brilliant contribution thanks Owen! And I argue that culture building is not something a rocket scientist can do.


Nov 13 2019
Nov 13
[embedded content]

Yjs, one of the most powerful and robust frameworks for real-time collaborative editing, enables developers to add shared editing capabilities to any application with relatively little effort. To make Yjs so easy to use and extend, the framework abstracts away the complexities, many moving pieces, and deep technical concepts involved in empowering offline-first, peer-to-peer, real-time collaboration.

In this Tag1 Team Talk, we continue our deep dive into Yjs with the founder and project lead of this collaborative editing framework to learn more about how it enables not only collaborative text editing but also collaborative drawing, collaborative 3D modeling, and other compelling use cases. In particular, we focus on the three core features that make up any great collaborative editing application: awareness, offline editing, and versioning with change histories.

Join Kevin Jahns (Real-Time Collaboration Systems Lead at Tag1 Consulting and Founder and Project Lead of Yjs), Fabian Franz (Senior Technical Architect and Performance Lead at Tag1 Consulting), Michael Meyers (Managing Director at Tag1 Consulting), and moderator Preston So (Contributing Editor at Tag1 Consulting and Principal Product Manager at Gatsby) for the second part of our deep dive series on Yjs directly from its creator (and excited learners).

A Deep Dive into Yjs Part 1
Evaluating Real Time Collaborative Editing Solutions for a Top Fortune 50 Company
Modern Rich Text Editors: How to Evaluate the Evolving Landscape
Nov 13 2019
Nov 13

PreviousNext continue to be major contributors to the development and promotion of Drupal 8. As participants of the Drupal 8.8.0 Beta Testing Program, we thought it would be useful to document the steps we took to update one of our sites on Drupal 8.7 to the latest 8.8.0 beta.

Every site is different, so your mileage may vary, but it may save you some time.

by Kim Pepper / 13 November 2019

Drupal 8.8 is a big release, with a number of new features added, and APIs deprecated to pave the way to a Drupal 9.0 release. Thankfully, the upgrade process was fairly straightforward in our case.

Upgrade PathAuto

The first step was to deal with "The path alias core subsystem has been moved to the 'path_alias' module". This meant some classes were moved to different namespaces. In order to make things smoother, we installed the latest version of the Pathauto module and cleared the caches.

composer require drupal/pathauto
drush cr

Core Dev Composer Package

We use the same developer tools for testing as Drupal core, and we want to switch to the new core composer packages, so first we remove the old one.

composer remove --dev webflo/drupal-core-require-dev

Update Patches

We sometimes need to patch core using cweagans/composer-patches. In the case of this site, we are using a patch from "ckeditor_stylesheets cache busting: use system.css_js_query_string", which needed to be re-rolled for Drupal 8.8.x. We re-rolled the patch, then updated the link in the extra/patches section.
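For anyone unfamiliar with cweagans/composer-patches, that link lives under extra/patches in composer.json and looks something like this (the local patch path here is illustrative):

```json
{
    "extra": {
        "patches": {
            "drupal/core": {
                "ckeditor_stylesheets cache busting: use system.css_js_query_string": "patches/ckeditor-stylesheets-cache-busting-8.8.x.patch"
            }
        }
    }
}
```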

Update Drupal Core and Friends

In our first attempt, composer could not install due to a version conflict with some Symfony packages (symfony/finder, symfony/filesystem and symfony/debug). These are transitive dependencies (we don't require them explicitly). Our solution was to explicitly require them (temporarily) with versions that Drupal core is compatible with, then remove them afterwards.

First require new Drupal core and dependencies:

composer require --update-with-dependencies \
  drupal/core:^8.8@beta \
  symfony/finder:^3.4 \
  symfony/filesystem:^3.4

Second, require new core-dev package and dependencies:

composer require --dev --update-with-dependencies \
  drupal/core-dev:^8.8@beta \
  symfony/debug:^3.4

Lastly, remove the temporary required dependencies:

composer remove -n \
  symfony/finder \
  symfony/filesystem \
  symfony/debug

Update the Database and Export Config

Now our code is updated, we need to update the database schema, then re-export our config. We use drush_cmi_tools, so your commands may be different, e.g. just a drush config-export instead of drush cexy.

drush updb
drush cr
drush cexy

Settings.php

We also need to update our settings.php file now that "The sync directory is defined in $settings and not $config_directories".

This is a trivial change from:

$config_directories['sync'] = 'foo/bar';

to:

$settings['config_sync_directory'] = 'foo/bar';

We also need to move the temporary files directory from config to settings. This changes from:


$config['system.file']['path']['temporary'] = 'foo/bar';

to:


$settings['file_temp_path'] = 'foo/bar';

Final Touches

In order to make sure our code is compatible with Drupal 9, we check any custom code for use of deprecated APIs with the excellent PHPStan and Matt Glaman's mglaman/phpstan-drupal. (Alternatively, you can use Drupal Check.)
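A minimal phpstan.neon along these lines might look like the following; the include paths follow the standard layout of those packages, and how you point the analysis at your custom code depends on your project:

```neon
includes:
  - vendor/mglaman/phpstan-drupal/extension.neon
  - vendor/phpstan/phpstan-deprecation-rules/rules.neon
```

With that in place, running phpstan's analyse command against your custom modules directory reports custom code calling deprecated APIs.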

We were using an older version that was incompatible with "nette/bootstrap": ">=3", so we needed to remove that from the conflict section and do the remove/require dance once again.

composer remove --dev \
  phpstan/phpstan-deprecation-rules \
  mglaman/phpstan-drupal

composer require --dev --update-with-dependencies \
  phpstan/phpstan-deprecation-rules:^0.11.2 \
  mglaman/phpstan-drupal:^0.11.12

And that's it! Altogether not too painful once the composer dependencies were all sorted out. As we are testing the beta, some of these issues may be addressed in future betas and RCs.

I hope you found this useful! Got a better solution? Let us know in the comments!

Update: Added additional settings changes.

Tagged

Drupal Beta Testing
Nov 12 2019
Nov 12

In July of last year I started a new job as a developer with a new agency. During my first week, in between meetings, HR trainings, and all the other fun things that happen during onboarding, I was introduced to the preferred local development environment that was being used on most of the projects.

It was lightweight, based on Docker, it ran easily, and it was extremely easy to configure. Prior to this, I had bounced around from local setup to local setup. My local dev environment resume included such hits as MAMP, WAMP, Acquia Dev Desktop, Kalabox, VAMPD, DrupalVM, Vagrant, ScotchBox, VirtualBox, native LAMP stacks, and everything in between. All of them had their strengths and weaknesses, but none of them really had that spark that really hooked me.

Enter Docksal.

When I first started using Docksal, I thought it was just like any other setup, and to a point, it is. It creates a reusable environment that can be shared across multiple developers and set up to mimic a hosting provider to a certain extent, but the two things that really grabbed me were how easy it was to get started and how fast it was compared to other systems. Docksal has one key, killer feature in my opinionated mind: the entire application is written in Bash. The primary binary (which may or may not be the name of my upcoming one-man, off-Broadway, off-any-stage show) begins with #!/usr/bin/env bash and runs on any system that has the bash executable, which encompasses Linux (of course), macOS, and now Windows thanks to WSL and the ability to add Ubuntu.

One thing that was missing, though, was a training guide. It has AMAZING documentation, available at https://docs.docksal.io, including a great getting started walkthrough, but for someone just starting out using it who might not have guidance and support from people they work with, it might take a little getting used to.

If you know me, you know that I enjoy talking at conferences. I’ve given over two dozen presentations at several types of events from local meetup groups to national level conferences. If you don’t know me, you just learned something new about me. Since I enjoy talking in front of people so much, the next logical step was to find something I’m familiar with and make a training of it. Turns out, I’m familiar with Docksal.

I submitted my pitch for a training to NEDCamp, the New England Drupal Camp, and they accepted it. Since I now had a reason to write a training, I began writing a training. Initially, I started with a very high-level outline, and eventually built a framework for my training. Thanks to the nature of open source, I was able to use many of the features that https://docs.docksal.io already had in order to make my training seem a little familiar to current users and easily accessible to new users.

The first go at this training will be at NEDCamp 2019 on Friday, November 22nd. This will be the first time a dedicated training spot has been used to train on Docksal, and I'm extremely excited to see how it goes and how to improve. After that training, I will make my handbook available online, eventually to be merged into the Docksal Github repo as part of the documentation. I have had help from numerous people in building this training, especially from the Docksal maintainers, Sean Dietrich, Leonid Makarov, Alexei Chekiulaev; folks who have reviewed what I've written so far, Dwayne McDaniel and Wes Ruvalcaba; and people who have challenged me to learn more about Docksal, whose numbers are too high to list them all.

If you're interested in learning how to use Docksal or what it's all about, consider attending my training at NEDCamp on November 22nd. You can find all the details on the NEDCamp training page, and if you can't make it, be sure to watch for the handbook to be released soon.

Since I'm still working on the finishing touches, why not take the time to let me know in the comments what you would like to get out of this type of training, what you wish you had known when learning Docksal or a similar product, and where you feel extra attention should be placed.

Nov 12 2019
Nov 12

Yjs is a very compelling choice when it comes to building real-time collaborative applications. A powerful open-source, offline-first, peer-to-peer shared editing framework that is modular and extensible, Yjs enables developers to easily add real-time collaborative capabilities to any type of application. Rich text editing, drawing, 3D modeling... the list of potential use cases for Yjs is lengthy and remarkable. But how did it get started, what is the algorithm it's based on, and what does the future hold for Yjs? In this Tag1 Team Talk, hear directly from Kevin Jahns, the creator of Yjs, as we dive deeply into the foundations of Yjs and where it's headed.

Join moderator Preston So (Contributing Editor, Tag1 Consulting) and guests Kevin Jahns (Real Time Collaboration Systems Lead, Tag1; Creator of Yjs), Fabian Franz (Senior Technical Architect and Performance Lead, Tag1), and Michael Meyers (Managing Director, Tag1) for an insider’s perspective on the past, present, and future of Yjs.

A Deep Dive into Yjs Part 2
Evaluating Real Time Collaborative Editing Solutions for a Top Fortune 50 Company
Modern Rich Text Editors: How to Evaluate the Evolving Landscape
Nov 12 2019
Nov 12

As you know, Composer is a great tool for managing packages and their dependencies in PHP, and Drupal 8.8 is going to be even more Composer-compatible. You can find a Composer cheat sheet below.

Installing dependencies

composer install

Downloads and installs all the libraries and dependencies outlined in the composer.lock file. If the file does not exist it will look for composer.json and do the same, creating a composer.lock file.

composer install --dry-run

Simulates the install without installing anything.

This command doesn't change any files; it won't even create composer.lock if it is not present.

composer.lock should always be committed to the repository. It has all the information needed to bring the local dependencies to the last committed state. If that file is modified in the repository, you will need to run composer install again after fetching the changes to update your local dependencies to match it.

Adding packages

composer require vendor/package 
## or the short form
composer req vendor/package 

Adds package from vendor to composer.json’s require section and installs it

composer require vendor/package --dev

Adds package from vendor to composer.json’s require-dev section and installs it. This command changes both the composer.json and composer.lock files.

Removing packages

composer remove vendor/package

Removes vendor/package from composer.json and uninstalls it. This command changes both the composer.json and composer.lock files.

 

Updating packages

composer update

Updates all packages

composer update --with-dependencies

Updates all packages and their dependencies

composer update vendor/package

Updates a certain package from vendor

composer update vendor/*

Updates all packages from vendor

composer update --lock

Updates the composer.lock hash without updating any packages. This command changes only the composer.lock file.

PS: all of the above commands also work for installing Drupal. Just keep in mind that for all Drupal modules the vendor is drupal, so to install a module, replace the vendor name in the commands above with drupal. For example:

To install the Nothing module, whose URL is https://www.drupal.org/project/nothing, the command is

composer req drupal/nothing

Development with Composer and packages is much, much easier than with legacy code management.

Additional link:

https://getcomposer.org/download/

Nov 12 2019
Nov 12

In our upcoming webinar – Test-Driven Development with Drupal & Cypress.io – we will demonstrate how to use Cypress (and the respective integration module for Drupal) for Behaviour-Driven Development (BDD) to elevate the quality of your code and project.

Evolving from test-driven development practices, Behavior-Driven Development is an Agile software development process that supports collaboration among developers, stimulating teams to use conversation and concrete examples to formalize a shared understanding of how the application should behave.

This webinar will include:

  • A brief introduction to BDD and testing in general
  • An introduction to Cypress
  • Installing and using Cypress module for Drupal
  • Adding E2E tests to a custom module
  • Testing a full Drupal project

Please join us to learn how to get more out of your latest project with Test-Driven Development and Behavior-Driven Development. 

Date: 21 November 2019
Time: 4 - 5pm CET

REGISTER NOW!
 


Nov 12 2019
Nov 12

Why migrate to Drupal 8

As you might already have understood, we're on the team "Migrate to Drupal 8 as soon as possible". We find it rather profitable from a business perspective, and wise and time-saving from a technical point of view. So why do we preach moving to Drupal 8 to all Drupal 6 and Drupal 7 website owners?

Reason 1 - the obvious one

Drupal 9.0.0 will be based on the last Drupal 8 version. What does it mean for a Drupal website owner? 

In terms of development effort: the upgrade to Drupal 9 for Drupal 8 website owners will be as easy as an upgrade to a minor Drupal release. 

Let me explain. There are major Drupal versions - 6, 7, 8. They differ significantly. Within the major versions, there are minor versions (Drupal 8.5, Drupal 8.6, etc) that are being released to introduce some new features.

You need less than a day to upgrade to the minor version while the migration to the major one can take months.

In terms of money: since the upgrade from Drupal 8 to Drupal 9 will take less time, it will cost less as well. So allocate some money for the migration to Drupal 8 in 2020.

Reason 2 - for those who want to avoid the hassle of supporting old versions

Both Drupal 7 and Drupal 8 will reach end of life in November 2021. That means the official Drupal Security Team will no longer maintain these versions or be in charge of keeping them secure. You'll need to find someone who will at least maintain your website's security so it doesn't get hacked.

Having your website on Drupal 8 already, you’ll just take a nice relaxed stroll and migrate to Drupal 9 once it has a stable release.

Reason 3 - for Drupal 6 websites’ owners

Drupal 6 websites are quite vulnerable, and we would advise moving to Drupal 8 as soon as possible even if Drupal 9 weren't about to be released.

Here’s why.

  • No more features and improvements for Drupal 6. This version is stalled, and it stalls your business along with it.
  • The patches to Drupal 6 are mainly about security. Bug fixes are unlikely.
  • Drupal 8 is more convenient for developers and easier in terms of development.

Reason 4 - for Drupal 7 websites’ owners

  • Drupal 7 doesn't allow using the Decoupled Drupal approach out of the box, so forget about things like Gatsby. Significant resources and contributed modules are required to make it work.
  • As the developers are focused on Drupal 8, it’s unlikely that Drupal 7 will have new features.
Nov 12 2019
Nov 12

Since Kim Pepper and I co-founded PreviousNext in 2009, our company has put a lot of focus into supporting the Drupal open source project and community at a regional and global level.

by Owen Lansbury / 12 November 2019

This has included a number of key initiatives, including:

  • Providing our team with up to 20% of their paid working hours to contribute code to Drupal core software and contributed modules. This has seen PreviousNext consistently rank in the Top 5 companies contributing code to Drupal at a global level, with several of our individual team members in the Top 100 contributing developers. 
  • Supporting our team to contribute time to voluntary groups that sustain the Drupal community, such as Drupal’s Security Team, running free Drupal training days and organising Drupal events.
  • Donating funds to the Drupal Association's supporting partner program, global initiatives like Promote Drupal, regional conferences and local meetups.
  • Funding our team to travel and speak or participate in regional and global Drupal conferences.

This support plays a key role in PreviousNext’s ability to attract and retain the best Drupal talent, facilitates trusted relationships with key members of the global Drupal community and maintains our reputation as Drupal experts in the eyes of prospective clients. In other words, strong support for Drupal pays dividends to maintain PreviousNext as a sustainable company.

After a decade of leading PreviousNext myself, long-term colleague Jason Coghlan took the reins as Managing Director in late 2018. Jason is responsible for all of PreviousNext’s operations and client engagements, which he manages in concert with our internal Leadership and Delivery teams. My ongoing role is to guide PreviousNext’s strategy, marketing and finances as Chair of our Executive Team, paired with enhanced engagement with the Drupal community.

The first initiative I’ve focused on in 2019 has been the formation of the DrupalSouth Steering Committee. DrupalSouth has been running as an annual conference in Australia and New Zealand since 2008 but had always been reliant on ad-hoc volunteers to take on the significant work to organise and run each event. The Steering Committee’s objective is to provide ongoing consistency for the annual conference whilst spearheading other initiatives that support and grow Drupal’s community and commercial ecosystem in the region. We’ll be presenting our initial ideas at DrupalSouth in Hobart in late November.

I’m also honoured to have been appointed to the global Drupal Association Board of Directors and just returned from my first board retreat before DrupalCon Amsterdam. The board works alongside the new Drupal Association Executive Director, Heather Rocker, on overall strategy and direction for her team of almost 20 staff to implement. I’ve been asked to chair the board’s Revenue Committee that oversees how the DA is funded through event attendance, sponsorships and other sources, and to sit on the Strategic Planning Committee that will define where the association's focus can be best directed. My initial term will run until 2022 with a frequent presence at DrupalCon North America and Europe in coming years.

Drupal Association Board & Staff at the Amsterdam retreat

Whilst in Amsterdam, I also sat in on round table discussions with other local Drupal associations from around the world, sharing ideas about how we can scale community engagement whilst leveraging common approaches and resources. A Supporting Partners round table focused more on the needs of Drupal services vendors and large users and a CEO dinner was a great insight into the state of Drupal businesses around the world. It was inspiring to see how professionally organised the global Splash Awards were and to understand how we might bring the initiative to our local region to recognise world-class projects being developed here. To cap things off, I had a talk accepted where I could share some of PreviousNext's experience winning and retaining long term clients - essentially all the things I wish someone had told me a decade ago!

With the upcoming release of Drupal 9 in mid 2020, there's a high degree of optimism and confidence around Drupal's immediate future. The software is a clear choice for enterprise and large organisations, Drupal services businesses are doing well, and there's a huge number of fresh and enthusiastic members of our community. While there are some clear challenges ahead, I'm excited to be able to play a role in helping solve them at a global and regional level.

If you ever want to connect with me to discuss how I can help with your own Drupal community or business initiatives, feel free to get in touch via Drupal.org or Drupal Slack.

Tagged

Drupal Association, DrupalCon
Nov 11 2019
Nov 11

Table of Contents

What makes a collaborative editing solution robust?
Decentralized vs. Centralized Architectures in Collaborative Editing
Operational Transformation and Commutative Replicated Data Types (CRDT)
Why Tag1 Selected Yjs
Conclusion

In today’s editorial landscape, content creators can expect not only to touch a document countless times to revise and update content, but also to work with other writers from around the world, often on distributed teams, to finalize a document collaboratively and in real time. For this reason, collaborative editing, or shared editing, has become among the most essential and commonly requested features for any content management solution straddling a large organization.

Collaborative editing has long existed as a concept outside the content management system (CMS). Consider, for example, Google Docs, a service that many content creators use to write content together before copy-and-pasting the text into form fields in a waiting CMS. But in today’s highly demanding CMS landscape, shouldn’t collaborative editing be a core feature of all CMSs out of the box? Tag1 Consulting agreed, and the team decided to continue its rich legacy in CMS innovation by making collaborative editing a reality.

Recently, the team at Tag1 Consulting worked with the technical leadership at a top Fortune 50 company to evaluate solutions and ultimately implement Yjs as the collaborative editing solution that would successfully govern content updates across not only tens of thousands of concurrent users but also countless modifications that need to be managed and merged so that content remains up to date in the content management system (CMS). This process was the subject of our inaugural Tag1 Team Talk, and in this blog post, we’ll dive into some of the common and unexpected requirements of collaborative editing solutions, especially for an organization operating at a large scale with equally large editorial teams with diverse needs.

Collaborative editing, simply put, is the ability for multiple users to edit a single document simultaneously without the possibility of conflicts arising due to concurrent actions—multiple people writing and editing at the same time can’t lead to a jumbled mess. At minimum, all robust collaborative editing solutions need to be able to merge actions together such that every user ends up with the same version of the document, with all changes merged appropriately.

Collaborative editing requires a balancing act between clients (content editors), communication (whether between client and server or peer-to-peer), and concurrency (resolving multiple people’s simultaneous actions). But there are other obstacles that have only emerged with the hyperconnectivity of today’s global economy: The ability to edit content offline or on slow connections, for instance, as well as the ability to resynchronize said content, is a baseline requirement for many distributed teams.

The provision of a robust edit history is also uniquely difficult in collaborative editing. Understanding what occurs when an “Undo” or “Redo” button is clicked in single editors without the need for real-time collaboration is a relatively trivial question. However, in collaborative editors where synchronization across multiple users’ changes and batch updates from offline editing sessions need to be reflected in all users’ content, the definition of undo and redo actions becomes all the more challenging.
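To make the difficulty concrete, here is a purely illustrative sketch (not taken from any particular editor) of why collaborative undo must be author-aware: a local undo should revert the local user's most recent operation, not simply the last operation in the shared log.

```python
# Each operation records who made it; in a collaborative editor, "undo" must
# revert the local user's most recent operation, not the newest one in the log.
history = []  # shared operation log: (author, action, payload)

def apply_op(author, action, payload):
    history.append((author, action, payload))

def undo(author):
    """Pop this author's most recent operation, leaving other users' work intact."""
    for i in range(len(history) - 1, -1, -1):
        if history[i][0] == author:
            return history.pop(i)
    return None  # nothing of this author's left to undo

apply_op("alice", "insert", "Hello")
apply_op("bob", "insert", " world")
print(undo("alice"))  # reverts Alice's edit even though Bob's is newer
print(history)        # Bob's operation survives
```

A real implementation must additionally reconcile this per-author history with concurrent and offline changes, which is exactly what makes undo definitions hard in shared editors.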

Moreover, real-time collaborative editing solutions also need to emphasize the collaboration element itself and afford users the ability to understand where other users’ cursors are located in documents. Two of the most fundamental features of any collaborative editing solution in today’s landscape are indications of presence and remote cursors, both characteristics of free-to-use collaborative editing solutions such as Google Docs.

Presence indications allow for users in documents to see who else is currently actively working on the document, similar to the user thumbnails in the upper-right corner of a typical Google Docs document. Remote cursors, meanwhile, indicate the content a user currently has selected or the cursor location at which they last viewed or edited text.

During Tag1’s evaluation of the collaborative editing landscape, the team narrowed the field of potential solutions down to these four: Yjs, ShareDB, CKEditor, and Collab. See below for a comparison matrix of where these real-time collaborative editing solutions stand, with further explanation later in the post.

 

 

Solutions compared: Yjs, ShareDB, CKEditor, and Collab. Criteria included offline editing, decentralized architecture, network agnosticism, shared cursors, presence (a list of active users), commenting, sync after server data loss, support for other collaborative elements (e.g., drawing), and scalability.

License: Yjs, ShareDB, and Collab are MIT-licensed; CKEditor is proprietary (on-prem hosted).

Implementation: Yjs uses CRDT; ShareDB and CKEditor use OT; Collab uses reconciliation.

Sync after server data loss: ShareDB fails with a sync error; with Collab, unsaved changes are lost.

Scalability: with Yjs, many servers can handle the same document; ShareDB scales via locking in the underlying DB; CKEditor is hosted; Collab needs a central source of truth (a single host for each doc), which puts additional constraints on how doc updates are propagated to "the right server".

Currently supported editors: Yjs supports ProseMirror, Quill, Monaco, CodeMirror, and Ace; ShareDB supports Quill, CodeMirror, and Ace; CKEditor supports CKEditor; Collab supports ProseMirror.

Demos: Yjs offers editing, drawing, and 3D model shared state demos; ShareDB offers sync and editing demos; CKEditor offers an editing demo; Collab offers an editing demo in Tip Tap.

Whereas the features within a collaborative editor are of paramount importance to its users, the underlying architecture can also play a key role in determining a solution’s robustness. For instance, many long-standing solutions require that all document operations ultimately occur at a central server instance, particularly in the case of ShareDB and Collab.

While a centralized server does confer substantial advantages as a single source of truth for content state, it is also a central source of failure. If the server fails, the most up-to-date state of the content is no longer accessible, and all versions of the content will become stale. For mission-critical content needs where staleness is unacceptable, centralized servers are recipes for potential disaster.

Furthermore, centralized systems are generally much more difficult to scale, which is an understandably critical requirement for a large organization operating at considerable scale. Google Docs, for example, has an upper limit on users who can actively collaborate. With an increasing number of users, the centralized system will start to break down, and this can only be solved with progressively more complex resource allocation techniques.

For these reasons, Tag1 further narrowed the focus to decentralized approaches that allow for peer-to-peer interactions, namely Yjs, which ensures that documents always remain in sync, as document copies live on each user's own instance as opposed to on a failure-prone central server. This means users can always refer to someone else's instance in lieu of a single authoritative source that may not be available. Resource allocation is also much easier with Yjs because many servers can store and update the same document. It is significantly easier to scale insofar as there is essentially no limit on the number of users that can work together.

The majority of real-time collaborative editors, such as Google Docs, EtherPad, and CKEditor, use a strategy known as operational transformation (OT) to realize concurrent editing and real-time collaboration. In short, OT facilitates consistency maintenance and concurrency control for plain text documents, including features such as undo/redo, conflict resolution, and tree-structured document editing. Today, it is used to power collaboration features in Google Docs and Apache Wave.

Nonetheless, OT comes with certain disadvantages, namely the fact that existing OT frameworks are very tailored to the specific requirements of a certain application (e.g. rich text editing) whereas Yjs does not assume anything about the communication protocol on which it is implemented and works with a diverse array of applications. Yjs leverages commutative replicated data types (CRDT), used by popular tools like Apple Notes, Facebook’s Apollo, and Redis, among others. As Joseph Gentle, a former engineer on the Google Wave product and creator of ShareDB, once wrote:

“Unfortunately, implementing OT sucks. There’s a million algorithms with different tradeoffs, mostly trapped in academic papers. The algorithms are really hard and time consuming to implement correctly. […] Wave took 2 years to write and if we rewrote it today, it would take almost as long to write a second time.”

The key distinction between OT and CRDT is as follows: Consider an edit operation in which a user inserts a word at character position 5 in the document. In operational transformation, if another user adds 5 characters to the start of the document, the insertion is moved to position 10. While this is highly effective for simple plain text documents, complex hierarchical trees such as the document object model (DOM) present significant challenges. CRDT, meanwhile, assigns a unique identifier to every character, and all state transformations are applied relatively to objects in the distributed system. Rather than identifying the place of insertion based on character count, the character at that place of insertion retains the same identifier regardless of where it is relocated to within the document. As one benefit, this process simplifies resynchronization after offline editing.
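The distinction above can be sketched in a few lines of toy Python (illustrative only; real OT and CRDT implementations handle far more cases):

```python
# OT-style (index-based): a concurrent insert at the front shifts later
# positions, so the second operation must be transformed before applying it.
def transform_index(pos, other_pos, other_len):
    """Shift an insertion point right if a concurrent insert landed before it."""
    return pos + other_len if other_pos <= pos else pos

doc = list("hello")
a_pos, a_text = 0, "say "                        # user A inserts at position 0
b_pos = transform_index(5, a_pos, len(a_text))   # user B's position 5 becomes 9
doc[a_pos:a_pos] = a_text
doc[b_pos:b_pos] = "!"
print("".join(doc))  # -> say hello!

# CRDT-style (identifier-based): every character carries a unique id, and an
# insert is anchored to a neighbour's id, so it never needs re-positioning.
crdt = [(i, ch) for i, ch in enumerate("hello")]  # (unique_id, char) pairs

def insert_after(chars, anchor_id, new_id, ch):
    idx = next(i for i, (cid, _) in enumerate(chars) if cid == anchor_id)
    chars.insert(idx + 1, (new_id, ch))

insert_after(crdt, anchor_id=4, new_id=100, ch="!")  # anchor to the final "o"
print("".join(ch for _, ch in crdt))  # -> hello!
```

The CRDT half shows why resynchronization after offline editing is simpler: an anchored insert applies correctly no matter how many other operations arrived first, while the index-based insert must be transformed against every concurrent operation.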

If you want to dive deeper, the Conclave real-time editor (which is no longer maintained and therefore was not considered in our analysis) has another great high-level writeup explaining OT and CRDT. Additionally, you can watch or listen to our deep dive on OT vs. CRDT as part of our recent Tag1 Team Talk, “A Deep Dive into Yjs - Part 1”.

While other solutions such as ShareDB, CKEditor, and ProseMirror Collab are well-supported and very capable solutions in their own right, these technologies didn’t satisfy the specific requirements of our client’s project. For instance, ShareDB relies on the same approach as Google Docs, operational transformation (OT), rather than relying on the comparatively more robust CRDT (at least for our requirements). CKEditor, one of the most capable solutions available today, relies on closed-source and proprietary dependencies. Leveraging an open-source solution was strongly preferred by our client for many reasons, foremost among them to meet any potential need by enhancing the software themselves, and they didn’t want to be tied to a single vendor for what they saw as a core technology to their application. Finally, ProseMirror’s Collab module does not guarantee conflict resolution, which can lead to merge conflicts in documents.

Ultimately, the Tag1 team opted to select Yjs, an implementation of commutative replicated data types (CRDT), due to its network agnosticism and conflict resolution guarantees. Not only can Yjs support offline and low-connectivity editing, it can also store documents in local databases on user devices (such as through IndexedDB) to ensure full availability without a stable internet connection. Because Yjs facilitates concurrent editing on tree structures, not just text, it integrates well with view libraries such as React. Also compelling is its support for use cases beyond simple text editing, including collaborative drawing and state-sharing for 3D models. Going beyond text editing to implement other collaborative features is an important future goal for the project.

Furthermore, because Yjs performs transactions on objects across a distributed system rather than on a centralized server, the problem of a single point of failure is avoided, and it's extremely scalable with no limitation on the number of concurrent collaborators. Moreover, Yjs is one of the only stable and fully tested implementations of CRDT available, while many of its counterparts leverage OT instead.

Finally, because Yjs focuses on providing decentralized servers and connector technology rather than prescribing the front-end editor, there is no dependency on a particular rich text editor, and organizations can opt to swap out the editor in the future with minimal impact on other components in the architecture. It also makes it easy to use multiple editors. For instance, our project uses ProseMirror for collaborative rich text editing and CodeMirror for collaborative Markdown editing (and other text formats can be added easily).

Real-time collaborative editing surfaces unique difficulties for any organization seeking to implement content workflows at a large scale. Over the course of the past decade, many new solutions have emerged to challenge the prevailing approaches dependent on operational transformation. Today, for instance, offline editing and effective conflict resolution on slow connections are of paramount importance to content editors and stakeholders alike. These key requirements have led to an embrace of decentralized, peer-to-peer approaches to collaborative editing rather than a failure-prone central server.

Tag1 undertook a wide-ranging evaluation of available solutions for collaborative editing, including Yjs, ProseMirror’s Collab module, ShareDB, and CKEditor. In the end, Yjs emerged as the winner due to its implementation of CRDT, as well as its scalability and emphasis on network agnosticism and conflict resolution, both areas where the other solutions sometimes fell short. While any robust evaluation of these solutions takes ample time, it’s our hope at Tag1 that our own assessment guides your own thinking as you delve into real-time collaborative editing for your own organization.

Special thanks to Fabian Franz, Kevin Jahns, Michael Meyers, and Jeffrey Gilbert for their feedback during the writing process.

Nov 11 2019
Nov 11

I've had various deep discussions with contributed module maintainers recently about their process to update code to Drupal 9 and one point struck me. We are so attached to "Make it ready for Drupal 9" that a key point of the message may be lost. Check out this section of the State of Drupal keynote from DrupalCon Amsterdam 2019 where Dries Buytaert showcases Johanna's relatively simple site that she prepares for the Drupal 9 upgrade entirely in Drupal 8. Notice that she does all the steps in Drupal 8 other than the final Drupal 9 upgrade itself:

This is the base principle of the process towards Drupal 9: making your Drupal 8 site better and more prepared, so the move to Drupal 9 itself at the end is a relatively small step, and you get a better Drupal 8 site in the meantime. You are not jumping over the fence all at once, but in gradual steps. I thought a comparison with Drupal 6 to 7, 7 to 8 and 8 to 9 would help, since people may have assumptions or prior experiences with those, so it's worth looking at how our new process compares to the two previous transitions.

Update process with Drupal 6 to 7, 7 to 8 and 8 to 9, as explained in the text too.

Concentrating on the high level steps to do, this is how the three processes compare:

  1. From Drupal 6 to 7, there was a long list of changes, but the general API of Drupal stayed mostly the same. You would need to update some things in your custom code. Even though it may be a few things, your new code would only work with the new major core version. When you went to update from Drupal 6 to 7, you needed to keep your database in the background, use all the new Drupal 7 compatible code, and run update.php to carry over your database.
  2. From Drupal 7 to 8, the whole API paradigm of Drupal changed, so you needed a heavy rewrite of your custom code. There was absolutely no way custom Drupal 7 code would work with Drupal 8. The database also changed a lot, so instead of running an update.php script that would update your core and contributed database data, you would do a big database migration.
  3. The Drupal 8 to 9 process is entirely different again, but in a good way. First of all, we are back to running update.php on your codebase (no data migration!). So we went back to what we already knew worked well earlier. But the code updates also got quite different. This time, we are introducing deprecated APIs in minor Drupal 8 releases and adding the new APIs to use instead. So we are gradually making Drupal 9 available in Drupal 8 as much as possible within our own control.

The benefits are huge! This means that custom code and drupal.org projects can be gradually updated within Drupal 8 on the way to Drupal 9, step by step. Making improvements on the Drupal 8 site by updating contributed modules and adopting the new APIs in custom code, like Johanna did in Dries' demo, makes the Drupal 8 site itself better and closes the gap to Drupal 9 at the same time. All the new APIs get more use and thus get more battle-tested in the process, all before Drupal 9 is even released. So all that will be left for Drupal 9.0.0 itself is to remove the old APIs and update third-party dependencies.

Enough of the marketing, let's talk about the fine print: some contributed modules may not be able to stay Drupal 8 compatible and become Drupal 9 compatible at the same time.

The 12% size wrench in the system

As I posted in July, 76% of deprecated API use was possible to fix already at the time. I kind of brushed off the "Fixable later" category because it was not yet a pressing need then. But let's look at how we arrive at what can be fixed now vs. later.

In 2018 we extended security coverage of Drupal core minor versions to 12 months. That means that there is a 6 month overlap of when a new minor release comes out but we are still supporting the two versions prior. For example, when Drupal 8.8.0 comes out on December 4, 2019, we'll stop supporting Drupal 8.6.x and still keep supporting Drupal 8.7.x until June 3, 2020, the date of our next minor release.
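The coverage arithmetic in this paragraph can be expressed as a tiny sketch: support for a minor ends on the release day of the minor two versions later. The two dates come from the post; labelling the June 2020 minor as "8.9.0" and the helper function itself are my assumptions for illustration.

```python
from datetime import date

# Rule: each minor gets ~12 months of security coverage, so support for
# minor N ends the day minor N+2 is released (a 6-month overlap during
# which two minors are supported at once).
releases = {
    "8.8.0": date(2019, 12, 4),  # release date given in the post
    "8.9.0": date(2020, 6, 3),   # next minor's release date given in the post
}

def support_ends(minor, releases):
    """Return the date a minor branch loses coverage: the N+2 release day."""
    major, m, _ = minor.split(".")
    successor = f"{major}.{int(m) + 2}.0"
    return releases.get(successor)  # None if the ending release is unknown

print(support_ends("8.6.0", releases))  # -> 2019-12-04 (when 8.8.0 ships)
print(support_ends("8.7.0", releases))  # -> 2020-06-03 (when 8.9.0 ships)
```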

For custom code written on a site, as Drupal core is updated, the custom code can keep up to date with the latest and greatest APIs. However, for contributed modules, the choice is up to the module maintainer. There is no point in supporting core versions which are out of security coverage. But if maintainers keep updating to the latest core minor APIs, their contributed module's latest versions would not be compatible with sites running perfectly supported core versions. This becomes a problem especially when the contributed module needs to make a security release: it must retroactively create a new branch for an old core version, unless the maintainer wants to force Drupal core users off versions that are still under security coverage.

Deprecated API use breakdown on October 22, 2019 as explained in the text.

So this is where the "Fix now" vs. "Fixable later" differentiation comes in. Here is a more fine-grained breakdown of the up-to-date data from October 22. Still, 75% is fixable now. Breaking down the "Fixable later" category, another 6% becomes fixable once Drupal 8.6.x goes unsupported on December 4, 2019. A good 8% cannot be automatically categorized, unfortunately. That leaves 12% that are deprecations introduced in Drupal 8.8. This includes some that have replacements in prior Drupal versions, so the data could use some cleaning up. But nonetheless, there will be a measurable amount of deprecated APIs that one cannot fix right now while staying compatible with all of Drupal 8's supported branches and Drupal 9.0.0 on day one.

Some contributed project maintainers feel the campaign to update for Drupal 9 makes them look bad, because this appears to be a catch-22 situation that puts them in a difficult corner.

What can you do in this case?

Since you are serious about supporting Drupal 8's supported branches (huge thanks for that!), there are the following options:

  1. Add temporary workarounds to the deprecated APIs, such as conditional code to only use the deprecated APIs when they are still there. Mike Lutz and Sascha Grossenbacher (Berdir) worked a lot in this area and I believe they are both working on posts with tips. (Will link them in when available.)
  2. Create a new major branch of your project for Drupal 8.9.0/9.0.0 support specifically. Or if you are also affected by a dependency update, such as the Symfony 4.4 API changes, branch for Drupal 9.0.0+ support in particular and keep your existing branch for Drupal 8 / Symfony 3.4. One thing to note here is that Drupal.org does not allow 9.x releases anymore, so even after you create a 9.x branch, you will not be able to make releases from it. This is in preparation for the introduction of semantic versioning for contributed projects, which will do away with the 8.x/9.x prefixes entirely. The workaround in the meantime is to create a new 8.x branch and use the right core_version_requirement value.
  3. Assume the best-case scenario and drop support for Drupal 8.7 earlier. If you need to release a critical bug fix or a security release, you can still branch two new minor versions of your project: one from the prior minor version that was still 8.7 compatible and one from the new code that is not. Include the fixes in both. Add composer and info.yml dependency information to make it impossible to install the incompatible versions on 8.7.x. (Hat tip to Sascha Grossenbacher (Berdir) for this idea.)
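For the branching options above, the info.yml side of that dependency pinning might look like the following sketch (the module name is hypothetical, and the exact constraint depends on which core versions your branch supports):

```yaml
# mymodule.info.yml (hypothetical module)
name: My Module
type: module
description: 'Example of restricting which core versions this release installs on.'
# Only installable on Drupal 8.8+ and 9.x, so this release
# cannot land on a site still running 8.7.x.
core_version_requirement: ^8.8 || ^9
```

The matching composer.json constraint would express the same range, so both Composer-based and tarball-based installs refuse the incompatible combination.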

It may not be possible to attain 100% Drupal 9 compatibility in each contributed module on day one of the Drupal 9.0.0 release, and there will be some where the update is not that simple. I firmly believe our process is still way better than what we had before, and we are learning a lot from the mistakes we make. That said, I'll keep advocating to prepare now because 75% of the work can be done anytime. Assuming the percentages stay the same, this rises to 81% in early December (and may be even higher if some of the uncategorized items fall into this category). That is a lot of preparation that we can do already! If instead of asking whether something is entirely Drupal 9 compatible, we switch our minds to gradually improving within Drupal 8 and getting much closer to Drupal 9, then we can use our time quite flexibly and make existing Drupal 8 sites better. Only a relatively small gap remains at the end, which was the goal to begin with.

Further reading

  1. If you use Upgrade Status, it includes the same categorization and can be used to focus on what is actionable now.
  2. I also covered this topic in my open source State of Drupal 9 slides that you can present at your own meetup, company, conference, etc.
  3. The official Drupal 9 documentation also suggests keeping supported core branches in mind.
Nov 11 2019
Nov 11

    DrupalCon Amsterdam

Select members of the Drupal Association team have just returned from a wonderful DrupalCon Amsterdam. The revival of the European DrupalCon was a tremendous success, and we want to thank all of the volunteers and sponsors who made it possible. 

The content at the conference was great as well. This year the core initiative leads provided a comprehensive update, followed by the traditional #Driesnote the next day, which laid out the vision for Drupal 9's key initiatives moving forward. The DA also shared an update from the Drupal Association board, held a community town hall, and provided our bi-annual update from the engineering team. 

While DrupalCon was action-packed, we also moved forward a lot of other initiatives on the community's behalf in October. 

Project News

Reminder: Drupal 8.8.0 is coming soon!

Drupal 8.8.0-beta1 was released in early November, to be followed by the full release in early December. A variety of great features are landing in version 8.8.0, including improved Composer support in core, an updated Media Library, and updates to JSON:API. This is likely to be the last minor release before the simultaneous release of Drupal 8.9.0 and Drupal 9.0.0 next June.  

If you want to help ensure a smooth release, we invite you to join the Drupal Minor Release beta testing program.

Time to get ready for Drupal 9

The time is now to get ready for Drupal 9. At DrupalCon Amsterdam the core initiative team put out the call for the community at large to get ready. 

If you are a site owner, the best way you can get ready is to make sure you're up to date on the latest version of Drupal 8. From there, it'll be an easy upgrade to 9. 

If you maintain a community module, or your own custom modules, you may have some work to do. Many contributed or even custom modules only need a one-line change to be ready for Drupal 9. Check yours using the Upgrade Status module or the drupal-check command-line tool.
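As a quick sketch, a typical drupal-check run from a project root might look like this (the module path is an example, and flags vary between versions):

```shell
# Install the checker as a dev dependency of your project
composer require --dev mglaman/drupal-check

# Scan a custom module for deprecated API usage
vendor/bin/drupal-check web/modules/custom/mymodule
```

The tool reports each deprecated call with its file and line, which is usually enough to find the replacement API in the change records.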

Drupal.org Update

Automatic Updates Initiative needs your help!

For the last year we've been working together with the European Commission to bring automatic updates to Drupal. The first phase of this work covers updates for Drupal Core only, and only in non-Composer scenarios, but even so it should be able to protect many Drupal 8 and especially Drupal 7 site owners. 

The module is ready for testing now. Your feedback is welcome to help us make this first phase stable for production use by the community. 

You can also help by supporting Phase 2 of this initiative, which will include more advanced features like support for Composer-based sites, database updates, and a robust roll-back capability. We're looking for sponsors for this next round of work now. 

Forming a Contribution Recognition Committee

The Drupal Association's contribution credit system is an industry first in open source, and so we want to take great care at each step of this new journey. 

During the conference we also announced the formation of a Contribution Recognition Committee to govern the contribution credit algorithm which weights the order of the Drupal services marketplace on drupal.org. 

We are now seeking applications from community members who would like to sit on the committee. 

When will we enable GitLab merge requests?

When we migrated Drupal.org's git repositories to GitLab, it was the first step on the road to modernizing and improving the project's collaboration tools. The most significant step in that journey will be enabling merge requests, and we know it's a feature that the community has been waiting for. 

So what's the hold up? 

There are a few factors that have held us back from enabling the feature sooner. First, we were waiting for the GitLab team to add support for Git object de-duplication. Beta support for this feature was added in GitLab version 12.0, and then enabled by default beginning with the release of GitLab version 12.1.

While waiting for these features, the Drupal Association engineering team focused on other major commitments: moving forward the Automatic Updates initiative in partnership with the European Commission, co-leading the Composer Initiative to improve support in Drupal core, and preparing Drupal.org to support the release of Drupal 9. 

While these other initiatives have overtaken much of our internal capacity, we're hoping to get back to the merge request feature very soon, and we're just as excited to release the feature as you are to begin using it! 

———

As always, we’d like to say thanks to all the volunteers who work with us, and to the Drupal Association Supporters, who make it possible for us to work on these projects. In particular, we want to thank: 

  • Zyxware - Renewing Signature Supporting Partner
  • EPAM - Renewing Premium Supporting Partner
  • Datadog - Premium Technology Supporter
  • KWALL - Renewing Classic Supporting Partner
  • ANNAI - Renewing Classic Supporting Partner
  • SymSoft - Renewing Classic Supporting Partner
  • Forum One - Renewing Classic Supporting Partner
  • Catalyst IT - Renewing Classic Supporting Partner
  • Old Moon Digital - Renewing Classic Supporting Partner
  • Authorize.Net - *NEW* Classic Technology Supporter
  • SiteGround - Renewing Classic Hosting Supporter

If you would like to support our work as an individual or an organization, consider becoming a member of the Drupal Association.

Follow us on Twitter for regular updates: @drupal_org, @drupal_infra

Nov 11 2019
Nov 11

Table of Contents

What is a Rich Text Editor?
The Modern Rich Text Editor and Emerging Challenges
How we Evaluated Rich Text Editors
Why Tag1 Selected ProseMirror
Conclusion

Among all of the components commonly found in content management systems (CMSs) and typical editorial workflows, the rich text editor is perhaps the one that occupies the least amount of space but presents the most headaches due to its unique place in content architectures. From humble beginnings in discussion forums and the early days of the web and word processing, the rich text editor has since evolved into a diverse range of technologies that support a lengthening list of features and increasingly rich integrations.

Recently, Tag1 embarked on an exploration of rich text editors to evaluate solutions for a Fortune 50 company with demanding requirements. In this blog post, we’ll take a look at what impact the choice of a rich text editor can have down the line, some characteristics of the modern rich text editor, and Tag1’s own evaluation process. In the end, we discuss some of the rationales behind Tag1’s choice of ProseMirror as a rich text editor and some of the requirements leading to a decision that can serve as inspiration for any organization.

What is a Rich Text Editor?

At its core, a rich text editor enables content editors not only to insert and modify content but also to format text and insert assets that add to the content in question. They are the toolbars that line every body field in CMSs, allowing for a rich array of functionality also found in word processors and services like Google Docs. Most content editors are deeply familiar with basic formatting features like boldfacing, italicization, underlining, strikethrough, text color, font selection, and bulleted and numbered lists.

There are other features that are considered table-stakes for rich text editors, especially for large organizations with a high threshold for formatting needs. These can include indentation (and outdent availability), codeblocks with syntax highlighting (particularly for knowledge bases and documentation websites for developers), quotations, collapsible sections of text, embeddable images, and last but not least, tables.

While these features comprise the most visible upper layer of rich text editors, the underlying architecture and data handling can be some of the most challenging elements to implement. All rich text editors have varying degrees of customizability and extensibility, and all editors similarly have different demands and expectations when it comes to how they manage the underlying data that ultimately permits rich formatting. In the case of Tag1’s top Fortune 50 customer, for example, the ability to insert React-controlled views and embedded videos into content ended up becoming an essential requirement.

The Modern Rich Text Editor and Emerging Challenges

Whereas many of the rich text editors available in the late 1990s and early 2000s trafficked primarily in basic formatting, larger editorial organizations have much higher expectations for the modern rich text editor. For instance, while many rich text editors historically focused solely on manipulation of HTML, the majority of new rich text editors emerging today manipulate structured data in the form of JSON, presenting unique migration challenges for those still relying on older rich text editors.

Today, there are few to no robust rich text editors available that support swappable document formats between HTML, WYSIWYG, Markdown, and other common formats. Any conversion between HTML, WYSIWYG, and Markdown formats will result in some information loss due to differences in available formatting options. As an illustrative example, a WYSIWYG document can include formatting features that are unsupported in Markdown, such as additional style information or even visually encoded traits such as the width of a table column. While converting a document format to another preserves the majority of information, there will inevitably be data loss due to unsupported features.

Moreover, as rich text editors become commonplace and the expectations of content editors evolve, there is a growing desire for these technologies to be accessible for users of assistive technologies. This is especially true in large companies such as Tag1’s Fortune 50 client, which must provide for content editors living with disabilities. Rich text editors today frequently lack baseline accessibility features such as ARIA attributes for buttons in editorial interfaces, presenting formidable challenges for many users.

How we Evaluated Rich Text Editors

Tag1 evaluated a range of rich text editors, including ProseMirror, Draft.js, CKEditor 5, Quill, Slate, and TipTap. Our mission was to find a solution that would be not only robust for content editors accustomed to word processors and Google Docs but also customizable and robust in handling the underlying data. But there were other requirements as well that were particularly meaningful to the client for whom Tag1 performed this evaluation.

An important first requirement was the ability for the chosen rich text editor to integrate seamlessly with collaborative editing solutions like Yjs and Collab out of the box. Because of the wide-ranging use of open-source projects at the organization, a favorable license was also of great importance, allowing teams to leverage the project in various ways. Finally, other characteristics such as plugin availability, an active contributor community, and some support for accessibility were considered important during the evaluation.

As mentioned previously, other requirements were more unique to the customer in question, including native mobile app support, which would allow for mobile editing of rich text, a common feature otherwise found in many responsive-enabled CMSs; embedding of React view components, which would provide for small but rich dynamic components within the body of an item of content; and the ability to annotate content with comments and other notes of interest to content editors.

The table below displays the final results of the rich text editor evaluation and illustrates why Tag1 ultimately selected ProseMirror as their editor of choice for this project.

* Doesn’t support feature yet, but could be implemented (additional effort & cost)
** Comments are part of the document model (requirements dictate they not be)
*** Per CKEditor documentation -- needs to be verified (see review process below)
⑅ In-depth accessibility reviews must be completed before we can grade

Why Tag1 Selected ProseMirror

Ultimately, Tag1 opted to choose ProseMirror as the rich text editor of choice for their upcoming work with a top Fortune 50 company. Developed by Marijn Haverbeke, the author of CodeMirror, one of the most popular code editors for the web, ProseMirror is a richly customizable editor that also boasts an exhaustive and well-documented API. In addition, Haverbeke is known for his commitment to his open-source projects and his responsiveness in the active and growing ProseMirror community. As those experienced in open-source projects know well, a robust and passionate contributor community does wonders to lower implementation and support costs.

Out of the box, ProseMirror as a tool is not particularly opinionated about the aesthetics of its editor or especially feature-rich. But this is in fact a boon for extensibility, as each additive feature of ProseMirror is provided by a distinct module encapsulating that functionality. For instance, while core features that are considered table-stakes among rich text editors today such as basic support for tables and lists are part of the core ProseMirror project, others, like those that provide improved table support and codeblock formatting, are only available through community-contributed ProseMirror modules.

ProseMirror also counts among its many advocates large organizations and publishers that demand considerable heft from their rich text editing solutions. Newspapers like The New York Times and The Guardian, as well as well-known companies like Atlassian and Evernote, are leveraging ProseMirror and its modules. In fact, Atlassian published the entirety of their ProseMirror modules under the highly permissive Apache 2.0 license.

Beyond the fact that many open-source editors, such as Tiptap, the Fiduswriter editor, and CZI-ProseMirror, are themselves built on ProseMirror as a foundation, it was a logical choice for Tag1 and part of Tag1’s commitment to enabling innovation in editorial workflows with a strong and stable foundation at their center. Through an integration between ProseMirror and Yjs, the subject of a previous Tag1 blog post on collaborative editing, all requirements requested by the top Fortune 50 company will be satisfied.

Conclusion

Choosing a rich text editor for your editorial workflows is a decision fraught with small differences with large implications. While basic features such as simple list and table formatting are now ubiquitous across rich text editors like ProseMirror, Draft.js, CKEditor, Quill, and Slate, the growing demands of our customers obligate us to consider ever more difficult requirements than before. At the request of a top Fortune 50 company, Tag1 embarked on a robust evaluation of rich text editors that satisfied some of their unique requirements such as React component embeds, accessibility, and codeblocks with syntax highlighting.

In the end, the team opted to leverage ProseMirror due to its highly active and passionate community and the availability of unique features such as content annotations, native mobile support, and accessibility support. Thanks to its large community and extensive plugin ecosystem, Tag1 and its client can work with a variety of available tools to craft a truly futuristic rich text editing experience for their content editors. As this evaluation indicates, it is always of utmost importance to focus not only on the use cases for required features, but also on the users themselves who will use the product—the content editors and engineers who need to write, format, and manipulate rich text, day in and day out.

Special thanks to Fabian Franz, Kevin Jahns, Jeffrey Gilbert, and Michael Meyers for their feedback during the writing process.

Nov 11 2019
Nov 11

There’s never been a better time to be a Drupal user in Canada.

Since I started using Drupal in 2008, I’ve seen anecdotal evidence that Drupal is popular in Canada. Since 2006, and probably earlier, there have been active user groups in Toronto, Vancouver, and Montreal that organize regular conferences and meetups. These local groups are key to the Drupal community’s culture of open source and knowledge sharing. So I wasn’t surprised to learn that Canada is fourth in the world in terms of active users on Drupal.org (behind the US, Great Britain, and India).

Now, with the trend towards using Drupal for enterprise-level projects, the Canadian federal government, which has been using Drupal for years across a myriad of departments, is moving towards Drupal as the go-to platform for building digital experiences. It’s already being used by almost every government department you can think of in some capacity, but with the government's new Open First philosophy, Drupal is poised to become the default choice.

On November 21, Dries Buytaert (the founder of Drupal) will be in Ottawa talking about digital transformation for government. And at DrupalCamp Ottawa this year (a sold-out event), I encountered a wave of government agencies eager to adopt Drupal for building the next wave of digital services and information portals for citizens and public servants. The ability to standardize on a platform that is secure, multilingual, and accessible, and also flexible enough to meet the needs of a whole range of applications and audiences, is what makes Drupal so appealing. 

Recently, I’ve also seen growing momentum behind Drupal in Quebec. Sites for Tourism Quebec, Viarail, and the Aéroports de Montréal are built with Drupal. And beyond public organizations, iconic Quebec brands like Agropur and Videotron are using Drupal for their large-scale web projects.

Suzanne on the Viarail platform in Toronto

At Drupal North earlier this year (the largest Drupal event in Canada), I co-hosted a higher education summit that was attended by web managers and developers from some of the largest educational institutions in our country: the University of Toronto, McGill, Université Laval, and the University of Waterloo to name but a few. Many of these organizations also contribute to the Drupal project, sharing code, best practices, and techniques for scaling Drupal.

Even at non-Drupal web conferences, Drupal is a common topic of conversation. When I presented at this year's #PSEWEB in Saskatoon (the Canadian version of the HighEdWeb Conference), I heard that the University of Calgary is continuing to re-platform their sites onto Drupal 8. 

Pantheon sponsored Drupal North this year, announcing that they would be expanding their hosting services to Canada, joining Acquia in catering to organizations that are mandated to host on Canadian soil or who want hosting close to home.

I'm looking forward to seeing the market for Drupal expand further in Canada. I'd love to hear your thoughts on how we can grow the community!

One thing we do at Evolving Web to keep Drupal growing is provide Drupal training, including free community trainings online and at Drupal Camps, as well as more in-depth trainings. We have trainings coming up in Ottawa and Vancouver, and training on demand in Toronto, Montreal, Quebec City, Calgary, Winnipeg, Halifax, Victoria, and your corner of Canada. Just give us a shout if we should do a training in your town.

Nov 08 2019
Nov 08

The program committee talks about how you can share that unique voice and perspective of yours that we want to hear!

Nov 08 2019
Nov 08

From gated content to event enrollments, social logins and easy registration are the go-to for many sites now. The website captures data and permissions from your social accounts and gives you access to the content. There are modules primarily dedicated to integrating social accounts and running the process. 

Drupal offers many modules for seamless integration of social logins for your website. 

Let’s skim through the list and their features! 


Check out this list of modules for social login in Drupal 8 and choose the one that suits you best:

OneAll Social Login

Offering 35+ social networks to log in from, OneAll Social Login is a significant module that can benefit you. It is fully compliant with European (GDPR) and U.S. data protection laws. The module constantly monitors changes and automatically updates APIs to keep logins running smoothly. The simplified process helps increase the registration rate, too.

Social Auth

A commonly used module, Social Auth is part of the Social API. It possesses its own content entity and stores all data in the database. With this Drupal 8 module, visitors can register on the website via 30 social networks including Slack, Reddit, Uber, and more.   

Auth0 Single Sign On

It provides a login form powered by Auth0. Implementing authentication for platforms like GitHub and Twitter, Auth0 Single Sign On is another top module on the list. 

Social Auth Google

Primarily for Google accounts, the Social Auth Google module lets users register and log in to your Drupal site with their Google account. It performs the authentication on your behalf and gives seamless access.  

Hybrid Auth Social Login

With the aid of the HybridAuth library, this module integrates with Drupal and allows visitors to log in and register on the site via services like Facebook, Twitter, Windows Live, Instagram, AOL, and many more. HybridAuth Social Login doesn’t depend on external services or load any external CSS or JS files for the authentication process. 

However, it relies on a third-party open-source PHP library, HybridAuth, developed and supported on GitHub. 

uLogin (advanced version)

A selection of forms is available in this module for integrating social logins with your Drupal site. Written in compliance with Drupal coding standards, uLogin has several widgets with default settings that are configurable through the UI. The module also gives you the option to remove the username and password fields from the user profile edit form.

Varbase Social Single Sign-On

The Varbase module is built using the Social API and adds single sign-on via your social networking services. It supports Drupal 8 and works even when installed with the Minimal or Standard profile.

Social Auth Facebook

This is a dedicated module for Facebook that allows users to register and log in to the Drupal site via a Facebook account. The website can request a wider scope of permissions during authentication with Facebook services. The module also compares the user id or email address provided by Facebook and automatically accepts the login for a returning user.

Social Auth Twitter 

Similar to the above two, this is also exclusive to Twitter. It allows the user to register and login to your Drupal site and is based on Social Auth and Social API. 

The Social Auth Twitter module lets you compare the user id or email address once Twitter brings you back to the site.

Social Auth LinkedIn 

This module allows websites to request any scopes and lets users register and log in to your Drupal site with their LinkedIn account. The Social Auth LinkedIn module compares the user id or email address once LinkedIn brings you back to the site. The ‘LinkedIn’ button in the Social Auth block initiates the login.

Conclusion

These modules will support your website’s login process and keep the integration running smoothly. Each has its own strengths, so only you can decide which one suits your site best.

We at OpenSense Labs have pioneering experience in site building with Drupal 8 and its modules. For the best of services, contact us at [email protected]

Also, connect with us on our social media channels: Facebook, LinkedIn, and Twitter for more such insights. 

Nov 08 2019
Nov 08

The Drupal 7 maintainers have published a new D7 roadmap and release schedule:

https://www.drupal.org/core/drupal7-roadmap-release-schedule

There is also a new proposed D7 contributor / maintainer workflow issue:

https://www.drupal.org/project/drupal/issues/3093258

Finally, we're pleased to announce that the 7.x branch's core tests now pass in PHP 7.3!

Feedback is welcome, and thank you very much to all of the many D7 contributors!

Nov 08 2019
Nov 08

If you are a frontend web developer you have probably heard of GraphQL or maybe you have even used it, but what is it? GraphQL is a query language for APIs that allows you to write queries that define the data you receive. No more emailing the backend team to update an endpoint for your application. The client developer defines the data returned in the request.

What is a GraphQL Server/API?

A GraphQL server is a server-side implementation of the GraphQL spec. In other words, a GraphQL server exposes your data as a GraphQL API that your client applications can query for data. These clients could be a single-page application, a CMS like Drupal, a mobile app, or almost anything. For example, say you have a MySQL database and you want to expose its content to a React application. You could create a GraphQL server that allows your React application to query the database indirectly through the GraphQL server.
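As an illustrative sketch, a client query might look like this (the user field and its subfields are hypothetical, defined by whatever schema the server exposes):

```graphql
# Ask for exactly the fields the component needs
query {
  user(id: "1") {
    name
    email
  }
}
```

The server answers with JSON that mirrors the query, e.g. `{ "data": { "user": { "name": "...", "email": "..." } } }`, and nothing more.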

Why would you consider a GraphQL API?

1. You need an API for your applications

At Mediacurrent, we have been building decoupled static web sites using GatsbyJS. But sometimes we also need components with dynamic data. You need an API for that, and our front-end team was already using GraphQL in Gatsby. Or maybe you are developing a mobile app and need a reliable and fast way to get data from your legacy database. You can use GraphQL to expose only the data you need for your new app, while giving your client app developers the ability to control what data they get back from the API.

2. You need data normalization

For our purposes as developers, data normalization is simply the process of structuring our data to remove redundancy and create relationships between our data. Data normalization is something database designers think about, but developers and software architects should consider it as well.

One of the biggest mistakes I’ve seen in my years building web applications is the pattern of including too much business logic in application components. These days, it is not unusual to require data from multiple applications, public REST APIs as well as databases. There is often duplication across these systems and the relationships are rarely clear to the development team. Creating components that require data from multiple systems can be a challenging task. Not only do you have to make multiple queries to retrieve the data, but you also need to combine it in your component. It’s a good pattern to normalize the data outside of your components so that your client application’s components can be as simple and easy to maintain as possible. This is an area where GraphQL shines. You define your data’s types and the relationships between your data in a schema. This is what allows your client applications to query data from multiple data sources in a single request.
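A minimal sketch of such a schema, using hypothetical User and Project types, might look like:

```graphql
type User {
  id: ID!
  name: String!
  # Relationship: the projects this user is active on; the resolver
  # behind this field could fetch from a different system than the user data
  projects(last: Int): [Project!]!
}

type Project {
  id: ID!
  title: String!
}

type Query {
  user(id: ID!): User
}
```

Each field is backed by a resolver on the server, which is where the normalization logic lives instead of in your client components.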

3. You love your client application developers

A well-built GraphQL server will avoid these issues that are common with REST APIs.

  • Over-fetching - receiving more data than you need.
  • Under-fetching - not receiving all the data you need.
  • Dependent requests - requiring a series of requests to get the data you need.
  • Multiple round trips - needing to wait for multiple requests to resolve before you can continue.

Over-fetching

In a perfect world we would only fetch the data we need. If you have ever worked with a REST API, you will likely know what I mean here. Your client application developers may only need a user’s name, but when they request it from the REST endpoint they will likely get much more data back from the API. GraphQL allows the client to specify the data returned in the request. This means a smaller payload delivered over the web, which will only help your app be more performant.

Under-fetching, dependent requests, and multiple round trips

Multiple requests with GraphQL

Another scenario is under-fetching. If you need a user’s name and the last three projects they were active on, you probably need to make two HTTP requests to your REST API. With GraphQL relationships, you can get this data back in a single request. No more callbacks and waiting on multiple endpoints to resolve. Get all your data in one request. Now you are avoiding multiple requests, dependent requests, and multiple round trips to get the data for your app’s components.
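As a hypothetical example, a single nested query can replace those two REST round trips (the user and projects fields are assumed to be defined in the server's schema):

```graphql
query {
  user(id: "1") {
    name
    # The relationship defined in the schema lets the server
    # resolve the related projects within the same request
    projects(last: 3) {
      title
    }
  }
}
```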

GraphQL single request to multiple data sources.

Self-documenting

The type-based schema that GraphQL provides creates the structure to build powerful tools like GraphiQL, an in-browser IDE for exploring GraphQL.

Graphiql example

This schema also allows for what I would call a self-documenting API. GraphQL Playground is an example of the power of the GraphQL schema. It takes the schema and generates documentation as well as an IDE like GraphiQL. When you update your schema, your documentation is automatically updated along with the IDE. That’s a huge win!

GraphQL Playground example