Feb 15 2019

The recent post on Dries’ blog about REST, JSON:API and GraphQL caused a bigger shockwave in the community than we anticipated. A lot of community members asked for our opinion, so we decided to join the conversation.

Apples and Oranges

Comparing GraphQL and JSON:API is very similar to the never-ending stream of blog posts that compare Drupal and WordPress. They simply don’t aim to do the same thing.

While REST and JSON:API are built around the HTTP architecture, GraphQL is not concerned with the transport layer at all. Sending a GraphQL query over HTTP is one way to use it - unfortunately, the one that got stuck in everybody’s minds - but far from the only one. This is what we are trying to prove with the GraphQL Twig module. It allows you to separate your Twig templates from Drupal’s internal structures and therefore makes them easier to maintain and reuse. No HTTP requests involved. If this sparks your interest, watch our two webinars and the Drupal Europe talk on that topic.

So GraphQL is a way to provide typed, implementation-agnostic contracts between systems, and thereby achieve decoupling. But REST and JSON:API are about decoupling too, are they not?

What does “decoupling” mean?

The term “decoupling” has been re-purposed for content management systems that don’t necessarily generate the user-facing output themselves (in a “coupled” way) but instead expose the stored information through an API over HTTP.

So when you build a website using Drupal with its REST, JSON:API or GraphQL 3.x extension and put a React frontend on top, you achieve decoupling in terms of technologies: you swap Drupal’s rendering layer for React. This might bring performance improvements - our friends at Lullabot showed that decoupling is not the only way to achieve that - and allows you to implement more interactive and engaging user interfaces. But it also comes at a cost.

What you don’t achieve is decoupling, or loose coupling, in the sense of software architecture. Information in Drupal might be accessible to arbitrary clients, but those clients still have to maintain deep knowledge of Drupal data structures and conventions (entities, bundles, fields, relations…). You might be able to attach multiple frontends, but you will never be able to replace the Drupal backend. So you have reached the same state of coupling that Drupal has offered for years by being able to run different themes at the same time.

The real purpose of GraphQL

Back when we finished the automatically generated GraphQL schema for Drupal and this huge relation graph would just pop up after you installed the module, we were very proud of ourselves. After all, anybody was able to query for any kind of entity, field, block, menu item or relation between them, and all that with autocompletion!
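For reference, a query against that auto-generated 3.x schema looked roughly like this (field names follow the 3.x naming conventions from memory; fieldImage is a hypothetical example field):

```graphql
{
  nodeQuery(filter: {conditions: [{field: "type", value: "article"}]}) {
    entities {
      ... on NodeArticle {
        entityLabel
        fieldImage { url }
      }
    }
  }
}
```

Every entity type, bundle and field got its own type or field in the schema - which is exactly the problem the next paragraph describes.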

The harsh reality is that 99.5% of the world doesn’t care what entities, fields or blocks are. Or even worse, they have a completely different understanding of those terms. A content management system is just one puzzle piece in our clients’ business cases - technology should not be the focus, it’s just there to help achieve the goal.

The real strength of GraphQL is that it allows us to adapt Drupal to the world around it, instead of having to teach everybody how Drupal thinks.

Some of you have already noticed that there is a 4.x branch of the GraphQL module lingering, and there have been a lot of questions about what it is. This new version has been developed in parallel over the last year (mainly sponsored by our friendly neighbourhood car manufacturer Daimler) with an emphasis on GraphQL schema definitions.

Instead of just exposing everything Drupal has to offer, it allows us to craft a tailored schema that becomes the single source of truth for all information, operations, and interactions that happen within the system. This contract is not imposed by Drupal, but by the business needs that have to be met.
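As a sketch of what such a tailored contract could look like (a hypothetical schema written by hand, not something the module generates):

```graphql
# The schema speaks the language of the business domain,
# not of entities, bundles and fields.
type Article {
  title: String!
  teaser: String
  relatedArticles: [Article!]
}

type Query {
  article(id: ID!): Article
  frontPage: [Article!]
}
```

How each field maps to Drupal internals stays an implementation detail behind the contract.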

A bright future

So, GraphQL is not a recommendation for Drupal Core. What does that mean? Not a lot, since there is not even an issue to pursue that. GraphQL is an advanced tool that requires a certain amount of professionalism (and budget) to reap its benefits. Drupal aims to be usable by everyone, and Drupal Core should not burden itself with complexity that is not to everyone’s benefit. That’s what the contrib space is there for.

The GraphQL module is not going anywhere. Usage statistics are still climbing, and the 3.x branch will remain maintained until we can provide the same out-of-the-box experience and an upgrade path for version 4. If you have questions or opinions you would like to share, please reach out in the #graphql channel or contact us on Twitter.

Jan 23 2019

What is “Open Source”? Is it really free?

Publishing software under an open source license means that you grant people the right to use, study, modify and distribute it freely. It does not imply that this process is free of charge. The legal framework just ensures that the operator - at least in theory - has full control over what the software is doing.

That being said, charging for open source isn’t common. The simple reason is that it's hard or in some cases even impossible to track where the software is used. Even if the maintainer added some kind of license check, the open source license grants the user the right to remove it, so any effort in that direction is futile.

Most open source developers generate revenue either by relying on donations or charging for support and maintenance. Since they don’t have to provide a warranty for each installation of their code, these strategies can often at least cover their expenses. In some cases, it’s even enough to make a living.

Should my code be open source?

Writing a piece of code that does something useful can lead down three different paths. These three options could be called lazy, crazy and safe. And that makes the decision a lot easier.

1. Lazy: Just keep that piece of code within the project

In the best-case scenario, you will remember it when you stumble upon a similar problem four months down the road and copy it over to project B. You will probably find some bugs and make some improvements, but chances are slim that they will make it back to project A.

In the worst case, the lines of code are just left and forgotten, and the problem is solved all over again at the cost of the next project B, while project A keeps the full maintenance costs.

2. Crazy: The solution is super-useful and so fleshed out that you decide to sell it under a proprietary license model

Going down this road means serious marketing to achieve a critical mass, providing guarantees and warranty to customers, and paying a host of lawyers to make sure nobody steals the intellectual property or uses it in unintended ways.

This all boils down to starting a *high-risk business endeavour*, and in most cases, it doesn’t make sense.

3. Safe: The solution is moved into a designated package

In the worst case, the code just stays in this package and is never reused. More commonly, it can be picked up for project B, and all improvements are immediately available to project A. The maintenance costs for this part are shared from now on.

And in the best case, this package is made publicly available and somebody else picks it up and improves it in some way that directly feeds back into project A and B.

Advantages of Open Source in an agency

Client Value 

From our perspective as an agency, there is hardly ever a case where open source is not the best option. Our business model is to get the best possible value out of our clients' investment. We achieve that by contributing as much as we can since every line of code gets cheaper if it can be reused somewhere else. Some clients actively encourage us to share projects even in their name and some don’t care as long as we get the job done.

External Collaboration

Our core business value is our knowledge and experience in providing software-based solutions, not the software itself. And as long as our clients agree, we use our position to spark collaboration where it wouldn’t happen without us. If we see requirements that pop up across different projects, we can align them and share the effort, which ultimately helps our customers save money.

Internal Collaboration

Another reason for us to invest in open source is our own setup. As a heavily distributed team, information flow and structure are even more important for us than for co-located companies.
I often see code not being published openly due to tightly coupled design, missing tests, or insufficient documentation.

The investment needed to increase quality is often billed against “contribution costs” and is therefore the first thing to fall off the edge. But it actually is part of “doing your job properly”, since software should also work reliably and stay maintainable if it’s only used once.

Since proper architecture and documentation become vital as soon as different timezones need to cooperate on a single codebase, contributing has to become the standard process instead of the exception.

Apart from that, threatening developers with publishing their creations has proven to be a terrific instrument for improving code quality.

Open source products

If the produced software - or, more generally, the produced knowledge - is itself the product, or would expose business-critical information, then it might not make sense to go open source. But even in such cases, interesting exceptions have happened.

Tesla’s heavily discussed move to release all its patents for electric cars to the public back in 2014 is not exactly the latest news. Some praised Elon Musk’s goodwill, while others called it a marketing stunt. The fact is, Toyota cancelled its partnership with Tesla around the same time and released its first hydrogen fuel cell car. A behemoth like Toyota focusing on hydrogen cells could have become a serious threat to the electric car industry as a whole. Releasing the patents was a way to strengthen the technology enough to overcome this obstacle. I wouldn’t dare to judge whether the undertaking was successful, or whether we would be better off with hydrogen cell cars. But this case illustrates how sharing knowledge can be as powerful as keeping it to oneself.

Another example is our sister company, which decided to open source its hosting platform “Lagoon” some time ago. Full transparency into how applications are hosted is a huge deal for technical decision makers, and it becomes a lot easier to gain their trust if they can see what’s going on. Sure, you *could* just grab the code, try to get your hands on some engineers, and strap them in front of their computers 24/7 to get the same level of reliability and support. But I doubt there is a legal way to do this with less money than just hiring the creators themselves.

Should everything be open source?

This might ignite some discussions, but I don’t think so. The open source community has suffered a lot from being associated with pure goodwill and altruism. And this has led to serious problems like developer burnout and subsequent oversights that shook platforms as a whole.

The “no license fee” bait did a lot more damage than good. There might be no fee, but that doesn’t mean the work is free. Compensation just works through other channels. And if that is not possible, it’s sometimes better to pay for a license than to rely on an unsustainable open source solution.

I personally see open source as a business model that embraces the fact that distributing information is free. Instead of wasting resources on artificially locking down intellectual property, it focuses on creating actual value. And since I’m making a living off creating this value, I consider this a good thing.

Open Source as a model is one tool that gives us the ability to create innovative and ambitious projects for our clients. Get in touch with us today!

Apr 25 2018

In a previous blog post and Sasa’s talk, we already introduced the idea of using GraphQL with Twig to facilitate a pull based data flow. This sparked a lot of interest, questions and some doubts. In this post we will highlight and answer the most prominent ones.

What’s wrong with Drupal’s approach?

Drupal, as most content management systems, follows the traditional layered approach. The incoming request is mapped to a controller which prepares the information to display and pushes it through a display layer. The controller may be extended by modifying the execution result (process, preprocess) and the display can be altered by template overrides.

There’s nothing wrong with that, since this pattern serves a very specific workflow:

  1. Install a module
  2. Features appear immediately
  3. Tweak it until everybody is happy

This works great until a product development team is involved that is not willing to bend to what’s available out of the box. If the requirements and expectations are very specific, developers have to work against the process the framework dictates. Lots of preprocessing, altering and template overriding scatter responsibilities across multiple files, modules and themes. Just consider how many templates are in play when rendering a single Views listing. Without constant compromising, this easily gets out of hand, and maintainability drops with every new feature.

Opinion alert: I believe that this is the unspoken reason for the whole fuss around “decoupling Drupal”.

React, Angular, Vue and friends are awesome, and JavaScript clients help to deliver great interactive experiences - but a lot (if not most) of Drupal-powered websites just deliver information. There is little interactivity, and a lot of projects don’t really benefit from the possibilities the JavaScript framework of choice brings. Nonetheless, teams decide to go down that road, for one simple reason:

Decoupling puts the frontend in charge.

REST, JSON:API and GraphQL act as interfaces that allow the display layer to evolve without being shackled to a backend implementation that has to follow step by step. But if Drupal becomes truly decoupled, we can achieve this goal with any display layer - including Twig running in the same PHP process, which, for a lot of smaller projects, is good enough.

How does it perform?

That’s the first question on every single occasion we talk about GraphQL in Twig. And to clear up the first basic misunderstanding: there are no HTTP requests involved. When used with Twig, only the GraphQL processor is invoked to resolve a query’s result.

Right now, this is not significantly slower than render arrays, since it basically does the same thing you would do in a preprocessing step to get the data you want. And it integrates natively with Drupal’s existing render caching mechanisms, so the approach doesn’t bring regressions on this front.

But GraphQL itself brings a lot more potential for performance improvements. Since the structure of the query is known in advance we are able to optimise the tree of requirements on a higher level.

For example, nested entity relations (e.g. a list of articles that also requires information about the authors and related posts) are resolved more efficiently with deferred resolvers that are able to aggregate each level into one fetch statement. The approach is similar to lazy builders in the current theme layer, but applied automatically.
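As a sketch of such a requirement tree (against a hypothetical schema), every nesting level can be aggregated into a single fetch:

```graphql
{
  articles {
    title
    author {        # authors for all 20 articles loaded in one
      name          # aggregated fetch, not one query per article
    }
    related {       # related posts: again one fetch for the whole
      title         # level, instead of the classic N+1 pattern
    }
  }
}
```

Because the whole tree is declared upfront, the resolvers can defer and batch each level before touching storage.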

Aren’t we back at putting SQL queries in templates?

At our DrupalCon Nashville talk, one participant stated that he “would like to try it out in a dark room, where nobody sees him…”, which perfectly sums up the collective gut feeling. It seems like we are breaking a law we’ve been fighting for over the last 10 years, but that’s not true.

We could answer this with another question:

Why is GraphQL allowed in React components, but not in Twig templates?

But that’s too easy, so we’ll dig a little deeper. The GraphQL fragments in template files are requirement definitions, but they don’t contain any rules on how to resolve them. The implementations of GraphQL fields in the background have to adhere to the interface definition, but they are fully replaceable.

The annotations are collected during the Twig compile step and assembled into one query per root template. At render time, this single query is executed and the result is passed down the template tree. This means the fragment in a node--teaser.html.twig is not executed 20 times in a listing of 20 articles, unlike the infamous SQL template hacks that brought us to Twig in the first place.
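Conceptually, the assembled query looks like this (field names assumed from the 3.x schema): the teaser template contributes only a fragment, which the root query references once:

```graphql
# Assembled once per root template: the NodeTeaser fragment collected
# from node--teaser.html.twig is referenced, not re-run per row.
query {
  articles: nodeQuery(limit: 20) {
    entities { ...NodeTeaser }
  }
}

fragment NodeTeaser on Node {
  entityLabel
  entityUrl { path }
}
```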

That being said, you can still do dumb stuff with too deeply nested queries. But thanks to query complexity analysis, it’s possible to catch bad ideas automatically and let the frontend developers work with a fixed performance budget. And since we have full control over the schema and are able to remove fields at will, we can keep a tight grip on what the frontend can and cannot do.

More questions

We hope this cleared up some questions, but most likely there are more. Luckily, we are going to host a webinar on this topic on May 11th 2018. You will get a walk through using GraphQL with Drupal and a chance to join the discussion. See you there!

Jan 22 2018

With the growing popularity of GraphQL, the obligatory host of more or less founded opinions - trying to tell you that it’s all just hype - is also on the rise throughout the internet.

Some of them have a point, some don’t, and you bet we have an opinion too.

The end of the year is now way past us and I crunched some numbers. The most frequent question I’ve been asked was: «Philipp, could you mute yourself? Your keyboard is very loud.» But that one wouldn't promise a good blog post. So, instead, I will write about the second most frequently asked question: «Why would you use GraphQL instead of REST?»

Honestly, because I wanted to avoid a discussion, which I knew would take too long, I often gave one of the following diplomatic answers: «They serve different use cases.», «It’s a matter of taste.», «GraphQL can’t do everything …»

So here’s my new year’s confession: I lied.

Common perception of GraphQL

When reading opinions about GraphQL, distinct patterns keep popping up. Let’s have a look at them.

GraphQL is there to reduce HTTP requests

When fetching complex, related data sets with REST, you need to issue multiple requests. GraphQL avoids that by specifying all information requirements upfront. This is true, but it’s just a small part of the picture. HTTP/2 would be a better option for merely reducing the overhead of multiple requests, without turning everything else upside down.
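To illustrate (hypothetical endpoints and field names): where a REST client needs one round trip per resource, a single GraphQL query declares the whole requirement tree at once:

```graphql
# Replaces three REST calls:
#   GET /articles/42
#   GET /users/7          (the article's author)
#   GET /articles/42/comments
{
  article(id: "42") {
    title
    author { name }
    comments { body }
  }
}
```

But as the paragraph above says, fewer round trips alone would not justify GraphQL - HTTP/2 multiplexing gets you that much cheaper.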

GraphQL is a supplement to React

That is a widespread misunderstanding, since GraphQL was born out of requirements that emerged with complex JavaScript clients, which in turn happen to be implemented with React quite often these days. But GraphQL doesn’t make any assumptions about the client technology it is consumed with. It doesn’t even assume it’s used over HTTP.

GraphQL is not cacheable

The argument goes: a GraphQL query may contain information from different entities and varying fields with arbitrary naming, therefore responses can’t be cached. In truth, responses can be cached - it’s just harder. Besides, it is part of the client’s responsibility to construct queries intelligently so they can be cached, instead of blindly cramming everything into one request.

GraphQL is insecure

Or, in less drastic wording: GraphQL has a larger attack surface. Depending on your application, that’s true. Since one query can request a cascading amount of related entities, there’s a lot more potential for something to go south. This can be mitigated by designing the schema in a way that doesn’t allow funky constructs, or by using static query complexity analysis to reject queries that could get out of hand. But both approaches require experience and engineering. It’s definitely easier to safeguard a REST API.

GraphQL is a replacement for REST

That's the big misunderstanding. In my opinion, GraphQL shouldn’t be perceived as an alternative to REST, but as the layer underneath. Conceptually, a REST endpoint is nothing but a persisted GraphQL query.
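A sketch of that idea (all names illustrative): the query text lives on the server, and the “endpoint” shrinks to an identifier plus variables:

```graphql
# Persisted on the server under the id "articleById".
# A client request then carries only that id and the variables -
# conceptually the same contract as GET /articles/42.
query articleById($id: ID!) {
  article(id: $id) {
    title
    body
  }
}
```

The provider regains the small, predictable surface of REST while keeping GraphQL underneath.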

From a consumer’s perspective, GraphQL can do anything REST can. Period. There is no valid reason to choose REST over GraphQL.

From a provider’s perspective, the reduced set of actions and predictable responses of a REST API are a lot easier to manage.

GraphQL’s elevator pitch

This brings me to the 3rd most asked question of 2017: What’s GraphQL’s elevator pitch?

GraphQL shifts control from data storage and structures to client and product development.

This also answers the question of “when” to use GraphQL: Whenever you want your client to be more powerful. This might not be the case for a public HTTP API. But whenever you control the client, GraphQL is the better choice. And keep in mind that “client” doesn’t necessarily mean web browser, React frontend or smartphone application. GraphQL provides a structured way to describe information requirements that are not limited to HTTP.

It is for example possible to use GraphQL in combination with Twig to turn Drupal’s push-based rendering model upside down and give theme developers all the power they longed for. But this story has already been told.

Jan 15 2018

Using GraphQL in Drupal with Twig instead of React, Vue or next month’s JavaScript framework might sound like a crazy idea, but I assure you it’s worth thinking about.

Decoupling is all the rage. JavaScript frontends with entirely independent backends are state of the art for any self-respecting website right now. But sometimes it’s not worth the cost, and the project simply does not justify the full Drupal + React technology stack.

Besides the technological benefits of a JavaScript-based frontend, like load times and responsiveness, there’s another reason why this approach is so popular: it moves control to the frontend, concept and design unit, which matches project workflows a lot better.

Status quo

Traditionally, Drupal defines data structures that provide a “standard” rendered output, which can then be adapted by the so-called “theme developer” to meet the client’s requirements. Template overrides, preprocess functions, Display Suite, Panels, Layouts - there are myriad ways to do this, and twice as many opinions on the right one. When taking over a project, the first thing is to figure out how it was approached and where the rendered information actually comes from. Templates only have variables that are populated during processing or preprocessing and altered in modules or themes, which makes it very hard to reason about the data flow if you were not the person who conceived it in the first place.

There are ideas to improve the situation, but regarding the success of decoupling, perhaps it’s time to approach the problem from a different angle.

Push versus Pull

The current push model used by Drupal scatters responsibilities across modules, preprocess functions and templates. The controller calls the view builder to prepare a “renderable” that is altered 101 times and results in a set of variables that may or may not be required by the current theme’s template.

If we turned this around and let the template define its data requirements (as happens naturally in decoupled projects), we could achieve a much clearer data flow and increase readability and maintainability significantly.

And that’s what the GraphQL Twig module is supposed to do. It allows us to add GraphQL queries to any Twig template, which will be picked up during rendering and used to populate the template with data.

A simple example node.html.twig:
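The original snippet is not preserved here, so the following is a minimal sketch of what such a template could look like (the {#graphql ... #} comment syntax and the query’s field names are assumptions based on the 3.x schema):

```twig
{#graphql
query ($node: String!) {
  node: nodeById(id: $node) {
    entityLabel
    body: fieldBody { value }
  }
}
#}
<article>
  <h2>{{ graphql.node.entityLabel }}</h2>
  <div>{{ graphql.node.body.value }}</div>
</article>
```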

This is already enough to pull the information we need and render it. Let’s have a look at what this does:

The graphql comment on top will be picked up by the module. When the template is rendered, the module tries to match the query’s input arguments to the current template variables, runs the GraphQL query and passes the result as a new graphql variable to the template. Simple as that, no preprocessing required. And it works for every theme hook, be it just one complex node type, an exceptional block, or page.html.twig.

Imagine we use GraphQL Views to add a contextual GraphQL field similarArticles that uses SOLR to find similar articles for a given node. It could be used immediately like this:
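The example output did not survive here either; a sketch of how that could look inside node.html.twig (field and argument names assumed):

```twig
{#graphql
query ($node: String!) {
  node: nodeById(id: $node) {
    similarArticles {
      entityLabel
      entityUrl { path }
    }
  }
}
#}
<ul>
  {% for article in graphql.node.similarArticles %}
    <li><a href="{{ article.entityUrl.path }}">{{ article.entityLabel }}</a></li>
  {% endfor %}
</ul>
```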

The module even scans included templates for query fragments, so the rendering of the “similar article” teaser could be moved to a separate component:
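A sketch of how that separation could look (the fragment-scanning behaviour is described above; the concrete syntax is assumed): the parent template references a fragment that the included teaser template defines.

```twig
{# node.html.twig #}
{#graphql
query ($node: String!) {
  node: nodeById(id: $node) {
    similarArticles { ...SimilarArticleTeaser }
  }
}
#}
{% for article in graphql.node.similarArticles %}
  {% include 'similar-article-teaser.html.twig' with { article: article } %}
{% endfor %}

{# similar-article-teaser.html.twig #}
{#graphql
fragment SimilarArticleTeaser on Node {
  entityLabel
  entityUrl { path }
}
#}
<a href="{{ article.entityUrl.path }}">{{ article.entityLabel }}</a>
```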



No preprocessing, a clear path along which data flows, and true separation of concerns. The backend provides generic data sources (e.g. the similarArticles field) that the product development team can use at will. All without the cost of a fully decoupled setup. And the possibility to replace single theme hooks allows us to use existing Drupal rendering where it fits and switch to the pull model wherever we would otherwise need complex layout modules or preprocessing functions to meet the requirements of the project.

Future development

There are some ideas for additional features, like mutation-based forms and smarter scanning for query fragments, but first and foremost we would love to get feedback and opinions on this whole concept. So if you are interested, join us on Slack or GitHub and let us know!

Jan 09 2018

The graphql_core module bundled with the graphql module automatically exposes types and fields to traverse Drupal’s entity system. However, since beta 1 it no longer does the same for mutations. The fact that it is not possible to write or update data out of the box caused much confusion. I want to shed light on this topic and explain the way mutations are intended to work.

Before releasing the first beta version of the GraphQL module for Drupal, we removed a feature that automatically added input types and mutation fields for all content entities in the GraphQL schema. This may seem to be counter-intuitive, but there were ample reasons.

Why automatic mutations were removed

While GraphQL allows the client to freely shape query and response, mutations (create, update or delete operations) are atomic by design. A mutation is a root-level field that is supposed to accept all necessary information as arguments and return a queryable data object representing the state after the operation. Since GraphQL is strictly typed, this means there has to be one mutation field and one distinct input type for every entity bundle. And because Drupal entities tend to have a lot of fields and properties, this resulted in very intricate and hard-to-use mutations and an inflated schema, even though 90% of them were never used.

On top of that, some entity structures added further complexity: for example, just trying to create an article with a title and a body value while the comment module is enabled results in a constraint violation, as the comment field requires at least an empty list of comments.

These circumstances led to a technically correct solution that unfortunately burdened the client with too much knowledge about Drupal internals and was therefore not usable in practice. It became apparent that this would have to change in the future. Now that we have removed it, the rest of the GraphQL API can become stable. The code is still available on GitHub for reference and backwards compatibility, but there are no plans to maintain it further.

How to use mutations

So there is no way to use mutations out of the box. You have to write code to add a mutation, but that doesn’t mean it’s complicated. Let’s walk through a simple example. All code is available in the examples repository.

First, you have to add an input type that defines the shape of the data you want your entity mutation to accept:


<?php

namespace Drupal\graphql_examples\Plugin\GraphQL\InputTypes;

use Drupal\graphql\Plugin\GraphQL\InputTypes\InputTypePluginBase;

/**
 * The input type for article mutations.
 *
 * @GraphQLInputType(
 *   id = "article_input",
 *   name = "ArticleInput",
 *   fields = {
 *     "title" = "String",
 *     "body" = {
 *       "type" = "String",
 *       "nullable" = "TRUE"
 *     }
 *   }
 * )
 */
class ArticleInput extends InputTypePluginBase {

}


This plugin defines a new input type that consists of a “title” and a “body” field.

The first mutation plugin is the “create” operation. It extends the CreateEntityBase class and implements only one method, which will map the properties of our input type (above) to the target entity type and bundle, as defined in the annotation.


<?php

namespace Drupal\graphql_examples\Plugin\GraphQL\Mutations;

use Drupal\graphql\GraphQL\Type\InputObjectType;
use Drupal\graphql_core\Plugin\GraphQL\Mutations\Entity\CreateEntityBase;
use Youshido\GraphQL\Execution\ResolveInfo;

/**
 * Simple mutation for creating a new article node.
 *
 * @GraphQLMutation(
 *   id = "create_article",
 *   entity_type = "node",
 *   entity_bundle = "article",
 *   secure = true,
 *   name = "createArticle",
 *   type = "EntityCrudOutput",
 *   arguments = {
 *     "input" = "ArticleInput"
 *   }
 * )
 */
class CreateArticle extends CreateEntityBase {

  /**
   * {@inheritdoc}
   */
  protected function extractEntityInput(array $inputArgs, InputObjectType $inputType, ResolveInfo $info) {
    return [
      'title' => $inputArgs['title'],
      'body' => $inputArgs['body'],
    ];
  }

}


The base class handles the rest. Now you already can issue a mutation using the GraphQL Explorer:
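Such a mutation could look like this in the explorer (the exact field names on EntityCrudOutput are recalled from the 3.x module and may differ slightly):

```graphql
mutation {
  createArticle(input: {title: "Hello GraphQL", body: "A first article."}) {
    errors
    violations { message }
    entity {
      entityId
      entityLabel
    }
  }
}
```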

Creating an article node with GraphQL.

The mutation will return an object of type EntityCrudOutput that already contains any errors or constraint violations, as well as - in case the operation was successful - the newly created entity.

If you try to create an article with an empty title, typed data constraints will kick in, and the mutation will fail accordingly:

Typed data constraint violations in GraphQL.

The update mutation looks almost the same. It just requires an additional argument, id, containing the id of the entity to update, and it extends a different base class.


<?php

namespace Drupal\graphql_examples\Plugin\GraphQL\Mutations;

use Drupal\graphql\GraphQL\Type\InputObjectType;
use Drupal\graphql_core\Plugin\GraphQL\Mutations\Entity\UpdateEntityBase;
use Youshido\GraphQL\Execution\ResolveInfo;

/**
 * Simple mutation for updating an existing article node.
 *
 * @GraphQLMutation(
 *   id = "update_article",
 *   entity_type = "node",
 *   entity_bundle = "article",
 *   secure = true,
 *   name = "updateArticle",
 *   type = "EntityCrudOutput",
 *   arguments = {
 *     "id" = "String",
 *     "input" = "ArticleInput"
 *   }
 * )
 */
class UpdateArticle extends UpdateEntityBase {

  /**
   * {@inheritdoc}
   */
  protected function extractEntityInput(array $inputArgs, InputObjectType $inputType, ResolveInfo $info) {
    return array_filter([
      'title' => $inputArgs['title'],
      'body' => $inputArgs['body'],
    ]);
  }

}


Now you should be able to alter any articles:
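For example (again, the response fields shown here are assumptions and may vary):

```graphql
mutation {
  updateArticle(id: "1", input: {title: "An updated title"}) {
    errors
    entity { entityLabel }
  }
}
```

Note the array_filter in the plugin above: omitted input values are dropped, so only the fields you pass are updated.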

Updating an existing article with GraphQL.

And the delete mutation is even simpler.


<?php

namespace Drupal\graphql_examples\Plugin\GraphQL\Mutations;

use Drupal\graphql_core\Plugin\GraphQL\Mutations\Entity\DeleteEntityBase;

/**
 * Simple mutation for deleting an article node.
 *
 * @GraphQLMutation(
 *   id = "delete_article",
 *   entity_type = "node",
 *   entity_bundle = "article",
 *   secure = true,
 *   name = "deleteArticle",
 *   type = "EntityCrudOutput",
 *   arguments = {
 *     "id" = "String"
 *   }
 * )
 */
class DeleteArticle extends DeleteEntityBase {

}


This will delete the entity, but it’s still available in the response object, for rendering notifications or for subsequent queries.
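A deletion could then look like this (response fields assumed, as before):

```graphql
mutation {
  deleteArticle(id: "1") {
    errors
    entity { entityLabel }
  }
}
```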

Deleting an article using GraphQL mutations.

That’s a wrap. With some trivial code, we can implement a full CRUD interface for an entity type. If you need multiple entity types, you could use derivers and services to make it more DRY.


This way we can create entity mutations that precisely fit the needs of our current project. It requires a little boilerplate code and might not be the most convenient thing to do, but it’s not terrible and works for now.

That doesn't mean we are not planning to improve. Currently, the Rules module is our best hope for providing zero-code, site-building-driven mutations. The combination would be tremendously powerful.

If you want out-of-the-box mutations in GraphQL, go and help with #d8rules!

Aug 24 2017

The last blog post in this series culminated in the epic achievement of adding a "page title" field to every URL object in our schema. Now we can request the page title for every internal URL. But menus and link fields can also store external addresses.

Wouldn't it be cool if we could request their page titles in just the same way?

Overriding a field

Let's try it and ask questions later:

query {
  route(path: "") { pageTitle }
}

Unfortunately, this doesn't work out. The route field checks if the provided path is a Drupal route and if the user has access to it, and will return null if either of the two doesn't apply. So, the first thing we will do is extend the route field so it also can handle external URLs.

Note: At the time of writing there is a pending pull request that adds exactly this enhancement. If you are reading this in a couple of weeks from now (my now, not yours - unless you own a DeLorean), there's a chance that this already works for you. But since this is a nice example of overriding a field, we stick with it. If you don't just want to read but really play through this tutorial, make sure you work based on the 8.x-3.0-alpha3 version of the GraphQL module.

We create a new field called ExampleRoute in our graphql_example module. If you are not yet the proud owner of one, please refer to the last blog post. This new field simply extends the existing Route field and even copies its annotation.

With one difference: We add a new property called weight which we set to "1". It's quite simple. When the schema builder assembles all field plugins for a given type and stumbles upon two with the same name, the higher weight takes precedence. That's how we tell GraphQL to use our custom implementation of a field.

The resolveValues method checks if the path is an external URL. In that case it just constructs a Url object; otherwise it delegates to the parent implementation.
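A sketch of such an override could look like this. Note that the base-class namespace (Drupal\graphql_core\Plugin\GraphQL\Fields\Route) and the exact annotation keys are assumptions based on the alpha-era plugin API described here - verify them against your version of the module:

```php
namespace Drupal\graphql_example\Plugin\GraphQL\Fields;

use Drupal\Component\Utility\UrlHelper;
use Drupal\Core\Url;
use Drupal\graphql_core\Plugin\GraphQL\Fields\Route;
use Youshido\GraphQL\Execution\ResolveInfo;

/**
 * Route field override that also handles external URLs.
 *
 * @GraphQLField(
 *   id = "example_route",
 *   name = "route",
 *   type = "Url",
 *   weight = 1,
 *   arguments = {
 *     "path" = "String"
 *   }
 * )
 */
class ExampleRoute extends Route {

  /**
   * {@inheritdoc}
   */
  public function resolveValues($value, array $args, ResolveInfo $info) {
    if (UrlHelper::isExternal($args['path'])) {
      // External addresses bypass Drupal's routing entirely.
      yield Url::fromUri($args['path']);
    }
    else {
      // Internal paths are still handled by the original route field.
      yield from parent::resolveValues($value, $args, $info);
    }
  }

}
```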

The result is still not satisfying. The route field now returns an Url object, but our page title field can only retrieve internal page titles.

So let's modify the PageTitle field. First, we check if the current value is a routed URL. In this case, we still leave it to the title resolver. Otherwise, we fire up Drupal's http_client (aka Guzzle), fetch the content behind the address, load it into an XML document, search for the title element and yield its contents. I am aware that this is not the most performant solution, but I'm trying to keep these examples short and concise.
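Condensed into code, the modified resolver might look like the sketch below. This is a simplification of what the diff linked underneath does (error handling omitted, static \Drupal calls instead of injected services), so treat it as illustrative rather than authoritative:

```php
/**
 * {@inheritdoc}
 */
public function resolveValues($value, array $args, ResolveInfo $info) {
  if ($value instanceof Url && !$value->isRouted()) {
    // Fetch the external page and pull out the contents of its <title> tag.
    $body = (string) \Drupal::httpClient()->get($value->toString())->getBody();
    $document = new \DOMDocument();
    @$document->loadHTML($body);
    $titles = $document->getElementsByTagName('title');
    if ($titles->length > 0) {
      yield $titles->item(0)->textContent;
    }
  }
  else {
    // Routed URLs keep using the title resolver from the last post.
    yield from parent::resolveValues($value, $args, $info);
  }
}
```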

It worked. Our query for an external page title yields the correct result.

{
  "data": {
    "route": {
      "pageTitle": "Drupal - Open Source CMS |"
    }
  }
}

The result is correct, but it doesn't feel right. Internal and external URLs are fundamentally different. The page title might make sense on both, but the similarities end there. External URLs won't route to an entity or provide any other information specific to Drupal. These fields won't break and will just return NULL instead, but that doesn't seem very elegant.

Diff: Page title of external URLs

Interfaces and Types

We have already met the Url type, and we know that it connects a certain value with a list of fields that can be executed on it. A GraphQL interface is in some ways similar to interfaces in an object oriented language. It gives a group of types with shared fields a common name.
Right now we've got the Url type provided by the GraphQL module, representing internal URLs (not 100% true, but for the sake of simplicity we'll leave it at that). And we have our external URL, which is emitted by the same route field but behaves differently. So here is what we need to do:

  1. Create a GraphQL interface called GenericUrl
  2. Change the route field to return this interface instead.
  3. Attach our pageTitle field to this interface.
  4. Add an ExternalUrl GraphQL type that implements this interface.

Creating the interface

GraphQL interfaces live in their own plugin namespace Plugin\GraphQL\Interfaces where the schema builder will pick them up.

The plugin annotation for interfaces is quite simple. In most cases, it consists of the plugin id and a name to be used within the schema. The base class for interfaces contains an abstract method: resolveType. This method will receive a runtime value and has to select the appropriate GraphQL type for it. In our case, it checks if the URL is external or not and uses the schema manager service to return an instance of either Url or ExternalUrl.
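Sketched out, the interface plugin might look roughly like this. The base-class namespace and the schema-manager lookup are assumptions about the alpha-era API - check the interface plugins shipped with the module for the authoritative signatures:

```php
namespace Drupal\graphql_example\Plugin\GraphQL\Interfaces;

use Drupal\Core\Url;
use Drupal\graphql_core\GraphQL\InterfacePluginBase;

/**
 * Common interface for internal and external URLs.
 *
 * @GraphQLInterface(
 *   id = "generic_url",
 *   name = "GenericUrl"
 * )
 */
class GenericUrl extends InterfacePluginBase {

  /**
   * {@inheritdoc}
   */
  public function resolveType($object) {
    // External URLs resolve to our new ExternalUrl type; everything else
    // falls back to the Url type shipped with graphql_core. The exact
    // schema manager API is an assumption here.
    $name = ($object instanceof Url && !$object->isRouted()) ? 'ExternalUrl' : 'Url';
    return $this->schemaManager->findByName($name);
  }

}
```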

Using the interface

This won't have any effect as long as we don't use this interface type somewhere. So we change the pageTitle field to attach it to the GenericUrl instead of Url and adapt our override of the route field to return a GenericUrl.

Creating the new type

The new type we need is rather simple. It's an empty class, extending TypePluginBase. The most important part is the annotation that defines a list of interfaces. Just the GenericUrl interface in our case.
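The "GraphQL type source" link below has the real thing; as a sketch (base-class namespace assumed as above), it boils down to little more than an annotation:

```php
namespace Drupal\graphql_example\Plugin\GraphQL\Types;

use Drupal\graphql_core\GraphQL\TypePluginBase;

/**
 * GraphQL type for external URLs.
 *
 * @GraphQLType(
 *   id = "external_url",
 *   name = "ExternalUrl",
 *   interfaces = { "GenericUrl" }
 * )
 */
class ExternalUrl extends TypePluginBase {
}
```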

GraphQL type source

Diff: Generic Url interfaces

Now our query still works. But there is a new problem. Internal URLs don't work anymore but emit an error message instead:

Type "Url" does not implement "GenericUrl"

We need to adapt the annotation of the Url type, which is defined in another module. Sounds like a job for the hero we don't deserve, but we need right now. You can't say Drupal without screaming hook_alter from the top of your lungs!

Altering plugins

There's an alter hook for each plugin type in GraphQL. So all we need to do is implement hook_graphql_types_alter and add the GenericUrl interface to the Url type's interface list.
Note that the types are indexed by their plugin ID.
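In a .module file that could look like the following sketch (I'm assuming the Url type's plugin ID is "url" and that interfaces are a plain array in the definition - the linked diff is authoritative):

```php
/**
 * Implements hook_graphql_types_alter().
 */
function graphql_example_graphql_types_alter(&$definitions) {
  // Types are keyed by their plugin ID; "url" is the Url type
  // provided by graphql_core.
  $definitions['url']['interfaces'][] = 'GenericUrl';
}
```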

Diff: Altering existing plugins

Great! Now we are able to fetch page titles from both internal and external URLs.

query {
  admin:route(path: "/admin") { pageTitle }
  drupal:route(path: "") { pageTitle }
}

Will return:

{
  "data": {
    "admin": {
      "pageTitle": "Administration"
    },
    "drupal": {
      "pageTitle": "Drupal - Open Source CMS |"
    }
  }
}

But you will notice that we lost all the other fields attached to the Url type. That's because they are not attached to the GenericUrl interface, but to the Url type. And that makes sense, since you can't, for example, request an entity or the current user context for an external path.

Query composition and fragment selection

And this brings us to the most important and powerful aspect of interfaces and types. We are able to apply different query fragments and fetch different information based on the result type.

Assume the following scenario: Our Article type has a Links field that can contain links to either other articles or external URLs, as well as a Description field. Additionally, we extended our ExternalUrl type with an additional meta field that pulls meta tags out of the XML tree (Bonus objective: implement that yourself). Now we could do this:

query {
  route(path: "/node/1") {
    ... on Url {
      nodeContext {
        ... on NodeArticle {
          fieldLinks {
            url {
              pageTitle
              ...InternalLink
              ...ExternalLink
            }
          }
        }
      }
    }
  }
}

fragment InternalLink on Url {
  nodeContext {
    ... on NodeArticle {
      fieldDescription
    }
  }
}

fragment ExternalLink on ExternalUrl {
  description: meta(property: "og:description")
}

The first part simply routes to the article with id 1 and fetches its Links field, which will emit a list of URLs that might be internal or external. There we first pull the common page title and then include two fragments that apply to either type of URL and invoke different fields based on that information. So elegant!

The finish line

We've reached the (preliminary) end of our streak of practical GraphQL blog posts. Next up will be a peek into the future of the GraphQL module with planned features and possible use cases. But if you are interested in more advanced topics like performance optimisation, caching or deeper integration with Drupal subsystems (fields, views, contexts ...) ping me @pmelab and I'll see what I can do.

Aug 16 2017

The last blog post might have left you wondering: "Plugins? It already does everything!". Or you are like one of the busy contributors, have already identified a missing feature, and can't wait to take the matter into your own hands (good choice).

In this and the following posts we will walk you through the extension capabilities of the GraphQL Core module and use some simple examples to show you how to solve common use cases.

I will assume that you are already familiar with developing Drupal modules and have some basic knowledge of the Plugin API and Plugin Annotations.

The first thing you will want to do is disable the GraphQL schema and result caches. Add these parameters to your

parameters:
  graphql.config:
    result_cache: false
    schema_cache: false

This will make sure you don't have to clear caches with every change.

As a starting point, we create an empty module called graphql_example. In the GitHub repository for this tutorial, you will find the end result as well as commits for every major step.

Diff: The module boilerplate

A simple page title field

Can't be too hard, right? We just want to be able to ask the GraphQL API what our page title is.
To do that we create a new class PageTitle in the appropriate plugin namespace Drupal\graphql_example\Plugin\GraphQL\Fields.
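A first, naive version of that class could look like this sketch. The base-class namespace and the static \Drupal service calls are my assumptions for brevity - the diff linked below shows the real implementation:

```php
namespace Drupal\graphql_example\Plugin\GraphQL\Fields;

use Drupal\graphql_core\GraphQL\FieldPluginBase;
use Youshido\GraphQL\Execution\ResolveInfo;

/**
 * Get the title of the current page.
 *
 * @GraphQLField(
 *   id = "page_title",
 *   type = "String",
 *   name = "pageTitle",
 *   nullable = true,
 *   multi = false
 * )
 */
class PageTitle extends FieldPluginBase {

  /**
   * {@inheritdoc}
   */
  public function resolveValues($value, array $args, ResolveInfo $info) {
    // Resolve the title of whatever route the current request matched.
    $request = \Drupal::request();
    $route = \Drupal::routeMatch()->getRouteObject();
    if ($route) {
      yield \Drupal::service('title_resolver')->getTitle($request, $route);
    }
  }

}
```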

Let's talk this through. We've created a new derivation of FieldPluginBase, the abstract base class provided by the graphql_core module.

It already does the heavy lifting for integrating our field into the schema. It does this based on the meta information we put into the annotation:

  • id: A unique id for this plugin.
  • type: The return type GraphQL will expect.
  • name: The name we will use to invoke the field.
  • nullable: Defines if the field can return null values or not.
  • multi: Defines if the field will return a list of values.

Now, all we need to do is implement resolveValues to actually return a field value. Note that this method expects you to use the yield keyword instead of return, and therefore returns a generator.

Fields also can return multiple values, but the framework already handles this within GraphQL type definitions. So all we do is yield as many values as we want. For single value fields, the first one will be chosen.

So we run the first GraphQL query against our custom field.

query {
  pageTitle
}

And the result is disappointing.

{
  "data": {
    "pageTitle": null
  }
}

Diff: The naive approach

The page title is always null because we extract the page title of the current page, which is the GraphQL API callback and has no title. We then need a way to tell it which page we are talking about.

Adding a path argument

Lucky for us, GraphQL fields can also accept arguments. We can use them to pass the path of a page and get the title for real. To do that, we add a new annotation property called arguments. This is a map of argument names to argument types. In our case, we added one argument named path that expects a String value.

Any arguments will be passed into our resolveValues method with the $args parameter. So we can use the value there to ask the Drupal route matcher to resolve the route and create the proper title for this path.
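With the argument in place, the resolver might look like the sketch below (it needs a use statement for Symfony\Component\HttpFoundation\Request; the choice of the router.no_access_checks service is my assumption, the original code may wire this differently):

```php
// The annotation gains:
//   arguments = {
//     "path" = "String"
//   }

/**
 * {@inheritdoc}
 */
public function resolveValues($value, array $args, ResolveInfo $info) {
  // Match the given path against Drupal's router and resolve the title
  // of the route behind it.
  $request = Request::create($args['path']);
  $match = \Drupal::service('router.no_access_checks')->matchRequest($request);
  if (!empty($match['_route_object'])) {
    yield \Drupal::service('title_resolver')->getTitle($request, $match['_route_object']);
  }
}
```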

Let's try again.

query {
  pageTitle(path: "/admin")
}

Way better:

{
  "data": {
    "pageTitle": "Administration"
  }
}

Congratulations, MVP satisfied - you can go home now!

Diff: Using arguments

If only there wasn't this itch every developer gets when the engineering senses start to tingle. Last time we stumbled on this ominous route field that also takes a path argument. And this ...

query {
  pageTitle(path: "/node/1")
  route(path: "/node/1") {
    # ...
  }
}

... smells like a low hanging fruit. There has to be a way to make the two of them work together.

Attaching fields to types

Every GraphQL field can be attached to one or more types by adding the types property to its annotation. In fact, if the property is omitted, it will default to the Root type which is the root query type and the reason our field appeared there in the first place.

We learned that the route field returns a value of type Url. So we remove the argument definition and add a types property instead.

This means the $args parameter won't receive the path value anymore. Instead, the $value parameter will be populated with the result of the route field. And this is a Drupal Url object that we already can be sure is routed since route won't return it otherwise. With this in mind, we can make the solution even simpler.
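The contextual version could then be condensed to something like this (same caveats as before: base class, imports and service wiring are assumptions, and the linked diff is authoritative):

```php
/**
 * Get the page title of a routed URL.
 *
 * @GraphQLField(
 *   id = "page_title",
 *   type = "String",
 *   name = "pageTitle",
 *   types = { "Url" }
 * )
 */
class PageTitle extends FieldPluginBase {

  /**
   * {@inheritdoc}
   */
  public function resolveValues($value, array $args, ResolveInfo $info) {
    // $value is the routed Url object emitted by the route field.
    if ($value instanceof Url && $value->isRouted()) {
      $request = Request::create($value->toString());
      $route = \Drupal::service('router.route_provider')
        ->getRouteByName($value->getRouteName());
      yield \Drupal::service('title_resolver')->getTitle($request, $route);
    }
  }

}
```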

Now we have to adapt our query since our field is nested within another.

query {
  route(path: "/admin") {
    pageTitle
  }
}

Which also will return a nested result.

{
  "data": {
    "route": {
      "pageTitle": "Administration"
    }
  }
}

The price of a more complex nested result might seem high for not having to pass the same argument twice. But there's more to what we just did. By attaching the pageTitle field to the Url type, we added it wherever the type appears. Apart from the route field this also includes link fields, menu items or breadcrumbs. And potentially every future field that will return objects of type Url.
We just turned our simple example into the Swiss Army Knife (pun intended) of page title querying.

Diff: Contextual fields

I know what you are thinking. Even an achievement of this epic scale is worthless without test coverage. And you are right. Let's add some.

Adding tests

Fortunately the GraphQL module already comes with an easy to use test base class that helps us to safeguard our achievement in no time.

First, create a tests directory in the module folder. Inside that, a directory called queries that contains one file - page_title.gql - with our test query. A lot of editors already support GraphQL files with syntax highlighting and autocompletion, which is why we moved the query payload into a separate file.

The test itself just has to extend GraphQLFileTestBase, add the graphql_example module to the list of modules to enable and execute the query file.
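The whole test can fit on one screen. The test-base namespace and the executeQueryFile helper name below are my assumptions about the module's test API at the time - the "Adding a test" diff has the exact code:

```php
namespace Drupal\Tests\graphql_example\Kernel;

use Drupal\Tests\graphql_core\Kernel\GraphQLFileTestBase;

/**
 * Tests the pageTitle field.
 */
class PageTitleTest extends GraphQLFileTestBase {

  /**
   * {@inheritdoc}
   */
  public static $modules = ['graphql_example'];

  /**
   * Check that the query file returns the expected page title.
   */
  public function testPageTitle() {
    $result = $this->executeQueryFile('page_title.gql');
    $this->assertEquals('Administration', $result['data']['route']['pageTitle']);
  }

}
```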

Diff: Adding a test


We just created a simple field, passed arguments to it, learned how to attach it to an already existing type and finally verified our work by adding a test case. Not bad for one day's work. Next time we will have a look at Types and Interfaces, and how to use them to create fields with complex results.

Aug 10 2017

More than data

After being educated by more than a decade of thinking in terms of RESTfulness and pure data, the solution seems obvious. Drupal 8 has Typed Data and therefore we know everything about our data structure and can just automatically generate GraphQL types, fields and interfaces from it. Awesome, we’re done!

Well you only need the light when it's burning low
Only miss the sun when it starts to snow
Only know you love her when you let her go

I suspect that Michael David Rosenberg was trying to build a decoupled Drupal website, gave up and wrote this song instead. The moment you reduce Drupal to a pure object database you will learn what else it does for you. How did you plan to handle ...

  • Output filtering and XSS protection
  • Routing and Redirecting
  • Arbitrary Listings
  • Image processing
  • Menus and Breadcrumbs
  • Blocks
  • Translations
  • The editable and context sensitive “Contact” block in the footer …?

And that’s just what ships with Core. Obviously, you can build that all into your frontend application, but that’s a lot of development time spent on implementing things that have already been tested and are there for you to use.
As stated in the first post in this series, GraphQL is an application level query language, which means data and operations are equally important. Actually, this brings it rather close to SOAP. This might send shivers down some spines, but stay with me - we learned from the lessons of the past and gained some valuable insights.

Submodule overview

After experiencing some painful “Aha”-moments, we started to implement a GraphQL Schema that allows us to tap into very different levels and areas of Drupal’s feature-set instead of typed data alone. This growing collection of types and fields has been split up into submodules that can be enabled on demand. Let’s have a look at them, one by one.

GraphQL Core

The base module all others depend on. It introduces a system of overridable plugins (Fields, Scalars, Types, Interfaces, Enumerations and Input Types) that are automatically assembled to form our GraphQL Schema. In addition to that, it includes plugins that expose some basic Drupal mechanisms: Routing, Context and Entities.

For every entity type, there is a corresponding query field that makes it possible to run property-filtered queries and will return a list of entity objects that contain the UUID. This might not seem very helpful, but more on this later.

Example of a GraphQL node query.

More interesting, however, is the route field. It takes a path as an argument and returns an object of type Url which contains fields for every defined context in Drupal. This way it’s possible to pull the content language, current user or node for a given path. Any context provided by a contrib module will also be picked up automatically.

Example of a GraphQL context query.
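To make the captioned example concrete, a context query could look roughly like this. The context field names are illustrative - which fields exist depends on the context providers installed on your site:

```graphql
query {
  route(path: "/node/1") {
    # Every context defined in Drupal becomes a field on the Url type.
    nodeContext {
      entityLabel
    }
  }
}
```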

GraphQL Content

As mentioned above, the entity objects returned by entity queries and contexts only contain the UUID, and are therefore not terribly useful. That’s where the second base module takes to the stage. In the last blog post, we already used it to configure a view mode and defined the fields that are exposed to the GraphQL API. Let’s have a closer look what happened there.

The module will pick up all enabled content entity types and bundles and transform them into corresponding GraphQL interfaces and types with fields for common entity properties. If a view mode has been assigned, the configured fields will be attached to the GraphQL type. The fields will return string values rendered according to their field formatter configuration. This means we keep input filters on our body field and all other field formatters that are out there as well.

Retrieving a rendered field value with GraphQL.

The latter is what happens by default. It is, however, possible to define special field formatters that alter this behaviour and return custom GraphQL types. One example is the built-in "GraphQL Raw value" formatter, which returns Typed Data objects. By that, we successfully closed the circle and are able to get back to raw, unprocessed data values when necessary.

Querying raw Typed Data values with GraphQL.

A bunch of modules in the repository do something similar to “GraphQL Raw value”, but with a different spin for the corresponding field type. Below is a brief rundown on each module’s purpose:

GraphQL Boolean

Provides a GraphQL Boolean formatter that will make sure that the response value is an actual boolean and not a string. As simple as that.

Example of a boolean GraphQL value.

GraphQL File

Turns file fields into objects containing useful file information like mime type, file size and the absolute file URL for downloading it.

GraphQL file field query.

GraphQL Image

One of the more important ones. Image fields will return types that contain “derivative” fields that allow us to pull information about specific image styles. This also makes it easily possible to grab multiple image styles at once.

Query image styles with GraphQL.
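As a rough illustration of the captioned query - the field, style and derivative names here are assumptions for a typical article setup:

```graphql
query {
  route(path: "/node/1") {
    nodeContext {
      ... on NodeArticle {
        fieldImage {
          thumbnail: derivative(style: "thumbnail") {
            url
          }
          hero: derivative(style: "large") {
            url
          }
        }
      }
    }
  }
}
```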

GraphQL Entity Reference

Instead of returning the rendered entity or the raw target_id value, this allows us to traverse the entity relation and query the related type. The result type will again expose fields based on its configured view mode for the retrieved entity type.

Traverse entity references with GraphQL.

GraphQL Link

Exposes the link field title, attributes and URL. So far, so “raw value”. But there’s a catch! The URL field is not a simple string, but an object of type Url.

Remember, the one also returned by the route field with all defined contexts attached? It is possible to pull contextual information for any path inserted into a link field. But beware: this will issue a kernel sub-request, which will have a performance impact if you overdo it. Not sayin’ it’s a good idea, just sayin’ it’s there.

Link field properties and magic.

GraphQL Content mutation

Adds GraphQL mutations for all content entities, which means it enables you to write any content data out of the box - provided the user has the required permissions, since this and all other entity features respect entity access.

Creating a node with GraphQL.

GraphQL Block

This module adds a blocksByRegion field to Url objects, which will retrieve block entities displayed in this region. Access conditions will be evaluated against the Url object’s path and the default theme. Until configuration entities are fully supported, this field has limited use on its own.

However, when used in combination with “GraphQL Content”, returned Content Blocks contain the configured fields just as nodes do. This way we maintain the ability to inject blocks of meta-information that are configurable from the Drupal backend and even respect any visibility settings and contexts.

Retrieving blocks by region.

GraphQL Menu

This module provides us with a root level field for querying menu trees. The structure returned will respect access permissions but it does not mark active states. This is done in the client implementation when necessary so we don’t have to re-fetch the whole menu tree on every location update.

Querying menu trees.

GraphQL Breadcrumbs

Retrieve a list of breadcrumb links from any URL object.

Retrieve the breadcrumb path with GraphQL.

GraphQL Views

Last but not least, there is the views integration. One of the most versatile tools in Drupal is also a must have in your decoupled application. We already used it in the last instalment of this series to create the listing of posts, so we can skip boring examples and get to the juicy theory right away!

Any “GraphQL” views display will provide a field that adapts to the view’s configuration:

  • The field’s name is composed of the view’s and display’s machine names, or can be configured manually.
  • If the view is configured with pagination, the field will accept pager arguments and return the result list and count field instead of the entity list directly.
  • Any exposed filters will be added to the “filters” input type that can be used to pass filter values into the view.
  • Any contextual filters will be added to the “contextual_filters” input type.
  • If a contextual filter’s validation criteria match an existing GraphQL type, the field will be added to this type too, and the value will be populated from the current result context.
    TL;DR: We are able to create a view that takes a node as context argument and the field will be added to all GraphQL node objects.

Querying data using GraphQL and views.

Future Features & Lookout

That’s a wrap! We covered all the automated schema-building features included in the GraphQL module at the time of writing. There is a lot going on, and the community is working hard to implement improvements like:

  • Support for views field displays.
  • Partial query caching.
  • A dedicated user interface for graph configuration.
  • Mutations based on actions and rules.
  • Performance optimization with deferred resolving and lookaheads.

In the next blog post we will take a look at the underlying API and show how you can use it to extend the schema with custom functionality and - even more importantly - contribute and help make these features come alive!

About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
