Nov 07 2018

Last September Dropsolid sponsored and attended Drupal Europe. Compared to the North American conferences, getting Europeans to travel to another location is challenging - certainly when it has to compete with so many high-quality conferences such as Drupal camps, Drupal Dev Days, Frontend United, Drupalaton, Drupaljam and Drupal Business Days. I'm happy for the team that they succeeded in making Drupal Europe profitable; this is a huge accomplishment and it also sends a strong signal to the market!

Knowing these tendencies, it was amazing to see how well Drupal Europe fit the market - also a great sign for Drupal as a base technology and for its continued growth. For Dropsolid it was a must to attend, help and sponsor such an event: not only because it gives us visibility in the developer community, but also because it lets us connect with the latest technologies surrounding the Drupal ecosystem.

The shift to decoupled projects is a noticeable one for Dropsolid, and even the Dropsolid Platform itself is a decoupled Drupal project using Angular as our frontend. Next to that, we had a demo at our booth that showed a WebVR environment on our Oculus Rift, with content coming from a Drupal 8 application.

People trying our VR-demo at Drupal Europe

On top of that, Drupal Europe was so important to us that our CTO helped the content team as a volunteer, selecting the sessions related to DevOps & Infrastructure. Nick has been closely involved in this area and we're glad to donate his time to help curate and select high-quality sessions for Drupal Europe.

None of this would have been possible without the support of our own government, which supports companies like Dropsolid in being present at these international conferences. Even though Drupal Europe was a new concept, it was accepted as a niche conference that allows companies like Dropsolid to build brand awareness and knowledge outside of Belgium. We thank them for this support!

Flanders Investment and Trade

From Nick: “One of the most interesting sessions for me was the keynote about the future of the open web and open source. The panel included, next to Dries, Barb Palser from Google, DB Hurley from Mautic and Heather Burns. From what we gathered, Matt Mullenweg was also supposed to be there, but he wasn't present - too bad, as I was hoping to see such a collaboration and discussion. The part that struck me the most was the 'creepifying' of our personal data and how this could be reversed. How can you gain control over access to your own data, and how can you revoke that access? Just imagine how many companies have your name and email address, and how technology could disrupt that into a world where an individual controls what is theirs. I recommend watching the keynote in any case!”

[embedded content]

We also got a glimpse of what Drupal.org could look like with the announced GitLab integration. I can't recall ever being this excited about something that addresses my personal maintenance pain, with in-line editing of code being one of the most amazing improvements. More explanation can be found at https://dri.es/state-of-drupal-presentation-september-2018.

[embedded content]

From Nick: 
“Another session that really caught our eye and is worthy of a completely separate blog post is Markus Kalkbrenner's session about advanced Solr. To give you some context: I've been working with Solr for more than 9 years - I can even prove it with a commit! https://cgit.drupalcode.org/apachesolr_ubercart/commit/?id=b950e78. This session was mind-blowing. Markus used very advanced concepts that I hardly knew existed, let alone had found an application for.

One of the use cases is a per-user sort based on the favorites of a user. The example Markus used was a recipe site where you can rate recipes. Obviously you could sort on the average rating, but what if you want to sort the recipes by 'your' rating? This might seem trivial, but it is a very hard problem to solve, as you have to normalize a dataset in Solr, which is by default a denormalized dataset.

Now, what if you want to use this data to get personalized recommendations? This means we have to learn about the user and use this data on the fly to get recommendations based on the votes the user applied to recipes. Watch how this works in Markus's recording and be prepared to have your mind blown.”

[embedded content]

There were a lot of other interesting sessions; most of them were recorded and their details can be found and viewed at https://www.drupaleurope.org/program/schedule. If you are interested in the future of the web and how Drupal plays an important role in it, we suggest you take a look. If you are more into meeting people in real life and being an active listener, there is Drupalcamp Ghent (http://drupalcamp.be) on the 23rd and 24th of November. Dropsolid is also a proud sponsor of this event.

And an additional tip: Markus’s session will also be presented there ;-)

Jul 16 2018

Nick Veenhof

Recently, I was invited to go on the Modern CTO podcast as a guest. We talked about developer culture, how to measure efficiency and velocity and, more importantly, how you can make the teams as independent as possible without losing that team and company feeling.

Modern CTO is the place where CTOs hang out. Listen in on their weekly podcast while they hang out with interesting Fortune 500 CTOs in aerospace, artificial intelligence, robotics and many more industries. As of 2018 the show has 72k listeners, and they are incredibly grateful to each and every one of them.

It was a real honour to talk to Joel Beasley and have this back-and-forth conversation about how we transformed Dropsolid into a great place to work that is also measurable and technically innovative!

[embedded content]

Some of the topics that we talked about in the podcast also came up in the presentation I gave at Drupal Developer Days in Lisbon. Feel free to scroll through the slides to get more context out of the podcast!

Apr 24 2018

We all use searches multiple times a day without ever giving them a second thought. Browsing a webshop to find that one particular product, searching through forums to find a solution to your specific problem, or filtering stores based on your location to find the closest one, etc.

All of these examples require the same thing: content that is indexed in such a way that it can be filtered. In general, this is quite easy to set up: all you need is a database and a query to get you started.

However, what should you do if your visitors are more demanding and expect to be fed the right content when searching for plurals or a combination of words or synonyms? In the majority of cases, such complex queries fall beyond the reach of default search solutions, leading to dreaded messages like ‘Your search yielded no results’. This very quickly leads to user frustration and, subsequently, fewer conversions on your website. And this is only the start of it… What if your website also serves Germanic languages other than English? Suddenly, you are confronted with concatenations of words such as the infamous ‘Rindfleischetikettierungsüberwachungsaufgabenübertragungsgesetz’ or ‘Chronischevermoeidheidssyndroom’.

In this blogpost, we explain how you can configure Apache Solr to improve your multilingual content indexing and querying. We will shed some light on the ways indexing and querying is configured and can be tested, so you can make sure that Solr can understand your content and return a better result when users are searching through it. We will be using Dutch as an example, because of its compound word complexity. The underlying principles, however, are valid in plenty of use cases for other languages and search-related problems.


What are compound words?

First things first: let's analyze our definitions, sticking with the example of ‘chronischevermoeidheidssyndroom’. This word consists of multiple building blocks: the adjective ‘chronische’ and the nouns ‘vermoeidheid’ and ‘syndroom’. In Dutch, it is perfectly acceptable to combine these elements into one long noun - and the exact same principle applies to German. In English, the direct translation looks very similar: ‘Chronic Fatigue Syndrome’. The only difference, of course, is those handy spaces in between the individual components! Most language processing tools split words by spaces, which makes it easy to search for parts of a search term, as they already appear split up in the text. In the case of the German and Dutch examples above, this isn't so easy to do. Because of this added complexity, we will need to configure our language processing tool to understand what the possible compound words are and how they are combined. Luckily, there are certain grammar tools around that make it possible to tackle this added complexity through handy algorithms!


Getting started

First of all, we must make sure to install the necessary modules:

The Search API module acts as a bridge between Drupal and different search servers, and the Search API Solr module allows it to communicate with a Solr server.
Once those modules have been installed, the most important one for multilingual sites is the Search API Solr Multilingual module. This module allows you to connect to a Solr server with better support for non-English languages.

If you are using the 8.2.x branch of Search API Solr, you won’t have to download the multilingual module, as it is merged into the Search API Solr module.
Once these modules have been installed, you will be able to set up a connection to a Solr server using the Multilingual backend connector. We will not go any deeper into the whole installation process, as the modules all come with their own detailed installation instructions.


Configuring your Solr server for multilingual content

The multilingual Solr module also provides a download mechanism that generates the Solr configuration files that are needed to support multilingual content indexation. 
One of the most important files in this configuration is the schema_extra_types.xml file.


<!--
 Dutch Text Field
 5.0.0
-->
<fieldType name="text_nl" class="solr.TextField" positionIncrementGap="100">
 <analyzer type="index">
   <charFilter class="solr.MappingCharFilterFactory" mapping="accents_nl.txt"/>
   <tokenizer class="solr.WhitespaceTokenizerFactory"/>
   <filter class="solr.WordDelimiterFilterFactory" catenateNumbers="1" generateNumberParts="1" protected="protwords_nl.txt" splitOnCaseChange="0" generateWordParts="1" preserveOriginal="1" catenateAll="0" catenateWords="1"/>
   <filter class="solr.LengthFilterFactory" min="2" max="100"/>
   <filter class="solr.LowerCaseFilterFactory"/>
   <filter class="solr.DictionaryCompoundWordTokenFilterFactory" dictionary="nouns_nl.txt" minWordSize="5" minSubwordSize="4" maxSubwordSize="15" onlyLongestMatch=""/>
   <filter class="solr.StopFilterFactory" ignoreCase="1" words="stopwords_nl.txt"/>
   <filter class="solr.SnowballPorterFilterFactory" language="Kp" protected="protwords_nl.txt"/>
   <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
 </analyzer>
 <analyzer type="query">
   <charFilter class="solr.MappingCharFilterFactory" mapping="accents_nl.txt"/>
   <tokenizer class="solr.WhitespaceTokenizerFactory"/>
   <filter class="solr.WordDelimiterFilterFactory" catenateNumbers="0" generateNumberParts="1" protected="protwords_nl.txt" splitOnCaseChange="0" generateWordParts="1" preserveOriginal="1" catenateAll="0" catenateWords="0"/>
   <filter class="solr.LengthFilterFactory" min="2" max="100"/>
   <filter class="solr.LowerCaseFilterFactory"/>
   <filter class="solr.SynonymFilterFactory" synonyms="synonyms_nl.txt" expand="1" ignoreCase="1"/>
   <filter class="solr.StopFilterFactory" ignoreCase="1" words="stopwords_nl.txt"/>
   <filter class="solr.SnowballPorterFilterFactory" language="Kp" protected="protwords_nl.txt"/>
   <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
 </analyzer>
</fieldType>

This file declares a field type called text_nl, which has some filters declared when indexing the content and some when performing a query on the index. Some names speak for themselves, for example:

  • MappingCharFilterFactory: this uses the accents_nl.txt file to map certain characters to others. Using this, we can filter out special characters, so the search mechanism can still understand what you're looking for even if you search without the special character.
  • WhitespaceTokenizerFactory: this tokenizer splits words on whitespace characters - this way, each word gets indexed separately.
  • LengthFilterFactory: filters out words based on the min and max values. Example: min=2, max=100 will filter out words that are shorter than 2 characters or longer than 100 characters.
  • LowerCaseFilterFactory: makes the token lower case.
  • StopFilterFactory: filters out words that are mentioned in the stopwords file attached to this filter. This list contains words with no added value, like 'but', 'for', 'such', 'this' or 'with'.
  • SnowballPorterFilterFactory: the most important argument for this factory is the language argument, as it defines how the stem of a word is determined. As you can see in the example, we are using Kp and not Nl for Dutch stemming, because the Kp stemmer stems Dutch words better. Want to know more about this algorithm? You can find all the details via this link.
    In short, this filter results in plural words being indexed together with the stem of the word. E.g. 'modules' → 'modules', 'module'
  • RemoveDuplicatesTokenFilterFactory: removes any duplicate tokens.

Some filters are more complex, so let’s explain them more in-depth:

  • WordDelimiterFilterFactory: this filter will split words based on its arguments.
    • CatenateNumbers: if non-zero, number parts will be joined: '2018-04' → '201804'.
    • GenerateNumberParts: if non-zero, numeric strings are split at delimiters: '2018-04' → '2018', '04'.
    • SplitOnCaseChange: if non-zero, words are split on case changes: 'splitOnCaseChange' → 'split', 'On', 'Case', 'Change'. If zero, they are left intact.
    • GenerateWordParts: if non-zero, words are split at delimiters: 'DrupalCon' → 'Drupal', 'Con'.
    • PreserveOriginal: if non-zero, the original entry is preserved: 'DrupalCon' → 'DrupalCon', 'Drupal', 'Con'.
    • CatenateAll: if non-zero, word and number parts will be concatenated: 'DrupalCon-2018' → 'DrupalCon2018'.
    • CatenateWords: if non-zero, word parts will be joined: 'high-resolution-image' → 'highresolutionimage'.
    • Protected: the path to a file that contains a list of words that are protected from splitting.
  • DictionaryCompoundWordTokenFilterFactory: this filter splits up concatenated words into separate words based on the list of words given as the dictionary argument. Example: 'flagpole' → 'flag', 'pole'.
  • SynonymFilterFactory: this filter allows you to define synonyms by passing along a comma-separated list of words as the synonyms argument. It can also be used to compensate for common spelling mistakes.
    • Example: 'drupal, durpal' will make sure that when a user searches for 'durpal', results indexed with 'drupal' are returned as possible matches.

With this setup, you should be able to make your search indexing and querying a lot smarter. You can find different synonyms.txt, nouns.txt and accent.txt files if you search the web for your language. 


Where can I find these txt files?

Remember the section about the compound words? This is where this knowledge comes in handy. We spent a long time browsing the web to find a good list of compound words and stop words. To make your life easier, we've attached them to this blog post as GitLab links for you to see, edit and collaborate on. These files are for Solr version 5 and above, and they cover the Dutch, English and French languages.

Pay attention, however! When adding these files to an existing index, you will need to use a multilingual server connection and then reindex your data. If you don’t do this, your index and query will no longer be in sync and this might even have a negative impact on your environment.


Testing indexed and query results

When you have installed a Solr server and core, you can visit the Solr dashboard. By default, this can be reached on localhost:8983/
If you select your core, you will be able to go to the Analysis tab.

This screen allows you to perform a search and see how the index (left input) or the query (right input) will handle your value. It's important to select the field type, so the analysis knows which filters it needs to apply to your value.


Things to avoid

Let’s stick with our example of the Dutch word ‘Chronischevermoeidheidssyndroom’ and see how the index will handle this word. If you don’t configure Apache Solr with support for Dutch, it will only store ‘chronischevermoeidheidssyndroom’ in the index. If someone were to look for all kinds of ‘syndromes’, this item wouldn’t show up in the website’s results. Perhaps you would expect otherwise, but Apache Solr isn’t that smart.


What you do want to happen

However, if the index is configured correctly with support for Dutch words, it will return the following results:
 

 Name of filter → Output

 MappingCharFilterFactory → C|h|r|o|n|i|s|c|h|e|v|e|r|m|o|e|i|d|h|e|i|d|s|s|y|n|d|r|o|o|m
 WhitespaceTokenizerFactory → Chronischevermoeidheidssyndroom
 LengthFilterFactory → Chronischevermoeidheidssyndroom
 LowerCaseFilterFactory → chronischevermoeidheidssyndroom
 DictionaryCompoundWordTokenFilterFactory → "chronischevermoeidheidssyndroom", "chronisch", "chronische", "scheve", "vermoeid", "vermoeidheid", "syndroom", "droom", "room"
 StopFilterFactory → "chronischevermoeidheidssyndroom", "chronisch", "chronische", "scheve", "vermoeid", "vermoeidheid", "syndroom", "droom", "room"
 SnowballPorterFilterFactory → "chronischevermoeidheidssyndroom", "chronisch", "chronische", "scheve", "vermoeid", "vermoeidheid", "syndroom", "droom", "room"
 RemoveDuplicatesTokenFilterFactory → "chronischevermoeidheidssyndroom", "chronisch", "scheve", "vermoeid", "syndroom", "droom", "room"


The word ‘Chronischevermoeidheidssyndroom’ will eventually be indexed with the following result: ‘chronischevermoeidheidssyndroom’, ‘chronisch’, ‘scheve’, ‘vermoeid’, ‘syndroom’, ‘droom’, ‘room’. If somebody searches for any of these words, this item will be marked as a possible result.

If, for example, we run a search for ‘Vermoeidheid’, we should expect that our beloved  ‘Chronischevermoeidheidssyndroom’ pops up as a result. Let’s try this out with the Solr analysis tool:

 MappingCharFilterFactory → V|e|r|m|o|e|i|d|h|e|i|d
 WhitespaceTokenizerFactory → Vermoeidheid
 WordDelimiterFilterFactory → Vermoeidheid
 LengthFilterFactory → Vermoeidheid
 LowerCaseFilterFactory → vermoeidheid
 SynonymFilterFactory → vermoeidheid
 StopFilterFactory → vermoeidheid
 SnowballPorterFilterFactory → vermoeid
 RemoveDuplicatesTokenFilterFactory → vermoeid


Eventually, our query will search for items indexed with the word ‘vermoeid’, which is also one of the tokens that was indexed for the word 'Chronischevermoeidheidssyndroom'.


In short

When setting up a Solr core for multilingual content, it's important that we provide extra field types that handle the text in the correct language. This way, Solr can index the words in such a way that plurals and concatenations of words are understood. This, in turn, provides a better experience to the user who is looking for a certain piece of content. With everything configured correctly, a user running a search for ‘syndroom’ will be served all compound words containing it as possible results, giving the user a better overview of your site's content.
 

You can find our Dropsolid resources here: https://gitlab.com/dropsolid/multilingual-solr-config 


Apr 18 2018

Our team had been using Varnish for a long time for our Drupal 7 projects at Dropsolid, and we thought the time had come to get it working for Drupal 8 as well. That is why our CTO, Nick Veenhof, organized a meetup about caching and purging in Drupal 8. Niels van Mourik gave an elaborate presentation about the Purge module and how it works.
I definitely recommend watching the video and the slides on his blog. In this blog post, we’ll elaborate and build on what Niels explained to us that day. 
First, let’s start off with a quick crash course on what Varnish actually is and how it can benefit your website.

Varnish 101

“Varnish Cache is a web application accelerator also known as a caching HTTP reverse proxy. You install it in front of any server that speaks HTTP and configure it to cache the contents. Varnish Cache is really, really fast. It typically speeds up delivery with a factor of 300 - 1000x, depending on your architecture.” (Source: Varnish-cache.org)

In layman’s terms, Varnish will serve a webpage from its own internal cache if it has it available. This drastically reduces the number of requests to the webserver where your application is hosted. This will, in turn, free up resources on your webserver, so your web application can handle more complicated tasks and more users.

In short, Varnish will make your web application faster and will allow you to scale it more efficiently.


How we use Varnish and Drupal 7

How did things typically work in D7? Well, you’d put a Varnish server with a Drupal compatible Varnish configuration file (vcl) in front of your Drupal 7 site and it would start caching it right away - depending, of course, on what is in the vcl and the headers your Drupal site sent.
The next step would be to install the Varnish module from Drupal.org. This module's sole purpose is to invalidate your Varnish cache, and it does so using telnet. This also requires the Varnish server to be accessible from the Drupal backend, which isn't always an ideal scenario - certainly not when multiple sites are being served from the same Varnish.

The biggest issue when using Drupal 7 with the Varnish module is that invalidation of content just isn't smart enough. For instance, if you updated one news item, you'd only want that page and the pages where the news item is visible to be removed from Varnish's cache. But that isn't possible. This isn't the module's fault at all - it's simply the way Drupal 7 was built. There are a few alternatives that do make it a little smarter, but these solutions aren't foolproof either.

Luckily, Drupal 8 is a whole new ballgame!


How we use Varnish with Drupal 8

Drupal 8 in itself is very smart at caching and it revolves around the following three main pillars (explained here from a page cache perspective; Drupal will also cache parts of pages):

  • Cache tags: A page will get a list of tags based on the content that is on it. For instance, if you have a news overview, all rendered news items will be added as tags to that page, allowing you to invalidate that cache if one of those news items changes.
  • Cache contexts: A page can be different based on variables from the current request. For instance, if you have a news overview that filters the news items based on a query parameter.
  • Cache max-age: A page can be served from cache for X amount of time. After that time has passed, it needs to be built up again.

You can read more about Drupal 8’s new caching system here.
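To make these pillars a bit more tangible, here is a minimal sketch of how they show up in a custom render array. The node ID, tags and context are only illustrative values, not anything specific to our projects:

<?php
// Example render array in a custom block plugin or controller.
$build = [
  '#markup' => 'Latest news',
  '#cache' => [
    // Invalidated automatically whenever node 12 is saved or deleted.
    'tags' => ['node:12', 'node_list'],
    // Vary the cached result per query arguments (e.g. ?page=2).
    'contexts' => ['url.query_args'],
    // Allow this render array to be cached for at most one hour.
    'max-age' => 3600,
  ],
];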


All about invalidation

Niels van Mourik created a module called Purge. This is a modular external cache invalidation framework. It leverages Drupal’s cache system to provide easy access to the cache data so you only need to focus on the communication with an external service. It already has a lot of third-party integrations available like Acquia Purge, Akamai and Varnish Purge.

We are now adding another one to the list: the Dropsolid Purge module.


What does Dropsolid Purge do and why do I need it?

The Dropsolid Purge module enables you to invalidate caches in multiple Varnish load balancers. It also lets you cache multiple web applications with the same Varnish server. The module was heavily inspired by the Acquia Purge module and we reused a lot of its initial code, because it has a smart way of handling invalidation through tags - but we'll get into that a little later. The problem with the Acquia Purge module is that it is designed to work on Acquia Cloud: it depends on certain environment variables, and the Varnish configuration is proprietary knowledge of Acquia. This means that it isn't usable on other environments.

We also experimented with the Varnish Purge module, but it lacked support for cache invalidation in case you have multiple sites/multisites cached by a single Varnish server. This is because the module doesn't actually tell Varnish which site it should invalidate pages for, so it just invalidates pages for all the sites. It also doesn't have the most efficient way of passing along the invalidation requests. It contains two ways of sending invalidation requests to Varnish: one by one or bundled together. The one-by-one option results in a lot of requests, given that updating a single node could easily invalidate 30 tags. Using the bundled purger could make you hit the limit of your header size, but more on that later.


What's in the bag?

Currently we provide the following features:

  • Support for tag invalidation and everything invalidation,
  • The module will only purge tags for the current site by using the X-Dropsolid-Site header,
  • The current site is defined by the name you set in config and the subsite directory,
  • Support for multiple load balancers,
  • There is also a default vcl in the examples folder that contains the logic for the bans.

It can be used for any environment if you just follow the installation instructions in the readme.


Under the hood

Preparing and handling the responses for/by Varnish

By default, the module will add two headers to every response it gives:

  • X-Dropsolid-Site: A unique site identifier as a hash based on config (you provide through settings.php) and site parameters:
    • A site name
    • A site environment
    • A site group
    • The path of your site (e.g. sites/default or sites/somesubsite)
  • X-Dropsolid-Purge-Tags: A hashed version of each cache tag on the current page. (hashed to keep the length low and avoid hitting the maximum size of the header)

When the response reaches Varnish, it will save those headers along with the cache object. This will allow us to target these specific cache objects for invalidation.
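To give an idea of what happens on the Drupal side, the sketch below shows roughly how such headers could be attached to a response in an event subscriber. This is a simplified illustration, not the actual module code: the hashing scheme, the hard-coded site values and the class name are assumptions that merely mirror the description above.

<?php

namespace Drupal\my_purge_example\EventSubscriber;

use Drupal\Core\Cache\CacheableResponseInterface;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\FilterResponseEvent;
use Symfony\Component\HttpKernel\KernelEvents;

/**
 * Adds purge headers to every cacheable response (simplified illustration).
 */
class PurgeHeadersSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents() {
    return [KernelEvents::RESPONSE => ['onResponse']];
  }

  public function onResponse(FilterResponseEvent $event) {
    $response = $event->getResponse();
    if (!$response instanceof CacheableResponseInterface) {
      return;
    }
    // Unique site identifier: a hash of site name, environment and site path.
    // The real module derives these from configuration and site parameters.
    $site_id = md5('mysite' . 'live' . 'sites/default');
    // Hash every cache tag to keep the header short; the exact hash length
    // and separator are examples, not the module's actual format.
    $tags = $response->getCacheableMetadata()->getCacheTags();
    $hashed = array_map(function ($tag) {
      return substr(md5($tag), 0, 4);
    }, $tags);
    $response->headers->set('X-Dropsolid-Site', $site_id);
    $response->headers->set('X-Dropsolid-Purge-Tags', implode(' ', $hashed));
  }

}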

In our vcl file we also strip those headers, so they aren't visible to the end user:

sub vcl_deliver {
    unset resp.http.X-Dropsolid-Purge-Tags;
    unset resp.http.X-Dropsolid-Site;
}


Invalidation of pages

For Drupal 8 we no longer use telnet to communicate with Varnish; we use a BAN request instead. This request gets sent from our site, and it will only be accepted when it comes from a trusted source. We currently enforce this by validating the IP of the request against a list of IPs that are allowed to do BAN requests.

As we mentioned earlier, we provide two ways of invalidating cached pages in Varnish:

  • Tag invalidation: We invalidate pages which have the same cache tags as we send in our BAN request to Varnish.
  • Everything invalidation: We invalidate all pages which are from a certain site.

Tag invalidation

Just like the Acquia purge module, we send a BAN request which contains a group of 12 hashed cache tags which then will be compared to what Varnish has saved. We also pass along the unique site identifier so we indicate we only want to invalidate for a specific site.

Our BAN request has the following headers:

  • X-Dropsolid-Purge: Unique site identifier
  • X-Dropsolid-Purge-Tags: 12 hashed tags
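Purely as an illustration (the module builds and sends these requests for you), such a BAN request could be reproduced by hand with Guzzle. The host, port, separator and header values below are placeholders, not the module's exact wire format:

<?php
// Illustrative only: send a BAN request to Varnish manually.
$client = \Drupal::httpClient();
$client->request('BAN', 'http://127.0.0.1:6081/', [
  'headers' => [
    // Must match the identifier that was stored with the cached objects.
    'X-Dropsolid-Purge' => md5('mysite' . 'live' . 'sites/default'),
    // A group of hashed cache tags; joined here with a pipe so the regex
    // ban in the vcl can match any of them (format is an assumption).
    'X-Dropsolid-Purge-Tags' => '0a1b|4f2c|9e3d',
  ],
]);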

When Varnish picks up this request, it will go through the following logic:

sub vcl_recv {
    # Only allow BAN requests from IP addresses in the 'purge' ACL.
    if (req.method == "BAN") {
        # Same ACL check as above:
        if (!client.ip ~ purge) {
            return (synth(403, "Not allowed."));
        }

        # Logic for banning based on tags
        # https://varnish-cache.org/docs/trunk/reference/vcl.html#vcl-7-ban
        if (req.http.X-Dropsolid-Purge-Tags) {
            # Add bans for tags but only for the current site requesting the ban
            ban("obj.http.X-Dropsolid-Purge-Tags ~ " + req.http.X-Dropsolid-Purge-Tags + " && obj.http.X-Dropsolid-Site == " + req.http.X-Dropsolid-Purge);
            return (synth(200, "Ban added."));
        }
    }
}

We check if the request comes from an IP that is whitelisted. We then add bans for every cache object that matches our unique site identifier and matches at least one of the cache tags we sent along. 

You can easily test this by updating a node and seeing that Varnish serves you a new version of the page.
 

Everything invalidation

When the everything invalidation is triggered, a BAN request is sent with the following headers:

  • X-Dropsolid-Purge-All: True
  • X-Dropsolid-Purge: Unique site identifier

And we execute the following logic on Varnish’s side:
 

sub vcl_recv {      
    # Only allow BAN requests from IP addresses in the 'purge' ACL.
    if (req.method == "BAN") {
        # Same ACL check as above:
        if (!client.ip ~ purge) {
            return (synth(403, "Not allowed."));
        }
        # Logic for banning everything
        if (req.http.X-Dropsolid-Purge-All) {
            # Add bans for the whole site
            ban("obj.http.X-Dropsolid-Site == " + req.http.X-Dropsolid-Purge);
            return (synth(200, "Ban added."));
        }
    }
}

When Varnish receives a BAN request with the X-Dropsolid-Purge-All header, it will ban all cache objects that have the same unique site identifier. You can easily test this by executing the following command: drush cache-rebuild-external.

Beware: a normal drush cache-rebuild will not invalidate an external cache like Varnish.


Why this matters

To us, this is yet another step in making our cache smarter, our web applications faster and our servers leaner. If you have any questions about this post, you can always leave a comment in the comment section below or open an issue on drupal.org.

Are you looking for a partner that will help you to speed up your site, without having to switch hosting? The Dropsolid Platform helps you to adjust and streamline your development processes, without the typical vendor lock-in of traditional hosting solutions. At Dropsolid, we also offer dedicated hosting, but we never enforce our own platform. Dropsolid helps you to grow your digital business - from every possible angle!


Apr 04 2018

Nick Veenhof

Drupal

Yesterday a highly critical security issue in Drupal was released. The issue is considered critical because, as we understood it, it makes it possible to execute code as an anonymous user. This could lead to a complete hack of your site and complete exposure of your content - or, worse, if your webserver is badly configured, a full-scale hostile takeover of your server. (More background info available here and here.)

The issue was announced to the Drupal community a week early, so our Dropsolid team had plenty of time to anticipate and prepare. Currently, Dropsolid serves 482 unique and active projects, which contain on average three environments. To be more precise, this gave us a whopping 1316 active Drupal installations to patch. These environments are located on 65 different servers. 45 of those servers are out of our hands and are managed by other hosting companies, such as Combell or even dedicated hardware on site with the customer. At Dropsolid we prefer to host the websites within our own control, but to the Dropsolid Platform this ultimately makes no difference. For some customers we also collaborate with Acquia - these clients are taken care of by Acquia’s golden glove service.

So, back to preparing to patch all the different Drupal installations. We would be lying if we said that all Drupal installs were running on the latest and greatest, so we used Ansible and the Dropsolid Platform to gather all the necessary data and perform a so-called dry run. This was a real-world test across all our installations to verify that we could pass on a patch and then deploy it as soon as we had confirmed that the patch worked for all the versions available on our Dropsolid Platform. For example, it verified whether the patch tool was available on the server, it injected a text file that we then patched to make sure the flow of patching a Drupal installation would go smoothly, etc. Obviously we detected some hiccups as we were testing, but we were left with enough time to resolve all issues in advance.

Throughout the evening, we had plenty of engineers on stand-by, ready to jump in should something in the automated process go wrong. The entire rollout took us about 2 hours - from the release of the patch, over verifying it on all the different Drupal releases, to rolling it out on all sites and, finally, relaxing with a few beers. This doesn't mean we had it easy: we had to put in a lot of hours beforehand just to make sure we could handle this load in that amount of time. That is why we are continuously building on our Dropsolid Platform.

Those who joined our hangout could bear witness to exactly how comfortable and relaxed our engineers were feeling during the rollout.

You might ask: joined our hangout? What are we on about exactly? Well, since the Drupal community was in this together, I suggested on Twitter that we all join in together and at least make it a fun time.

A few nice things that happened during this hangout:

  • Someone played live ukulele for us while we waited
  • Someone posted a fake patch and made everyone anxious, but at least it was a good test!
  • People were able to watch, in total transparency, how Dropsolid coped with this patch, and they were also able to interact and talk to others in the hangout.

It made the whole evening a fun activity, as witnessed by Baddy Sonja.

Obviously this couldn’t have happened without the help of our great engineers at Dropsolid - and also because we invest a lot of our R&D time into the development of the Dropsolid Platform, so we can do the same exercise times 10 or times 100 without any extra human effort. Thanks to the Drupal security team for the good care and the warning ahead of time. It made a tremendous difference!

All our Dropsolid customers can rest assured that we have their backs, all the time!

If you are not a Dropsolid customer yet and you are interested to see how we can help you make your digital business easy, we'd be more than happy to talk. If you are running a Drupal site and need help with your updates or with your processes, we'd be glad to help out and onboard you onto our Dropsolid Platform. You can keep your server contract while benefiting from our digital governance and expertise. Are you in charge of many digital assets and feeling the pain? Maybe it's time you can start doing the fun things again - just have a chat with us!


Feb 07 2018

In part 2 of this three-part series, we showed you how to set up your config management flow. In this part, we'll be looking at some use cases and common issues related to config management.
 

Back to part 1     Part 2


Starting on a new issue, or finishing one

There is an important must-do before you start developing on a new issue, after you've pulled the latest code with Git: don't forget to always import the configuration first! It should become second nature to run "drush config-import -y" (or the shorthand "drush cim -y") just after a Git pull or checkout.

When you finish an issue, you should run "drush config-export -y" (or the shorthand "drush cex -y") and check the exported files before committing your changes.


Live-specific configuration

To set live-specific configuration, you have two options:

  1. Set the config using the settings.php file directly on that environment
  2. Make a split that is only active on that environment

The general rule is to export your configuration in the sync folder and override for the environment that differs.
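If only one or two values differ per environment, option 1 is often the simplest: override them directly in that environment's settings.php. A minimal sketch, assuming a hypothetical my_api.settings configuration object with an endpoint key:

<?php
// In the live environment's settings.php: override a single config value.
// 'my_api.settings' and its 'endpoint' key are hypothetical examples.
$config['my_api.settings']['endpoint'] = 'https://api.example.com/v1';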

The following example should make this a bit clearer. Imagine you have an API that you need to connect to. Your local, development and staging environment should all connect to the development version of the API, while the live environment should connect to the live API. 

If you have a lot of configuration to do for your API connection, you can choose to use config split to solve your problem. We can state that most environments need to connect to the development API; therefore this config should be stored in the sync folder, so it gets imported onto all environments. We then need to override that config for the live environment by using a config split that is only active on the production environment. 

In practice, you would go about it in the following way:

  1. Make sure you have set up your environment settings files as described above
  2. Configure the dev settings in the configuration form
  3. Use “drush cex” to export all updated configuration
  4. Check Git if your changes have been added to the sync folder
  5. Go back to the config form and configure the live settings as if you were on a live site
  6. Use “drush csex live” to export your updated configuration only to your live split folder based on the live split settings.
  7. Check git if your changes have been added to the live sync folder
  8. Commit all your changes 
  9. Use “drush cim” to import all the config again according to your currently active splits

Important: make sure your Drush version is higher than 8.1.10, or it won't pick up the splits when you use drush cex or drush cim. Be aware that you still need to use csex and csim if you want to export or import from specific splits.
 

When you are adding configuration that should be ignored or split

If you are creating new functionality and some of the configuration should be ignored or split, beware: on the first import you do on other environments, the config that should have been ignored or split won't be.

This is because the newly added config to the Config Ignore or Config split module has not been taken into account yet. Sometimes, this can be to your advantage - but more often, the contrary is true.

The way to get around this is to manually add the Config Ignore or Config Split config to the environments where you want to deploy, or to take a two-step approach to deployment.


Avoid changing your default language

Changing your default language after you’ve already been exporting and importing your config can result in some very strange side effects with translations. 

If you ever find yourself in this situation, a good fix is to manually change the langcode in the yml files in your sync folder back to your default language. Also make sure that the strings you've used are in the same language as defined by the langcode.

A good rule to bear in mind: make sure all your config exports are done in the same language. Exports sometimes lead to errors, because the export language is determined by the language negotiation plugins. If you only use Drush to import and export your config, this module can help enforce a specific language: it will force all your Drush commands to be executed in a language you define. The Dropsolid dev team will be working together with Bart (Xano on Drupal.org) to see if we can help him get the module to a stable release.


Config translations can be tricky

You can easily translate your configuration using the Config Translation module from core. This sometimes can lead to unexpected side effects if you consider translations to be in the grey zone between config and content.

The issue here is that config translation and interface translation sometimes overlap - take for example the name of your content type. It is stored not only in config translation, but also in string translation. 

When you do an import, it will by default overwrite the translation you’ve set in interface translation. This is because the locale module has a subscriber that imports those strings on the import event. (See this issue.)
Be aware that this can even happen to custom translated strings that you use in your own code. If the same string happens to be used in config, it will get overridden. In this case, you can always assign a context to your translated string so it doesn't get overridden, as shown in the snippet below.
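For example, passing a context along with a translatable string in custom code looks like this (the context value itself is just an example):

<?php
// A custom translated string with its own context, so a config import of the
// same source string does not override this translation. Inside a class you
// would typically use $this->t() via StringTranslationTrait instead.
$label = t('Read more', [], ['context' => 'my_module_teaser']);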

For those cases where you can’t set a context on the translated string, we’ve developed and contributed the Locale: Config import module. This module will let you change the behaviour of how the translated config gets imported.

The module shows the following options:

Config Import behaviour - screenshot


Agreeing on best practices for those grey areas for content

As a team of developers, you should agree in advance on a couple of best practices concerning content grey areas. Below I’ve listed a few rules that we’ve been playing with at Dropsolid:

  • If you create a custom block plugin, use translatable markup with context for titles.
  • Initially, treat Roles and permissions as config, until you go live. Next, configure the Config Ignore module to ignore them and update them through update hooks.
  • Always treat webforms as if they are content. Deploy changes through update hooks.

Working towards a better flow


I hope this three-part post has provided you with some valuable insight!
This article has been a true team effort, so thanks to everyone who contributed (most notably Brent, Marek, Laurens, Kevin, Thomas and Nick). Together, we can make configuration management a little bit clearer for everyone who uses it!

No doubt this article will be changing through the coming months as everyone - including the team here at Dropsolid - is still looking to find that perfect workflow.
 

Back to part 1     Blog overview

Feb 07 2018

In the first part of this three-part blog post, we explained the existing options for managing your configuration across environments. We also shed some light on the grey areas between configuration and content. In this part, I'll be showing you how to set it all up.
 

Back to part 1     Skip to part 3

Setting up configuration management

Configuring the splits

Start by installing the config split module, available here.

We like to keep our splits simple and clear. Currently, we have four splits:

  • Blacklist: A split that doesn’t have a folder configured, so it doesn’t end up in our Git repository. This split usually has the same configuration as the config ignore module. The reason why will be explained in the part about config ignore.
  • Dev: A split that is used for our development environment-only configuration
  • Staging: A split that is used for our staging environment-only configuration
  • Live: A split that is used for our live environment-only configuration

Our blacklist configuration looks like this:

Blacklist configuration

If you don’t fill in the folder name, the Config Split module will export the config to a table in the database. That way, it doesn’t pollute your Git commits. For our other splits we use a folder outside our docroot for security reasons - e.g. for a dev split: ../config/splits/dev

Another important value here is the weight of the split. This weight will be used when multiple splits are active. The weight defines the order in which conflicting settings from splits get resolved. I will explain this in the part about the settings files, a little further down this post.

For the active checkbox you can choose whatever you want, because we’ll overwrite it later using the settings.php file. 

We’ve only been actively using the complete split configuration because it is pretty straightforward in how it handles your configuration. The additional configuration is handy because you can use wildcards. In the case of our blacklist, for example, we wanted to exclude everything from webform by using “webform.*”

We haven’t come across any use cases where we needed the conditional split, because we have a split active on every environment or the environment just needs the configuration from the sync folder.

Conditional split configuration

If you were to use it, the following use case would be a perfect fit. For instance, you have some configuration that you need on all environments, but you want it to be different on one environment. In our example, we want a different email address to be set in the Reroute Email settings on the staging environment. You would change it in the interface and then run "drush csex staging" to export it to the staging split folder. This allows the config file to be present both in the sync folder and in the split folder.


Configuring ignore

First, install the config ignore module.

The important thing to know about this module is that it only ignores config on import. This is why we have currently set up a blacklist split. A new module called Config Export Ignore (available here) is also at your disposal, but we haven’t used it in our workflow because it doesn’t have a stable release yet. We will look into this in the future though, because the “blacklist” split feels like a hack in some way.

Our default config ignore settings look like this:

config ignore settings screenshot

As you can see, you can also add some special rules to your ignore settings. The only downside is that config split does not support these special rules. You’ll have to tweak your blacklist split a little bit and make your peace with the added files in your repo, until we find or create a solution for this. 


The settings files

To get this working across all environments, you need a settings file that is aware of its environment. This can be done by checking an environment variable, or by deploying different settings files per environment. There you set which config should be active on which environment. Sometimes you need multiple splits to be active on a single environment; you should consider these splits as layers of config that you want to manage across environments.
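One way to do this with a single settings.php is to read an environment variable and toggle the splits based on it. A simplified sketch - the variable name and the environment values are just examples of how the detection could work, not part of our actual setup:

<?php
// Detect the current environment; the variable name is an example.
$environment = getenv('APP_ENV') ?: 'local';

// Set your config directory outside of your docroot for security
$config_directories[CONFIG_SYNC_DIRECTORY] = '../config/sync';

// Enable the splits that belong to this environment.
$config['config_split.config_split.blacklist']['status'] = TRUE;
$config['config_split.config_split.dev']['status'] = in_array($environment, ['local', 'dev']);
$config['config_split.config_split.staging']['status'] = in_array($environment, ['local', 'dev', 'staging']);
$config['config_split.config_split.live']['status'] = ($environment === 'live');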

Our settings.php file on our development environment is identical to our local environment. It contains the following lines:

// Set your config directory outside of your docroot for security
$config_directories[CONFIG_SYNC_DIRECTORY] = '../config/sync';

// Configure config split directory
$config['config_split.config_split.blacklist']['status'] = TRUE;
$config['config_split.config_split.dev']['status'] = TRUE;
$config['config_split.config_split.staging']['status'] = TRUE;
$config['config_split.config_split.live']['status'] = FALSE;

You might think that setting the staging split to true on the dev/local environment might be a mistake, but this is very intentional. As we are mostly using the complete split settings, this means that a config file can only be in one place. So we leverage the weight of the splits to also have certain staging config on our dev/local environment active. 
For instance, if we only want the reroute e-mail module to be active on local/dev/staging environments, we would add it to the complete split settings of the staging split.

Our settings.php for our staging environment will look as follows:

// Set your config directory outside of your docroot for security
$config_directories[CONFIG_SYNC_DIRECTORY] = '../config/sync';

// Configure config split directory
$config['config_split.config_split.blacklist']['status'] = TRUE;
$config['config_split.config_split.dev']['status'] = FALSE;
$config['config_split.config_split.staging']['status'] = TRUE;
$config['config_split.config_split.live']['status'] = FALSE;

Consequently, the settings.php for our live environment looks like this:

// Set your config directory outside of your docroot for security
$config_directories[CONFIG_SYNC_DIRECTORY] = '../config/sync';

// Configure config split directory
$config['config_split.config_split.blacklist']['status'] = TRUE;
$config['config_split.config_split.dev']['status'] = FALSE;
$config['config_split.config_split.staging']['status'] = FALSE;
$config['config_split.config_split.live']['status'] = TRUE;

Our Rocketship install profile (blog post in Dutch) has all these things preconfigured, which enables us to create more value for our clients.


Deploying changes

When you want to deploy changes to your environments, the only thing you need to do is make sure the config files that you've exported using 'drush config-export' are present on the environment you want to deploy to. Using a versioning system like Git can greatly help you manage those files across environments.

When you are ready to deploy your changes, just run the "drush config-import" command on that environment. Setting up Drush aliases can save you a lot of time when you want to execute Drush commands on remote environments. Read more about them on this Drupal.org page.

Once you have this in place, you’ll be able to easily manage your configuration across environments. Go check out part 3 if you want to explore a couple of real world use cases and issues we’ve encountered.
 

Revisit part 1     Read part 3

Feb 07 2018

This is the first of a series of three config management blog posts. In this series, we'll help you set up a good starting point and provide you with a few solutions for everyday configuration issues. The first part of this multi-part blog post will provide you with a bit of context. The second part goes into the nitty-gritty of configuration management, and the third part demonstrates some concrete use cases, pitfalls and their solutions!

Features and configuration management

Drupal 8 has been around for a while now and at Dropsolid we have substantial experience with large projects that use both the contributed Features module and the core configuration management system.

Essentially, Features leverages the configuration management system to package configuration with your module and overwrites certain rules for importing config from a packaged module after first install.

The alternative is to use configuration management from Drupal 8 core with the contributed Config Split and Config Ignore module.

Config split lets you set up rules for splitting your configuration per environment. Config ignore lets you ignore certain configuration to be imported.

The way you handle configuration fundamentally differs between both solutions. Features lets you whitelist the configuration you want to import. Configuration management from Drupal 8 core, with the addition of the mentioned contributed modules, works more like blacklisting the configuration you don't want to import. As a developer, I have always found it easier to narrow down what I want to have control over, instead of what I don't want to have control over.

We've come to the conclusion that the latter option actually works faster, which means more value for the client, but only if you have a good configuration to start from. As it turns out, configuration that you don't want to control is more often shared between different projects than config that you do want to control.

Content and config: blurred lines

One of the most wonderful things about Drupal 8 is config entities, a uniform way of creating and saving configuration throughout your site. It has been leveraged by many contributed modules to give end users a great experience in configuring their website. 

The downside of these configuration entities is that they often cross the line between what is considered content and what is considered configuration. We consider content to be everything a client needs to be able to change. A good example of this is webforms. Every webform you create is an instance of a configuration entity, whereas the submissions of the webform are instances of a content entity. If you want to know more about the difference between config and content entities, I advise you to read this article by Acquia.

We want clients to have full control over the kind of webforms they create, so they can use them effectively across their site to gather leads.

This brings us to the following issue. As a company we believe that a website is an ever-changing platform that grows over time. To have it grow at the speed that customers require, we need to work on multiple features with multiple teams at once. It needs to be stable and traceable. 

Part of this stability and traceability is having what we developers define as structural configuration versioned (in Git) and easily deployable, all without hindering the client from continuing to work on their content.

Thanks to configuration management, config split and config ignore we’ve been able to achieve all this for Drupal 8!

Ready to set up your configuration? Read on in part two of the blog post series!
 

Read part 2

Nov 16 2017

Building commerce websites always means building integrations. Every time we work on a project that is aimed at selling something (products, subscriptions, paid access, etc.), we have to choose a payment provider that will take care of our transactions.

Almost every payment provider out there gives us the ability to test their services using some sort of testing or sandbox environment. This is a nice chance to fully test the checkout funnel and avoid stressful situations when deploying the application to a live environment. (If you haven't read our previous article on Drupal payment integrations, start here!)

 

While setting up your commerce checkout flow locally, you’ll probably run into one (or all) of the following hiccups:

  • Issues with setting up a payment provider account,
  • No ability to parse incoming webhooks,
  • Problems with redirecting your customers back to the website.


The reason is simple: you’re hidden behind your NAT/firewall, so there is no way to reach your website from any remote server. As a workaround, you could probably create a manual payment method (for example bank transfer) and use this to browse all steps of your checkout flow. But you'll have to admit: this won't solve your main problem. Alternatively, you could clone your project to a remote environment, but is this always necessary? There has to be a better way to save time and keep working locally whilst keeping the full ability to test and debug remote services. Let’s have a closer look at how to wrap it all together with Drupal 8, Commerce 2 and a small tool called ngrok. In this example, I will be using the Mollie payments platform - scroll down to find out more! (Additional info about how to use Mollie and Drupal together in this post)

Download and set up ngrok

Ngrok is a very simple command line tool that allows you to expose a local web server to the internet. It takes only a few moments to set it up and its variety of configuration options is quite impressive (see the corresponding documentation).
 

Create ngrok account and download library files

Three easy steps:

  1. Simply visit ngrok.io and create a new account: https://dashboard.ngrok.com/user/signup
  2. Download and unpack the ngrok binary: https://ngrok.com/download
  3. Install your authtoken. Your personal account comes with its own token, which is available for reference via this link: https://dashboard.ngrok.com/get-started

When you get your token, locate the ngrok file and run the following command:

# ./ngrok {command} {param_is_a_token_containing_lots_of_strange_chars_like_3Adacx$$321!}

./ngrok authtoken 5RpgRe8UA4qFWXtZoZb5P_3KeqMMsh6BjYMtWnJpgJt


Tune up your virtual host

Tune up the configuration of the virtual host for the website you want to test by adding a server alias. For example, your apache2 vhost, with URL http://ngrok.local, should have a *.ngrok.io alias, so your conf file should start like this:

<VirtualHost *:80>

  ServerName ngrok.local
  ServerAlias *.ngrok.local *.ngrok.io

Run ngrok

The very basic usage to start a simple HTTP tunnel goes like this:

# ./ngrok {command} {local_hostname}:{port}

./ngrok http ngrok.local:80
  • the http command says: start an HTTP tunnel
  • the local_hostname argument says: forward the tunnel to this hostname
  • and the port argument says: listen on that port

After running the command, you will see ngrok up and running:

Testing Drupal Commerce 2 and Mollie payments with ngrok - preview of started session

That should do the trick for now: you should be able to visit your page at the URL provided by ngrok. The unfortunate thing about a free account is that every time you start a new tunnel, it creates a new URL. You can visit the ngrok web interface at http://127.0.0.1:4040 to check the status, incoming requests, connections, headers and much more.
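One Drupal 8-specific thing to watch out for: if your site uses trusted host patterns, the ngrok hostname has to be allowed as well, or Drupal will reject requests coming in through the tunnel. A minimal sketch for settings.php - the exact patterns depend on your setup:

<?php
// Allow both the local vhost and the generated *.ngrok.io hostnames.
$settings['trusted_host_patterns'] = [
  '^ngrok\.local$',
  '^.+\.ngrok\.io$',
];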

Set up Commerce 2 / Drupal 8

Our test project needs a basic setup, with the following Commerce 2.x modules turned on:

  • Commerce Checkout
  • Commerce Order
  • Commerce Payment
  • Commerce Price
  • Commerce Product
  • Commerce Store
  • Commerce Cart - not really required, but helpful to easily jump through the UI
  • Commerce Mollie

Configure the basics - set up your default store, product type, etc. At the end, your checkout flow will contain the following panes:

Testing Drupal 8 - Commerce 2 Checkout UI example screenshot

Set up a Mollie payment account

Log in to the Mollie dashboard and jump to the Website profiles page, where you can create a profile or edit an existing one. Under contact information, provide the URL that is generated by your running ngrok process. Mollie will use it to send you all the webhooks:

Testing Drupal Commerce 2 and Mollie payments - preview of Mollie dashboard profiles pane

Next, expand the Live and Test API keys pane to find the credentials that you need to configure the payment method in Commerce. Install the Mollie Payment module and navigate to the Payment UI (/admin/commerce/config/payment-gateways). Next, fill out the new payment method form with all the required details:

Testing Drupal 8 - Commerce 2 Payment UI example screenshot

Test it...

That’s it! You can start testing your payments: add any product to your cart and run through the checkout. After reviewing your order, you will automatically get redirected to the payment page; select any method and submit your transaction. For payment providers where you can define a full return URL and not only the domain - which is not the case with Mollie - you can add XDEBUG_SESSION_START={YOUR-IDEKEY} and start a debug session, as if you were working on a regular local environment.

Testing Drupal Commerce 2 and Mollie payments - preview of Mollie payment selection screen

With ngrok up and running and your Mollie profile properly configured, you will get back to your local website. In the following few minutes, your local project should receive a bunch of webhooks. While you wait for them to come in, you can preview your transactions in the Dashboard.

Key takeaways

Ngrok is definitely a great time-saver. It provides a lot of useful features and helps to resolve a variety of problems that you’ll often encounter during local development. In this article, I just demonstrated one example of its usage. It’s good to keep in mind that ngrok allows you to do much more:

  • Demoing - with ngrok running on your machine, you can share your local project with your teammates and clients
  • Insecure/mixed content check - with ngrok you can access your project via https:// which allows you to check insecure content warnings (no SSL needed!)   
  • Analysis - using the dashboard that is located at http://localhost:4040, you can inspect all requests/responses
  • Secure connection - you can secure your tunnel with a password  
  • Receiving webhooks - finally, you’re able to receive all webhooks and respond to them
  • Developing headless Drupal applications - while working locally you can expose your endpoints to your teammates ( check our article about Drupal 8 and React )

Drupal 8 is a really powerful, API-ready framework that works very well with lots of external services. Combined with never-ending technological progress, this flexibility forces us to leave our self-created comfort zone and the “I only use one tool” mentality behind. There are plenty of external tools, APIs and services to learn about and use. This provides a lot of flexibility, but on the other hand it also requires some adaptation, comes with a learning curve and demands focus at every single stage of product development.

As developers - who don’t like mistakes, especially when payment integrations and real money are involved - we can’t afford to miss a beat here. Checkout funnels are leads that convert into real sales. They provide crucial business value to our clients and have to be extremely reliable; every payment needs to come through successfully. Testing your payment services through ngrok will allow you to stay in close control of your project, reduce stress and track down any issues before you spin up your application. It doesn’t take much time to set up, but the payoff is tremendous.

As always, you can discover more Drupal secrets on our blog. Automatic quarterly updates? Subscribe to our newsletter!

Nov 09 2017
Nov 09

Be aware: this is a long read with extensive value. Only read on if you are ready to uncover our Dropsolid team's exciting dev tool and platform secrets!

Update - April 2018: Our platform has entered the next development phase on its roadmap, and we are proud to announce that James and Jenny have now been successfully merged together onto the Dropsolid platform. The technical architecture, as explained below, is still highly relevant, but we no longer have to distinguish between the two core elements from a user perspective.

James & Jenny might sound more like a comedy double act or the protagonists of a long-forgotten tale, but they are in fact very much alive and kicking. They are the names we gave to the platforms that we developed in-house to spin up environments faster and get work done more efficiently. How? Read on!

In practice

Whenever we want to spin up a new server, start a new project or even create a new testing environment, we still rely on our infrastructure team. A while ago we managed to automate our build pipeline with some smart configuration of Jenkins, an open source piece of software. Combined with a permission system, we are already able to let technical clients or consultants participate in the development process of a site by triggering a build of an environment. We decided to call this home-coded piece of software James, our in-house Drupal Cloud Butler. However, this UI was very cluttered and it was easy to break the chain. Maintenance-wise, it wasn’t the friendliest system either. James 0.1 was very helpful, but needed polishing.

Behind the scenes we started building a proper platform that was designed to supersede this existing system and take over the creation of new servers, projects and environments by adding a layer on top of this - a layer that could talk to Jenkins and would be able to execute Ansible playbooks through a managed system via RabbitMQ. You could see this as James 0.2. This version of James only has one account and isn’t built with a great many permissions in mind. Its purpose is very simple: get stuff done. This means we still can’t let clients or internal staff create new environments on James directly or set up new projects. But we’d really like to.

This is why we’re currently also investing heavily in the further development of Jenny, the site-spinning machine. Jenny’s aim is to be a user-friendly layer on top of James and it consists of two parts: a loosely decoupled Angular application consuming a Drupal 8 backend exposed through a REST API, which in turn talks to James through its REST API. Because Jenny makes sure only calls that are allowed go through to James, James can stay focused on functionality without having to add a ton of logic to make sure the request is valid. If the person who wants that new environment isn’t allowed to request one, Jenny won’t ask James to set it up in the first place.

How it works

 

A Jenny user will be able to create a new organization, and within that organization create new projects or clone existing ones. These projects can be housed on our servers or on external hosting (with or without VPN, Firewalls or anything else that’s required). They’ll be able to create new environments, archive entire projects or just a single environment, build, back up, restore, sync across environments, log in to an environment’s site, etc. It will even contain information about the health of the servers and also provide analytics about the sites themselves.

Now, because single-person organisations are rather non-existent, that user will be able to add other users to their organization and give them different permissions based on their actual role within the company. A marketeer doesn’t need to know the health of a feature-testing environment, and a developer has little use for analytics about the live environment.

The goal of this permission system is to provide the client enough options that they can restrict a developer from archiving the live environment but still allow them to create a new testing environment and get all the information and access needed for that environment. On a side note: these aren’t standard Drupal permissions, because they apply to a member within an organization - a single user can be part of many organizations and have different permissions in each one.

End-to-end

But all these layers have to be able to talk to each other before any of that can happen. JennyA(ngular) has to talk to JennyB(ackend), and JennyB then has to make sure the request is valid and talk to James. Whatever information James returns has to be checked by JennyB, stored in the database if needed, and then transformed into a message that JennyA can do something with.

To make sure we can actually pull this off, we created the following test case:

How do we trigger a build of an environment in Jenkins from JennyA, and how do we show the build log from Jenkins in JennyA?

JennyA: build the page, get project and environment info from JennyB, create a button and send a request to the API. How this process works exactly will be explained in a different post.

JennyB

For this REST resource we need two entities: Project and Environment.
We create some new permissions (defined as options in an OrgRole entity) for our Environment entity type:

  • Create environment
  • Edit environment
  • Delete environment
  • Archive environment
  • View environment
  • View archived environment
  • Build environment

Next to this, we build a custom EntityAccessControlHandler that checks these custom permissions. An AccessControlHandler must have two methods: checkAccess() and checkCreateAccess(). In both we want to make sure Drupal’s normal permissions (which for this entity we reduce to simply ‘administer project environment entities’) still rule supreme, so superadmins can debug everything. This is why both access checks start with a normal, bog-standard $account->hasPermission() check.

if ($account->hasPermission('administer project environment entities')) {
 return AccessResult::allowed();
}

But then we have to add some extra logic to make sure the user is allowed to do whatever it is they’re attempting to do. For that we grab that user’s currently active Membership. A Membership is a simple entity that combines a user, an organization, and an OrgRole entity which says what permissions the user has within that organization. For non-Create access we first check if this user is even a part of the same organization as the entity they’re trying to save.

// Get the organization for this project environment
$organization = $entity->getProject()->getOrganization();
// Check that the active membership and the attached organization match
$accessResult = Membership::checkIfAccountIsPartOfCorrectOrganization($organization, $account);
if ($accessResult->isForbidden()) {
 return $accessResult;
}

UPDATE: it is important to add all cacheability metadata that you need to your AccessResults. If, like in our case, the result varies per user, their active membership and that membership's roles, we have to add those as dependencies. Sometimes the result also depends on the environment, project and organization. When writing access checks, just remember to take a step back and think of any other entities or general contexts that influence the result of your access check. For example:

$result = AccessResult::allowedIf($condition)
  ->addCacheableDependency($user)
  ->addCacheableDependency($activeMembership);
foreach ($activeMembership->getRoles() as $role) {
  $result->addCacheableDependency($role);
}

For brevity’s sake, I won’t explain how exactly checkIfAccountIsPartOfCorrectOrganization does its checks. But it returns an AccessResultInterface object and does exactly what it says on the tin. It also includes a reason for forbidding access, so we can more easily debug problems. You can just add a string to the creation of an AccessResult or use $accessResult->setReason() and you can then grab it using $accessResult->getReason(). Take note: only forbidden and neutral implement that method. Make sure the result implements the AccessResultReasonInterface before calling either method.

if ($accessResult instanceof AccessResultReasonInterface) {
 $accessResult->getReason();
}

We use this extensively with our unit testing, so we know exactly why something fails.
Assuming our test passes, we can finally check if this user has the correct permissions.

$entityOrganizationMembership = User::load($account->id())->getActiveMembership();

switch ($operation) {
 case 'view':
   if (!$entity->isActive()) {
     return $this->allowedIf($entityOrganizationMembership->hasPermission('view archived project environment'), 'member does not have "view archived project environment" permission');
   }
   return $this->allowedIf($entityOrganizationMembership->hasPermission('view project environment'), 'member does not have "view project environment" permission');
 case 'update':
 case 'delete':
 case 'archive':
 case 'build':
   return $this->allowedIf($entityOrganizationMembership->hasPermission($operation . ' project environment'), 'member does not have "' . $operation . ' project environment" permission');
}

// Unknown operation, no opinion.
return AccessResult::neutral('No operation matches found for operation: ' . $operation);

As you might have noticed, normally when you load a User you don’t get a getActiveMembership() method. But we extended the base Drupal User class and added it there. We also set that new class as the default class for the User entity, which is actually very easy:

function hook_entity_type_build(&$entity_types) {
 if (isset($entity_types['user'])) {
   $entity_types['user']->setClass('Drupal\my_module\Entity\User');
 }
}

Now loading a user returns an instance of our own class.

For createAccess() things get trickier, because at that point the entity doesn’t exist yet. This makes it impossible to check if it’s part of the correct organization (or in this case, the correct project, which is in turn part of an organization). So here we’ll also have to implement a field-level Constraint on the related project field. This article explains how to create a field-level Constraint.

In this Constraint we can do our Membership::checkIfAccountIsPartOfCorrectOrganization check and be sure nobody will be able to save an environment to a project for an organization they are not a part of, regardless of whether they are creating a new one or updating an existing one (somehow having bypassed our access check). To make doubly sure, we also set the $validationRequired property on our Environment class to TRUE. This way entities will always demand to be validated first. If they are not, or they have errors, an exception will be thrown.
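
As a rough sketch of what such a field-level Constraint could look like - the class names and plugin ID here are hypothetical, and the validator simply reuses the membership check from above:

namespace Drupal\my_module\Plugin\Validation\Constraint;

use Symfony\Component\Validator\Constraint;

/**
 * Checks that the referenced project belongs to an organization the current
 * user is a member of.
 *
 * @Constraint(
 *   id = "ProjectOrganizationMatch",
 *   label = @Translation("Project organization match", context = "Validation")
 * )
 */
class ProjectOrganizationMatchConstraint extends Constraint {

  public $message = 'You are not a member of the organization this project belongs to.';

}

The validator lives in a separate class file next to it:

namespace Drupal\my_module\Plugin\Validation\Constraint;

use Drupal\my_module\Entity\Membership;
use Symfony\Component\Validator\Constraint;
use Symfony\Component\Validator\ConstraintValidator;

class ProjectOrganizationMatchConstraintValidator extends ConstraintValidator {

  /**
   * {@inheritdoc}
   */
  public function validate($value, Constraint $constraint) {
    // $value is the entity reference field item list that points to the project.
    if ($value->isEmpty()) {
      return;
    }
    $organization = $value->entity->getOrganization();
    $accessResult = Membership::checkIfAccountIsPartOfCorrectOrganization($organization, \Drupal::currentUser());
    if ($accessResult->isForbidden()) {
      $this->context->addViolation($constraint->message);
    }
  }

}

The constraint would then be attached to the project reference field, for example with $fields['project']->addConstraint('ProjectOrganizationMatch') in the entity's field definitions.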

Now we can finally build our rest resource. Since a Jenkins build doesn’t exist as a custom entity within JennyB (yet), we create a custom REST resource. We use Drupal console for this and set the canonical path to “/api/project_environment/{project_environment}/build/{id}” and the “create” path to “/api/project_environment/{project_environment}/build”. We then create another resource and set that one’s canonical to “/api/project_environment/{project_environment}/build”, the same as our first resource’s “create” path. This way, when you POST to that path you trigger a new build and when you GET you receive a list of all builds for that environment. We have to split this off into two resources, because each resource can only use each method once.
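
To give an idea of what those two plugin annotations could look like - the plugin IDs and labels below are made up for illustration, the paths are the ones mentioned above:

/**
 * Triggers a build (POST) and exposes a single build (GET).
 *
 * @RestResource(
 *   id = "project_environment_build_resource",
 *   label = @Translation("Project environment build resource"),
 *   uri_paths = {
 *     "canonical" = "/api/project_environment/{project_environment}/build/{id}",
 *     "https://www.drupal.org/link-relations/create" = "/api/project_environment/{project_environment}/build"
 *   }
 * )
 */

/**
 * Lists all builds for an environment (GET) on the first resource's "create" path.
 *
 * @RestResource(
 *   id = "project_environment_build_list_resource",
 *   label = @Translation("Project environment build list resource"),
 *   uri_paths = {
 *     "canonical" = "/api/project_environment/{project_environment}/build"
 *   }
 * )
 */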


We generate these resources using Drupal console. But before we can begin with our logic proper, we have to make sure the ProjectEnvironment entity gets automatically loaded. For this we need to extend the routes method from the parent class.

public function routes() {
 $collection = parent::routes();
 // add our paramconverter to all routes in the collection
 // if we could only add options to a few routes, we would have
 // to loop over $collection->all() and add them to specific ones.
 // Internally, that is exactly what the addOptions method does anyway
 $options['parameters']['project_environment'] = [
   'type' => 'entity:project_environment',
   'converter' => 'paramconverter.entity'
 ];
 $collection->addOptions($options);
 return $collection;
}

In the routes method you can add or remove options and requirements to your heart’s content. Whatever you can normally do in a routes.yml file, you can also do here. We've explained this in more detail in this blog post.

Let’s take a closer look at our create path. First we’ll need to make sure the user is allowed to build. Luckily, thanks to our custom access handler, this is very easy.

// check if user can build
$entity_access = $projectEnvironment->access('build', NULL, TRUE);
if (!$entity_access->isAllowed()) {
 // if it’s not allowed, we know it’s a forbidden or neutral response which implements the Reason interface.
 throw new AccessDeniedHttpException($entity_access->getReason());
}

Now we can ask James to trigger the build.

// Talk to James
$data['key'] = self::VALIDATION_KEY;
$url = self::API_URL . '/project/' . $projectEnvironment->getProject()
   ->getRemoteProjectID() . '/environment/' . $projectEnvironment->getRemoteEnvironmentID() . '/build';
$response = $this->httpClient->request('POST', $url, array('json' => $data));
$responseData = json_decode($response->getBody()->getContents(), TRUE);

For this test we use a simple key that James uses for authentication and build the URL in our REST resource. Eventually this part will be moved to a library and the code might look something like this:

$remoteProjectID = $projectEnvironment->getProject()->getRemoteProjectID();
$remoteEnvironmentID = $projectEnvironment->getRemoteEnvironmentID();
$response = $this->jamesConnection->triggerNewBuild($remoteProjectID, $remoteEnvironmentID, $data);
$responseData = json_decode($response->getBody()->getContents(), TRUE);

We check the data we get back and if everything has gone well, we can update our local ProjectEnvironment entity with the new currently deployed branch.

if ($response->getStatusCode() == 200 && $data['branch'] !== $projectEnvironment->getCurrentlyDeployedBranch()) {
 // Everything went fine, so also update the $projectEnvironment to reflect what
 // the currently deployed branch is
 $projectEnvironment->setCurrentlyDeployedBranch($data['branch']);

 // validate the entity
 $violations = $projectEnvironment->validate();
 foreach ($violations as $violation) {
   $errors[] = $violation->getMessage();
 }
 if (isset($errors)) {
   throw new BadRequestHttpException("Entity save validation errors: " . implode("\n", $errors));
 }

 // save it
 $projectEnvironment->save();
}

Running validate is necessary, because we set the $validationRequired property to TRUE for our entity type. If something goes wrong, including our custom Constraints, we throw a Bad Request exception and output the validation errors.
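
For reference, forcing that validation is just a matter of overriding one property on the entity class - a minimal sketch, assuming a ProjectEnvironment content entity (the class and module names are hypothetical):

namespace Drupal\my_module\Entity;

use Drupal\Core\Entity\ContentEntityBase;

class ProjectEnvironment extends ContentEntityBase {

  /**
   * With this set to TRUE, save() throws an exception when validate() has not
   * been called first, so unvalidated entities can never slip through.
   */
  protected $validationRequired = TRUE;

  // ... field definitions, getters and setters.

}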

Then we simply return what James gave us.

return new ResourceResponse($responseData, $response->getStatusCode());

On James’ end, it’s mostly the same but instead of checking custom access handlers, we (for now) just validate the key. And James in turn calls Jenkins’ API. This will also change, and James will hand off the build trigger to RabbitMQ. But for the purpose of this test, we communicate with Jenkins directly.

James then returns the ID of the newly triggered build to JennyB, who returns it to JennyA. JennyA then uses that ID to call JennyB’s canonical Build route with the given ID until success or failure has occurred.

Curious to read more interesting Drupal-related tidbits? Check out the rest of our blog. Or simply stay up to date every three months and subscribe to our newsletter!

Oct 17 2017
Oct 17

When going live with a big project, it is all about reassuring the client that the project will be able to handle all those excited visitors. To achieve that state of zen, it is paramount that you do a load test. The benefits of load tests go beyond peace of mind, however. For example, it enables you to spot issues that only happen during high load or lets you spot bottlenecks in the infrastructure setup. The added bonus is that you can bask in the glory of your high-performance code - on the condition the test doesn’t fail, of course.

Need help with your load and performance testing?
Contact us 

When doing a load test it is important to do the following steps:

  • Analyse existing data
  • Prepare tests
  • Set up tools
  • Run the tests
  • Analyse the results

Analyse existing data

If you are in luck, you will already have historic data available from Google Analytics. If this isn’t the case, you’ll have to get in touch with your client and ask a few to-the-point questions to help you estimate all the important metrics that I’ll be covering in this post.

A couple of tips I can give if you lack the historic data:

  • Ask if the client has a mailing list (digital or old-school) and how many people are on it
  • If you have made comparable sites in the past, look at their Google Analytics data
  • Ask the client how they are going to announce their new website
  • When you are working on an estimate, it is always better to add an extra 15% to it. Better safe than sorry!

The first thing you need to do is set a reference frame. Pick a date range that has low activity as well as the highest activity you can find. Then start putting that data into a spreadsheet, as pictured below:

An example spreadsheet for load testing. You can download an example copy of the file here.

The most important metrics we are going to calculate are:

  • Peak concurrent users (hourly sessions x average session duration / 3600) - see the quick worked example below the next list
  • Peak page views per second

The values you need to find or estimate are:

  • Peak daily page views
  • Peak hourly page views
  • Total page views for the period
  • Peak hourly sessions
  • Total amount of sessions
  • Average session duration in seconds
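
To make the peak concurrent users formula concrete, here is a quick back-of-the-envelope calculation in PHP (all figures are made up for illustration):

// Example figures - replace them with the values from your own analysis.
$peak_hourly_sessions = 1800;   // sessions during the busiest hour
$avg_session_duration = 180;    // average session duration in seconds
$peak_hourly_page_views = 9000; // page views during the busiest hour

// Peak concurrent users = hourly sessions x average session duration / 3600.
$peak_concurrent_users = $peak_hourly_sessions * $avg_session_duration / 3600; // 90

// Peak page views per second.
$peak_page_views_per_second = $peak_hourly_page_views / 3600; // 2.5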

As you can see, we mainly focus on the peak activity, because you test with the worst-case scenario in mind - which is, funnily enough, usually the best-case scenario for your client.

Before we start preparing our test, it is also handy to check which pages receive the most traffic. This benefits the validity of your test scenario.

Prepare the tests

For our tests we are going to start out with Apache JMeter, which you can grab here.

With JMeter you can test many different application/server/protocol types, but we’re going to use it to make a whole lot of HTTP requests.

Make sure you have the required Java runtime installed and boot up the ApacheJMeter.jar file.

Adding and configuring a Thread Group

Start by adding a Thread Group to your test plan by right clicking your Test plan and selecting Add > Threads (Users) > Thread Group

Thread group

Eventually you will need to fill in the number of (concurrent) users and ramp-up period based on your analysis, but for now keep it low for debugging your test.

Adding and configuring User-Defined Variables

Then right click the thread group to add User Defined Variables (Add > Config Element > User Defined Variables).

Add two variables named url and protocol and assign them a value.

User-defined variables - examples

Using these user-defined variables makes it easy to choose another environment to test on. It avoids the painstaking and error-prone work of finding all references and changing them manually.

You can use these variables in input fields in your test by doing this: ${url} or ${protocol}

Adding and configuring HTTP config elements

 Next up, you need to add the following HTTP config elements to your thread group:

  • HTTP Request Defaults
  • HTTP Header Manager
  • HTTP Cookie Manager

On the first one, you use your variables to fill in the protocol and the server name.

On the second one, you can set default headers for each one of your requests. See the screenshot below for what I’ve put in default.

HTTP config elements

For the third one, you only select cookie policy: standard.

A simple page request sampler

Right-click your test again and add the HTTP request sampler (Add > Sampler > HTTP Request).

Here we are going to call the home page. The only things you need to set here are:

  • Method: GET
  • Path: /

We don’t fill in the protocol or server name because this is already covered by our HTTP Request Defaults.

Posting the contact form

In this one we are going to submit the contact form (which is located at www.example.com/contact), so add another HTTP Request like we did before. Now only fill in the following values:

  • Method: POST
  • Path: /contact
  • Follow redirects: True
  • Use KeepAlive: True

In order for Drupal to accept the submit, we need to add some parameters to our post, like this:

Contact form parameters

The important ones here are form_build_id and form_id. You can manually get the form id because it always stays the same. The form build ID can vary, so we need to extract this from the page. We’ll do this using the CSS/JQuery Extractor (right-click your HTTP Request sampler: Add > Post Processors > CSS/JQuery Extractor)

Configure it like the screenshot below:

JQuery extractor example

It will now get that form_build_id from the page and put it into a variable the sampler can use.

Posting some Ajax on the form

Imagine our contact form has some Ajax functionality and we also want to test this. The way we go about it is identical to posting the regular form like we did before. The only difference is the post parameters, the path and an extra HTTP Header Manager.

You should set the path in your sampler to: /system/ajax

Then right click your sampler to add your new HTTP Header Manager (Add > Config Element > HTTP Header Manager). Configure it like shown in the screenshot:

adding Ajax - example
 
Saving the results of your test

Now that we’ve configured samplers, we need to add some listeners. You can add these listeners anywhere, but in our example we’ve added them to the test as a whole.

We’ll add three listeners:

  • View Results in Table:
    • Show every request in a table format
    • Handy for getting some metrics like latency and connect time
  • Simple Data Writer:
    • Writes test data to a file
    • Handy for debugging when using Blazemeter (check out this link)
    • Just load the file into the View Results Tree
  • View Results Tree:
    • It shows you the actual response and request.
    • Uses a lot of resources (so only good for debugging)

There is a lot more you can do with JMeter. You can read all about it here.


Test-run the test

Now that we’ve configured our test, it is time to try it out. Make sure not to put too many concurrent users in there just yet. Run the test by pressing the green ‘Play’ icon.

Test run

If you get errors, debug them using the feedback you got from your listeners.

As this wise man once said: "Improvise. Adapt. Overcome."

After you’ve validated your test, it’s always handy to turn up the concurrent users until your local site breaks. It’ll give you a quick idea of where a possible bottleneck could be.

Just a small warning: doing that load test on your local machine (running the test and the webserver) will take up a lot of resources and can give you skewed results.

You can download an example here.

Set up tools

Load testing with Blazemeter

When you have a project that will have a lot of concurrent users, your computer is most likely not able to handle doing all those calls and that is why it is good to test from a distributed setup like Blazemeter does.

You can have multiple computers running the same test with only a part of the concurrent users or you can pay for a service like Blazemeter.

The downside of using multiple computers is that they still share the same corporate Wi-Fi or ethernet connection, which limits you to an unknown lowest common denominator and could skew your test. On top of that, you will have to aggregate all those results yourself, costing you precious time.

To us, the major benefits of Blazemeter are the following:

  • Simulate a massive amount of concurrent users with little hassle
  • Persistence of test results and comparison between tests
  • Executive report to deliver to a technically savvy client
  • Sandbox mode tests that don’t count against your monthly testing quota

Adding your JMeter test in Blazemeter is very easy and straightforward. Just click ‘Create Test’ in the menu and select JMeter Test.

Blazemeter screenshot

Upload the file and you can start configuring your test to reflect your test scenario from the analysis chapter. We suggest choosing to ‘Originate a load’ from a location that is closest to your target population.

Blazemeter - load test set-up screenshot

Before you run your test, it is important to have set up your monitoring of the environment you want to test.

Monitoring performance

At Dropsolid, we like to use New Relic to monitor performance of our environments but you could also use open source tools like Munin.

The most important factors in your choice of monitoring tool should be:

  • Persistence of monitoring data
  • Detail of monitoring data
  • Ease of use

If you are using New Relic, we recommend installing both APM and Server. The added value of having APM is that you can quickly get an overview of possible bottlenecks in PHP and MySQL.

Run the test

Now that everything is set up, it is important to have an environment that is a perfect copy of your production environment. That way you can easily optimize your environment without having to wait for a good moment to restart your server.

Run your test, sit back and relax.

Analyse the results

If everything has gone according to plan, you should now have reports from both Blazemeter and New Relic.

Blazemeter test reportBlazemeter report of a test of 854 concurrent usersNew relic monitoring during the same testNew Relic monitoring during the same test

If your server was able to handle the peak amount of users, then your job is done and you can inform the client that they can rest assured that it won’t go down.

If your server couldn’t handle it, it is time to compare the results from Blazemeter and New Relic to find out where your bottleneck is.

Common issues are the following:

  • Incorrect memory allocation between parts of the stack
  • Misconfiguration of your stack. For example, MySQL has multiple example configuration files for different scenarios
  • Not using extra performance-enhancing services like Varnish, Memcache, Redis, ...
  • Horrible code

If the issue is horrible code, then use tools like xhprof or blackfire.io to profile your code.

Need expert help with your performance tests? Just get in touch!

Contact us for performance testing 


Final note

As Colin Powell once said: "There are no secrets to success. It is the result of preparation, hard work and learning from failure." That is exactly what we did here: we prepared our test thoroughly, we tested our script multiple times and adapted when it failed.

Jul 06 2017
Jul 06

06 Jul

Nick Veenhof

A month ago I received the honour of presenting at DrupalJam 2017. What a wonderful event! I had been invited to talk about deploying Drupal 8 onto Kubernetes, which can be found as a hosted service in Google Cloud.


Our move to Google

Recently, we made the decision at Dropsolid to move from regular virtual machine instances at Gandi towards instances and services in Google Cloud, as we believe the capabilities of such a cloud provider offer unprecedented possibilities. Google Cloud not only offers affordable virtual machines (instances) but also an affordable and competitive hosted MySQL offering. But that’s not all... Since we value our R&D environment and are aiming for greater and bigger goals, it is in our interest that Google is publishing new AI and data-analysis APIs at a pace we don’t see anywhere else.


In practice

So... back to the technicalities. I wanted to run an experiment to see how I could run Drupal on an infrastructure that did not need any humans behind the wheel, nor any maintenance. I found this in a combination of three components, covered in the video and slides below.

An overview of Kubernetes and the setup can be seen in the following video:

[embedded content]

One component that I found to be missing was a shared filesystem between the two ‘Pods’ (containers). Drupal relies on user files and images, and these should be stored somewhere. We do not want to alter the behaviour of Drupal or get into the application itself, as that introduces risk. Not all the websites that we would like to host are modifiable.

  • We could map the folder to an AWS S3 bucket or Google Cloud Storage bucket, but that would be too slow for our needs. What we actually wanted was an equivalent of AWS EFS, but unfortunately Google Cloud did not have this available.
     
  • We can work our way around it by setting up an NFS server or Gluster server in Kubernetes, but that drives us away from our initial goal of less maintenance, so we can focus on building awesome experiences in the Drupal application.

If you are interested in how I set up the NFS, the slides go into deep detail on how to set up this NFS cluster. The code is also available at https://github.com/nickveenhof/drupal-docker-with-volume

I recorded a video of how this deployment works. Caution: I did speed it up quite a bit.

[embedded content]


Key findings

Now, what is the key take-away from all this? That I moved the particular website back to regular hosting, e.g. a shared space with a human behind the wheel here at Dropsolid. The reason was that for a single site, the costs outweigh the benefits, and even though it is claimed to be fault-tolerant, I had numerous occasions where my pod did not want to recover, since the ‘failed’ one refused to be deleted. This ate up precious CPU - on a server that barely had enough CPU to begin with. This could be solved by throwing more money at it, but that was not the intent.

I also discovered that constraining a pod to a fixed amount of CPU is not very useful when sharing a single server between multiple Drupal sites. Websites can have variable load, and for small to medium sites with little traffic it is hard to justify the cost of pre-allocating those resources. I am curious to explore and test Vertical Pod Autoscaling once it is finished, as this could certainly help applications with burstable workloads.

Having said that, I did learn a lot about what the future could hold. Going towards a system like this gets us really close to the 12-factor app ideology and I am completely in favour of a future like that.

Comments, questions? I'm curious to find out your take on this. Let me know in the comments box below or reach out directly on Twitter via @Nick_vh

Make sure to check out the slides from this presentation here: 
https://www.slideshare.net/nickvh/drupaljam-2017-deploying-drupal-8-onto-hosted-kubernetes-in-google-cloud
 

Subscribe to our newsletter

Jun 21 2017
Jun 21

One day you might wake up with the next big idea that will shake the world in the most ungentle way. You decide to build an app, because you’ll have full access to all features of the device that you want your solution to work on. But then it dawns on you: you will actually need to build multiple apps in completely different languages while finding a way for them to serve the same content...

Then you start to realise that you won’t be able to step into the shoes of the greats, because web technology is holding you back. Fortunately, Drupal 8 and React Native are here to save your day - and your dream!

In this blog post you'll read how you can leverage Drupal 8 to serve as the back-end for your React Native app.

Update (03-10): After DrupalCon Vienna, Dries Buytaert posted his thoughts on the further adoption of React and Drupal. You can read his blog on his personal website.

 

First, a quick definition of what these technologies are:

  • Drupal is an open source content management system based on PHP.
  • React Native is a framework to build native apps using JavaScript and React.

If you want to read more about Drupal 8 or React Native, you're invited to check the sources at the bottom of this article.
 

Why React Native?

There are a myriad of front-end technologies available to you these days. The most popular ones are Angular and React. Both technologies allow you to build apps, but there is a big difference in how the apps will be built.

The advantage of employing React Native is that it lets you build an app using JavaScript, while converting the JavaScript into native code. In contrast, Angular or Ionic allow you to create a hybrid app, which basically is a website that gets embedded in a web view - although even then you’re still able to access the native features of a device.

In this case, we prefer React Native, because we want to build iOS and Android applications that run natively.
 

Headless Drupal

One of the big buzzwords that's been doing the rounds in the Drupal community lately is 'Headless'. A headless Drupal is actually a Drupal application where the front-end is not served by Drupal, but by a different technology.

You still get the benefits of a top notch and extremely flexible content management system, but you also get the benefits of your chosen front-end technology.

In this example, you'll discover how to set up a native iOS and Android application that gets its data from a Drupal website. To access the information, users will have to log in to the app, which allows the app to serve content tailored to the preferences of the user. Crucial in the current individualized digital world.

 

Drupal 8 - React Native Android & iOS

So this already brings us to our first hurdle. Because we are using a native application, authenticating users through cookies or sessions is not possible. So we are going to show you how to prepare your React Native application and your Drupal site to accept authenticated requests.
 

The architecture

The architecture consists of a vanilla Drupal 8 version and a React Native project with Redux.

The implemented flow is as follows:

  1. A user gets the login screen presented on the app.
  2. The user fills in his credentials in the form
  3. The app posts the credentials to the endpoint in Drupal
  4. Drupal validates the credentials and logs the user in
  5. Drupal responds with a token based on the current user
  6. The app stores the token for future use
  7. The app now uses the token for all other requests the app makes to the Drupal REST API.
     

Creating an endpoint in Drupal

First we had to choose our authentication method. In this example, we opted to authenticate using a JWT or JSON web token, because there already is a great contributed module available for it on Drupal.org (https://www.drupal.org/project/jwt).

This module provides an authentication service that you can use with the REST module that is now in Drupal 8 core. This authentication service will read the token that is passed in the headers of the request and will determine the current user from it. All subsequent functionality in Drupal will then use that user to determine if it has permission to access the requested resources. This authentication service works for all subsequent requests, but not for the original request to get the JWT.

The original endpoint the JWT module provides already expects the user to be logged in before it can serve the token. You could use the readily available basic authentication service, but we preferred to build our own as an example.
 

Authentication with JSON post

Instead of passing along the username and password in the headers of the request like the basic authentication service expects, we will send the username and password in the body of our request formatted as JSON.

Our authentication class implements the AuthenticationProviderInterface and is announced in json_web_token.services.yml as follows:

services:
 authentication.json_web_token:
   class: Drupal\json_web_token\Authentication\Provider\JsonAuthenticationProvider
   arguments: ['@config.factory', '@user.auth', '@flood', '@entity.manager']
   tags:
     - { name: authentication_provider, provider_id: 'json_authentication_provider', priority: 100 }

The interface states that we have to implement two methods, applies and authenticate:

public function applies(Request $request) {
 
 $content = json_decode($request->getContent());
 
 return isset($content->username, $content->password) && !empty($content->username) && !empty($content->password);
}

Here we define when the authenticator should be applied. So our requirement is that the JSON that is posted contains a username and password. In all other cases this authenticator can be skipped. Every authenticator service you define will always be called by Drupal. Therefore, it is very important that you define your conditions for applying the authentication service.

public function authenticate(Request $request) {
 $flood_config = $this->configFactory->get('user.flood');
 $content = json_decode($request->getContent());
 
 $username = $content->username;
 $password = $content->password;
 // Flood protection: this is very similar to the user login form code.
 // @see \Drupal\user\Form\UserLoginForm::validateAuthentication()
 // Do not allow any login from the current user's IP if the limit has been
 // reached. Default is 50 failed attempts allowed in one hour. This is
 // independent of the per-user limit to catch attempts from one IP to log
 // in to many different user accounts.  We have a reasonably high limit
 // since there may be only one apparent IP for all users at an institution.
 if ($this->flood->isAllowed('json_authentication_provider.failed_login_ip', $flood_config->get('ip_limit'), $flood_config->get('ip_window'))) {
   $accounts = $this->entityManager->getStorage('user')
     ->loadByProperties(array('name' => $username, 'status' => 1));
   $account = reset($accounts);
   if ($account) {
     if ($flood_config->get('uid_only')) {
       // Register flood events based on the uid only, so they apply for any
       // IP address. This is the most secure option.
       $identifier = $account->id();
     }
     else {
       // The default identifier is a combination of uid and IP address. This
       // is less secure but more resistant to denial-of-service attacks that
       // could lock out all users with public user names.
       $identifier = $account->id() . '-' . $request->getClientIP();
     }
     // Don't allow login if the limit for this user has been reached.
     // Default is to allow 5 failed attempts every 6 hours.
     if ($this->flood->isAllowed('json_authentication_provider.failed_login_user', $flood_config->get('user_limit'), $flood_config->get('user_window'), $identifier)) {
       $uid = $this->userAuth->authenticate($username, $password);
       if ($uid) {
         $this->flood->clear('json_authentication_provider.failed_login_user', $identifier);
         return $this->entityManager->getStorage('user')->load($uid);
       }
       else {
         // Register a per-user failed login event.
         $this->flood->register('json_authentication_provider.failed_login_user', $flood_config->get('user_window'), $identifier);
       }
     }
   }
 }
 
 // Always register an IP-based failed login event.
 $this->flood->register('json_authentication_provider.failed_login_ip', $flood_config->get('ip_window'));
 return [];
}

Here we mostly reimplemented the authentication functionality of the basic authorization service, with the difference that we read the data from a JSON format. This code logs the user into the Drupal application. All the extra code is flood protection.

Getting the JWT token

To get the JWT token we leveraged the REST module, and created a new rest resource plugin. We could have used the endpoint the module already provides, but we prefer to create all our endpoints with a version in it. We defined the plugin with the following annotation:

/**
* Provides a resource to get a JWT token.
*
* @RestResource(
*   id = "token_rest_resource",
*   label = @Translation("Token rest resource"),
*   uri_paths = {
*     "canonical" = "/api/v1/token",
*     "https://www.drupal.org/link-relations/create" = "/api/v1/token"
*   }
* )
*/

The uri_paths are the most important part of this annotation. By setting both the canonical and the weird looking Drupal.org keys, we are able to set a fully custom path for our endpoint. That allows us to set the version of our API in the URI like this: /api/v1/token. This way we can easily roll out new versions of our API and clearly communicate about deprecating older versions.

Our class extends the ResourceBase class provided by the REST module. We only implemented a post method in our class, as we only want this endpoint to handle posts.

public function post() {
 
 if($this->currentUser->isAnonymous()){
   $data['message'] = $this->t("Login failed. If you don't have an account register. If you forgot your credentials please reset your password.");
 }else{
   $data['message'] = $this->t('Login succeeded');
   $data['token'] = $this->generateToken();
 }
 
 return new ResourceResponse($data);
}
 
/**
* Generates a new JWT.
*/
protected function generateToken() {
 $token = new JsonWebToken();
 $event = new JwtAuthIssuerEvent($token);
 $this->eventDispatcher->dispatch(JwtAuthIssuerEvents::GENERATE, $event);
 $jwt = $event->getToken();
 
 return $this->transcoder->encode($jwt, array());
}

The generateToken method is a custom method where we leverage the JWT module to get us a token that we can return. 
 
We do not return a JSON object directly. We return a response in the form of an array. This is a very handy feature of the REST module, because you can choose the formats of your endpoint using the interface in Drupal. So you could easily return any other supported format like xml, JSON or hal_json. For this example, we chose hal_json. 

Drupal has some built-in security measures for non-safe methods. The only safe methods are HEAD, GET, OPTIONS and TRACE. We are implementing a non-safe method, so we have to take into account the following things:

  • When the app does a POST it also needs to send an X-CSRF-Token in the header to avoid cross-site request forgery. This token can be retrieved from the /session/token endpoint.
  • In case of a POST we also need to set the Content-type request header to “application/hal+json” on top of the query parameter “_format=hal_json”.

Putting things together

The only thing left is to enable our endpoint through the interface that the REST module provides at /admin/config/services/rest.

Update: As Shaksi rightly mentioned, to get this overview you need to download and enable the Rest UI module (https://www.drupal.org/project/restui)

Drupal 8 & React Native - REST resources - Drupal blog

As you can see, we’ve configured our token endpoint with our custom json_authentication_provider service and it is available in hal_json and json formats.

Update: Shaksi was so kind as to recreate the code and host it on GitHub (https://github.com/shaksi/json_web_token). We haven't been able to test it yet, but if there are any issues, report them on GitHub and Shaksi will get in touch with us if he needs some help.

Calling the endpoint in our React Native application

The login component

Our login component contains two input fields and a button.

<Item rounded style={styles.inputGrp}>
   <Icon name="person"/>
   <Input
       placeholder="Username"
       onChangeText={username => this.setState({username})}
       placeholderTextColor="#FFF"
       style={styles.input}
   />
</Item>
 
<Item rounded style={styles.inputGrp}>
   <Icon name="unlock"/>
   <Input
       placeholder="Password"
       secureTextEntry
       placeholderTextColor="#FFF"
       onChangeText={password => this.setState({password})}
       style={styles.input}
   />
</Item>
 
<Button
   rounded primary block large
   style={styles.loginBtn}
   onPress={() => this.login({
       username: this.state.username,
       password: this.state.password
   })}
>
   <Text style={Platform.OS === 'android' ? {
       fontSize: 16,
       textAlign: 'center',
       top: -5
   } : {fontSize: 16, fontWeight: '900'}}>Get Started</Text>
</Button>

When we click the login button we trigger the login action that is defined in our bindActions function.

function bindActions(dispatch) {
   return {
       login: (username, password) => dispatch(login(username, password)),
   };
}

The login action is defined in our auth.js:

import type { Action } from './types';
import axios from 'react-native-axios';
 
export const LOGIN = 'LOGIN';
 
export function login(username, password):Action {
 
   var jwt = '';
  
   var endpoint = "https://example.com/api/v1/token?_format=hal_json";
  
   return {
       type: LOGIN,
       payload: axios({
           method: 'post',
           url: endpoint,
           data:  {
               username: username,
               password: password,
               jwt: jwt,
           },
           headers: {
               'Content-Type':'application/hal+json',
               'X-CSRF-Token':'V5GBdzli7IvPCuRjMqvlEC4CeSeXgufl4Jx3hngZYRw'
           }
       })
   }
}

In this example, we set the X-CSRF-token fixed to keep it simple. Normally you would get this first. We’ve also used the react-native-axios package to handle our post. This action will return a promise. If you use the promise and thunk middleware in your Redux Store you can set up your reducer in the following way.

import type { Action } from '../actions/types';
import { LOGIN_PENDING, LOGOUT} from '../actions/auth';
import { REHYDRATE } from 'redux-persist/constants';
 
export type State = {
   fetching: boolean,
   isLoggedIn: boolean,
   username:string,
   password:string,
   jwt: string,
   error: boolean,
}
 
const initialState = {
   fetching: false,
   username: '',
   password: '',
   error: null,
}
 
export default function (state:State = initialState, action:Action): State {
 
   switch (action.type) {
 
       case "LOGIN_PENDING":
           return {...state, fetching: true}
 
       case "LOGIN_REJECTED":
           return {...state, fetching: false, error: action.payload}
 
       case "LOGIN_FULFILLED":
 
           return {...state, fetching: false, isLoggedIn: true, jwt:action.payload.data.token}
 
       case "REHYDRATE":
           var incoming = action.payload.myReducer
           if (incoming) return {...state, ...incoming, specialKey: processSpecial(incoming.specialKey)}
           return state
 
       default:
           return state;
   }
}

The reducer will be able to act on the different action types of the promise:

  • LOGIN_PENDING: Allows you to change the state of your component so you could implement a loader while it is trying to get the token.
  • LOGIN_REJECTED: When the attempt fails you could give a notification why it failed.
  • LOGIN_FULFILLED: When the attempt succeeds you have the token and set the state to logged in.

So once we had implemented all of this, we had an iOS and Android app that actually used a Drupal 8 site as its main content store.

Following this example, you should be all set to deliver tailored content to your users on whichever platform they may be using.

The purpose of this article was to demonstrate how effective Drupal 8 can be as a source for your upcoming iOS or Android application.
 

Useful resources:

More articles by our Dropsolid Technical Leads, strategists and marketeers? Check them out here.
 

Subscribe to our newsletter

Jun 13 2017
Jun 13

In this blog post, our technical lead Kevin guides you through the best caching strategies for Drupal 8. Interested in how D8's flow has been improved and how to use Memcache for yourself in the best possible way? Read on!


Flow improvements with Drupal 8

The way data is cached has been overhauled and optimized in Drupal 8. Cached data is now aware of where it is used and when it can be invalidated, which resulted in two important cache bins responsible for holding the rendered output: cache_render and cache_dynamic_page_cache. In previous versions of Drupal, the page cache bin was responsible for the rendered output of a whole page.

Consequently, the chance of having to rebuild a whole page in Drupal 8 is far lower than in previous versions, because the cache render bin will contain some blocks already available for certain pages - for example a copyright block in your footer.
Nevertheless, having to rebuild the whole render cache from scratch on a high-traffic website can result in a lot of insert query statements for MySQL. This forms a potential performance bottleneck.
 

Why use Memcache?

Sometimes you need to rebuild the cache. Doing this on large sites with a lot of real-time visitors can lead to a lock timeout in MySQL, because the cache tables are locked by the cache rebuild function. This means that your database is unable to process the cache set queries in time, in the worst case resulting in downtime for your website.

Using Memcache allows you to offload cache bins directly into RAM, which makes cache sets faster, speeding up the cache along the way and giving MySQL more breathing space.
 

How to install Memcache?

Before you can connect to memcache, you need to be sure that you have a memcache server up and running. You can find a lot of tutorials on how to do this for your distribution, but if you use MAMP PRO 4 you can simply spin up the memcache server. By default, memcache will be running on port 11211.

When you have the memcache server specifications - host IP and port - you need to download and install the Memcache module, available here: https://www.drupal.org/project/memcache

This module is currently in alpha3 stage and ready to be used in production sites.

Once you have installed the module, it should automatically connect to memcache using the default settings. This means that the memcache server is running on localhost and listening on port 11211. If memcache is running on a different server or listening on another port, you need to modify the connection by changing the following line in your settings.php.

$settings['memcache']['servers'] = ['127.0.0.1:11211' => 'default'];


Configuring Memcache

Once you have installed memcache and have made the necessary changes to the settings.php file to connect to the memcache service, you need to configure Drupal so it uses the Memcache cache back end instead of the default Drupal cache back end. This can be done globally.

$settings['cache']['default'] = 'cache.backend.memcache';

However, doing so is not recommended because it cannot be guaranteed that all contrib modules only perform simple GET and SET queries on cache tables. In Drupal 7, for example, the form caching bin could not be offloaded to Memcache, because it can happen that the cache key gets overwritten with something else resulting in a cache miss for specific form cache entries.

Therefore it is recommended to always check whether the cache bin is only used to store cache entries and fetch them again later, without depending on the entry still being present in the cache.

Putting cache_render and cache_dynamic_page_cache into memcache is the safest and most beneficial configuration: the larger your site, the more queries those tables endure. Setting up those specific bins to use Memcache can be done with the following lines in settings.php.

$settings['cache']['bins']['render'] = 'cache.backend.memcache';
$settings['cache']['bins']['dynamic_page_cache'] = 'cache.backend.memcache';


How does it work?

To be able to test your setup and fine-tune Memcache, you should know how Memcache works. As explained before, we are telling Drupal to use the cache.backend.memcache service as cache back end. This service is defined by the Memcache module and, like any other cache back end, implements the CacheBackendInterface. This interface is used to define a cache back end and forces classes to implement the necessary cache get, set, delete, invalidate, etc. functions.

When the memcache service sets a cache entry, it stores this as a permanent item in Memcache, because validity is always checked during a cache get.

Invalidation of items is done by setting the timestamp to a point in the past. The entry will stay available in RAM, but when the service tries to load it, it will detect it as an invalid entry. This allows Drupal to recreate the entry, which will then overwrite the cache entry in Memcache.

Conclusion: when you clear all caches with Memcache installed, you do not remove all keys in Memcache but simply invalidate them by giving them an expiration time in the past.
 

Optimizing your Memcache setup

Simply using Memcache will not always mean that your site will be faster. Depending on the size of your website and the amount of traffic, you will need to allocate more RAM to Memcache.

How do you best determine this amount? A good starting point is to sum the sizes of all cache tables currently stored in MySQL and then check which of those tables are configured to go into Memcache.

Let me give an example: consider a 3GB cache_render table and a 1GB cache_dynamic_page_cache table, resulting in 4GB of data that would be offloaded to Memcache. A 4GB RAM allocation for Memcache would then be a good starting point.

But how can you check whether this setup is sufficient? There are a few simple rules to check whether you have assigned sufficient (or perhaps too much) RAM to Memcache; the sketch after this list shows how to read the relevant figures from Memcache itself.

  • If your evictions are increasing (meaning that Memcache is overwriting keys to make space) and your hit rate is below 90% and dropping, you should allocate more memory.
  • If your evictions are at 0 but the hit rate is still low, you should review your caching logic. You are probably flushing caches too often, or your cached data is not being reused, meaning that your cache contexts are too wide.
  • If your evictions are at 0, your hit rate is 90% or higher, and the number of bytes written to Memcache is lower than the allocated RAM, you can reduce the amount of RAM allocated to Memcache.
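You can read these figures straight from the running daemon. The sketch below uses PHP's Memcached extension directly (not the Drupal module) and the standard memcached statistics keys; the host and port are the defaults assumed earlier:

// Read server statistics and derive hit rate, evictions and memory usage.
$client = new Memcached();
$client->addServer('127.0.0.1', 11211);

foreach ($client->getStats() as $server => $stats) {
  $hits = (int) $stats['get_hits'];
  $misses = (int) $stats['get_misses'];
  $total = $hits + $misses;
  $hit_rate = $total ? round(100 * $hits / $total, 2) : 0;

  printf("%s: hit rate %s%%, evictions %d, used %d of %d bytes\n",
    $server, $hit_rate, $stats['evictions'], $stats['bytes'], $stats['limit_maxbytes']);
}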

It is very important that you never assign more RAM than is available. If your server needs to start swapping, performance will drop significantly.


Conclusion

If you are considering using memcache for Drupal, you need to think a few things through in advance:

  • Which cache bins will be offloaded into Memcache? Only offload cache bins that are used purely as a cache, with no code depending on an entry still being present.
  • Does the site have a lot of traffic and a lot of content? This will result in larger render cache tables.
  • How much RAM to allocate to Memcache, depending on the amount available on your server and the size of the cache bins you offload to Memcache.

Also keep in mind that the allocation of RAM for Memcache is not a fixed configuration. When your website grows, the cache size grows with it. This implies that the amount of necessary RAM will also increase.
 

We hope this blog post has been useful! Check our training page for more info about our Drupal training sessions for developers and webmasters.

 


Apr 26 2017
Apr 26

Content is always one of the most important parts of a website. After all, the internet was designed for information sharing. As a result, upgrading to a new CMS implies the migration of content in some form or another. Taking this into account in the early stages is crucial when considering and preparing a CMS upgrade.
The Drupal community is very aware of content migration as a key success factor. Therefore, Drupal’s latest version (D8) has the migrate module included in its core. This key functionality allows you to upgrade from an older Drupal version to Drupal 8, using a few simple configuration steps. Below, I will explain how this works and put forward a few alternatives for custom migrations. A Dutch translation of this article is available here.

Why should you migrate to Drupal 8?

Site speed is not only important for SEO; it also affects your visitors’ browsing time and exit rate. Drupal 8 comes with an improved caching system that makes Drupal fly - all while taking into account content modification. There are no more endless waits for caches to invalidate, thanks to the real-time cache invalidation using cache tags.
Another reason for migration is the Drupal community. Plenty of features are available to integrate into your own site for free, which in turn enables you to spend time and money on other things. The community also keeps an eye on continuous improvements to the existing code. Drupal 8 is a great example of this, with its foundations in the Symfony2 framework. Everything in D8 has been standardised in such a way that maintenance is a lot easier and less time-consuming.
Let there be no doubt that migrating to Drupal 8 is an excellent long-term move!

How exactly should I migrate to Drupal 8?

You can use the Drupal migrate module that is included in core to upgrade from an older Drupal version to Drupal 8. Make sure to install and enable required modules first.
An example: if your site uses special field types, those modules should also be installed in your new Drupal 8 website. When you’re done configuring your site, you just need to enable the following modules:

  • Migrate
  • Migrate Drupal
  • Migrate Drupal UI

This last module will direct you to a configuration page, where you can start the actual migration. Simply enter the database information from your existing Drupal site and let Drupal review the upgrade.

The review will give you a list of available upgrade paths, next to a list of modules that are currently missing. If you’re happy about the review, you can choose to start the upgrade. Drupal will start importing content, users and taxonomies into your Drupal 8 website. Be aware that a rollback mechanism through the UI of Drupal is not available at this time. Since the Drupal core migrate is built to support a certain number of cases, it is possible that your site is too complicated to import correctly with the Migrate Drupal module. Sometimes, writing a customised migration is a better approach.
 

How to write a customized migration?

In most cases, relying on the Migrate Drupal module will at some point mean reinstalling your Drupal website because some parts were imported in the wrong way. You can opt to play things safe and write the migration yourself.
Writing a migration in Drupal 8 is done with the Migrate Plus module. This module allows you to create a new Migration entity. Those entities are created in YAML.

# Migration configuration for News content.
id: news_node
label: News Content Type
migration_group: demo_news
source:
 plugin: news_node
destination:
 plugin: entity:node
process:
 type:
   plugin: default_value
   default_value: news
 langcode:
   plugin: default_value
   source: language_code
   default_value: nl
 title: post_title
 field_title: post_title
 path: path
 field_tags:
   plugin: migration
   migration:
     - news_terms
   source: tags_terms

migration_dependencies:
 required:
   - news_terms

Example of a Migration entity: migrate_plus.migration.news_node.yml

Each migration entity can belong to a Migration Group entity, defined in the YAML with the key 'migration_group'. A whole group of migrations can be imported or rolled back at once with Drush by installing the Migrate Tools module.

  • drush mi --group="demo_news": imports all migrations in the demo_news group
  • drush mr --group="demo_news": rolls back all migrations in the demo_news group

The source key of a migration points to a Drupal 8 plugin that tells the migration where the source information comes from. There are plenty of base source plugins available for Drupal 8.

  • SqlBase - in Drupal Core: lets you migrate from an SQL source.
  • URL - in Migrate Plus: lets you migrate from a URL which can return JSON, XML, SOAP.
  • CSV - in Migrate Source CSV: lets you migrate from a CSV source file.
  • Spreadsheet - in Migrate Spreadsheet: lets you migrate from a csv, xls or xlsx source file.

If these do not suffice, you can write your own source plugin that extends the SourcePluginBase class.

We extended the SqlBase source for our news_node source plugin.
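For context, the surrounding plugin class looks roughly like the sketch below. The namespace and the getIds() keys are illustrative assumptions; the query(), fields() and prepareRow() implementations are the ones shown in the following snippets.

namespace Drupal\mymodule\Plugin\migrate\source;

use Drupal\migrate\Plugin\migrate\source\SqlBase;

/**
 * Source plugin for news nodes, referenced as 'news_node' in the migration.
 *
 * @MigrateSource(
 *   id = "news_node"
 * )
 */
class NewsNode extends SqlBase {

  /**
   * {@inheritdoc}
   */
  public function getIds() {
    // The primary key of the source table, as selected in query().
    return ['ID' => ['type' => 'integer']];
  }

  // query(), fields() and prepareRow() are shown below.

}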

public function query() {
 $query = $this->select('post', 'p');
 $query->fields('p', [
   'ID',
   'post_title',
   'post_name',
   'post_date',
 ]);
 $query->condition('p.type', 'news');
 return $query;
}

Query function in news_node source.

The query function returns a Select object with the information needed during the migration. This object will be the source for our migration.
Next, we need to tell the migration which source fields are available for mapping. This is done with the fields method.

public function fields() {
 $fields = [
   'post_title' => $this->t('The Post Node title'),
   'post_date' => $this->t('The Post creation time'),

   // ...
 ];

 return $fields;
}

In Drupal 7 we used prepareRow to provide field information that couldn’t be selected with a single query. In Drupal 8, this function is also available. In our example we fetch a teaser image and then add the file ID and file alt to the migration source.

public function prepareRow(Row $row) {
  // The source ID of the row being processed, as selected in query().
  $content_id = $row->getSourceProperty('ID');

  // Find the related teaser attachment image.
  $file = $this->getTeaserImage($content_id);

  // Set new source properties file_id and file_alt.
  $row->setSourceProperty('file_id', $file['id']);
  $row->setSourceProperty('file_alt', $file['alt']);

  return parent::prepareRow($row);
}

Adding extra information to the migration in prepareRow.

When we go back to the YAML configuration of our migration entity, we see that there is also a destination key configured. In most cases this will be an entity:entity_type destination plugin. Migrate will then automatically create entities of the configured type; in our example, new nodes will be created. If needed, you can also create your own destination plugin, which performs extra actions during the import.

The process key in our configuration defines the field value mapping. It contains a mapping of keys and values where the key is the Drupal field name and the value is the source field name. In some cases, like 'type' or 'langcode', we use a default_value plugin, which allows us to set the field to a fixed value. In our example, we are creating new nodes of type news in Dutch.

In some cases, the source value comes from another migration. In our example the value of 'field_tags' comes from another migration; this is defined by using the 'migration' plugin and then specifying the migration(s) in which the value was migrated. Whenever such migration-dependent fields are present, an extra 'migration_dependencies' key is necessary. This is an array of migrations that need to run first.


I hope this post has given you some insight into migrating your website from D7 to D8! As always, you can reach out to me and the rest of the team via our website.
 


Apr 20 2017
Apr 20

Nick Veenhof

Advisory by the Drupal security team

Recently, the References module started receiving some attention (read here, here and here). The reason for this is that the Drupal security team posted an advisory to migrate away from the References module for Drupal 7 and move to the entity_reference module. At the time of writing (20 April), 121,091 sites are actively reporting to Drupal.org that they are using this module. That makes for a lot of unhappy developers.

Things kicked off after a security vulnerability was discovered in the References module. The security team tried to contact the existing maintainers of that module, but there was no response. The security team had no choice but to mark the module as abandoned and send out an advisory explaining that the details would be made public in a month and that everyone should upgrade, as there was no fix available.

Migrate efficiently

At Dropsolid, we noticed that for many of our older Drupal 7 installs we were still using this module extensively. Migrating all of the affected sites would have meant a very lengthy undertaking, so I was curious to find a way to spend less time and effort while still fixing the problem. We immediately contacted one of the people who reported the security issue and tried to get more information other than what was publicly available. That person stayed true to the rules and did not disclose any information about the issue.

We didn’t give up, but made an official request to the security team offering to help and requesting access to the security vulnerability issue. The Drupal security team reviewed the request and granted me access. In the Drupal Security issue queue there was some historical information about this vulnerability, some answers and a proposed patch. The patch had not been tested, but this is where Dropsolid chimed in. After extensively testing the patch on all the different scenarios on an actual site that was vulnerable, we marked the issue as Reviewed and Tested by the Community (RTBC) and stepped up to maintain the References module for future security issues.

It pays off to step in

I’d like to thank Niels Aers, one of my colleagues, as his involvement was critical in this journey and he is now the current maintainer of this module. He jumped straight in without hesitation. In the end, we spent less time fixing the actual issue compared to the potential effort for changing all our sites to use a different module. So remember: you can also make a similar impact to the Drupal community by stepping up when something like this happens. Do not freak out, but think how you can help your clients, company and career by fixing something for more than just you or your company.

Apr 14 2017
Apr 14

In this day and age, it’s very hard to imagine a world without online payments. They permeate every possible sector and purpose, ranging from banking apps to online ticket ordering and charity donations.

Drupal has kept pace with this evolution and is offering enterprise-quality solutions to tackle most online payment needs, most notably Drupal Commerce with secure payment integrations. Drupal Commerce allows developers to implement different gateways to PayPal, Stripe, iDeal, Mollie and Ingenico (previously known as Ogone).

In this blog post, I will explain the possibilities of the Drupal Payment module and describe an example of how to apply it together with Mollie, a rising star in the realm of payment service providers.

 

Drupal Payment module

Are you looking to make people pay for their membership when they register for an account? Then you will have to integrate an easily manageable payment system into your application.

In situations like these and more, Drupal’s Payment module can act as a bridge to a secure payment integration. You can implement different payment gateways that communicate directly with the Payment module. This means that all incoming payments from various payment service providers are stored in a centralised location.

The Payment module integrates well with Drupal Commerce and Ubercart, but you can even integrate the module into any kind of entity with both the Payment form field and the Payment reference field.

Do you think this might suit your need as an out-of-the-box solution for a simple integration with Drupal Webforms or a basic donation form with Drupal Payment integration? They are available for download on drupal.org.
 

Payment Service Providers

If you would like to receive online payments through your website, you'll have to implement an actual payment service provider. The most commonly used payment providers in the Benelux are Ingenico, PayPal and Mollie.


Mollie

Mollie has become very popular very quickly, because it charges a transaction-based fee instead of a monthly subscription. This means that you will not be charged if there are no transactions, which is perfect for projects that do not (yet) generate a lot of transactions.

To allow for easy integration, Mollie provides developers with a very good API. Drupal (and other) developers can access the available RESTful service or a PHP API library, which makes it possible to implement logic - for example to refund a customer through the API.
If your Drupal project does not require automatic refunding of customers, you can use the mollie_payment module, which uses Mollie’s PHP API library.
 

Example: enabling a payment method

To enable payments with Mollie, you have to define a payment method using the so-called MolliePaymentMethodController. The controller is defined in the Mollie Payment module and uses Mollie's PHP API library to process the requests.

You can add the Payment method through the module install file:

/**
* Add payment method (Mollie)
*/
function MYMODULE_update_7001(){
  $mollie = new PaymentMethod(array(
    'controller' => payment_method_controller_load('MolliePaymentMethodController'),
    'controller_data' => array('mollie_id' => 'test_AA11bb22CC33dd44EE55ff66GG77hh'),
    'name' => 'pay_with_mollie',
    'title_generic' => 'Pay with Mollie',
    'title_specific' => 'Pay with Mollie',
  ));
  entity_save('payment_method', $mollie);
}

Forms embedding a payment form

Start by defining a simple form, extendable with multiple form elements available in Drupal’s Form API.

/**
* Callback function to build a basic payment form.
*
* @param array $form
*   The form build array.
* @param array $form_state
*   The form state information.
*/
function MYMODULE_form($form, $form_state) {
 $form = array();

 // Add form actions.
 $form['actions'] = array(
   '#type' => 'actions',
 );
 $form['actions']['save'] = array(
   '#type' => 'submit',
   '#value' => t('Pay with Mollie'),
 );

 return $form;
}

This form is then capable of embedding a payment form provided by the Payment module. To do this, you should first define a Payment object. It provides all the payment methods that have to be integrated in the payment form. You can pass context and context data for reference and later use, the currency you are paying in, and the callback that has to be executed after a payment has been completed.

// Define a payment object.
$payment = new Payment();
$payment->context = 'donation';
$payment->context_data = array(
  'time' => time(),
  'type' => 'donation',
);
$payment->currency_code = 'EUR';
$payment->description = 'Basic payment form';
$payment->finish_callback = 'MYMODULE_finish_callback';

A single payment object can contain multiple line items, which is useful if you would like to implement this in a commerce environment. In this example, a single line item will define the total amount that has to be paid. Don't forget to define the price without taxes, because the Payment module will handle all tax calculations.

// Define a payment line item.
$line_item = new PaymentLineItem();
$line_item->amount = 100.00 / 1.21;
$line_item->name = t('EUR 100');
$line_item->tax_rate = 0.21;
$line_item->quantity = 1;

// Add the payment line item to the payment object.
$payment->setLineItem($line_item);

 

By assigning the payment object to the form, you can use the transferred information in a later stage - for instance during validation.
 

// Add the payment object to the form.
$form_state['payment'] = $payment;

You can use multiple payment methods with the payment module. In this example, Mollie is forced as the only payment option available. It is of course also possible to add multiple methods in the payment options and to allow people to pick their payment method of choice.

 

// Get available payment methods and limit this form to Mollie payment.
$payment_methods = $payment->availablePaymentMethods();
$payment_options = array();
foreach ($payment_methods as $payment_method) {
  if ($payment_method->enabled && $payment_method->name == 'pay_with_mollie') {
    $payment_options[] = $payment_method->pmid;
  }
}

To include the payment form into your custom form, you have to call the payment_form_embedded function. The function will use the payment object and the available payment options to build the required form elements and form actions. Then assign the payment elements and submit action to your custom form in order to enable the payment.

// Get the payment embed elements.
$payment_embed_form = payment_form_embedded($form_state, $payment, $payment_options);

// Add the embedded payment form element.
$form['payment'] = $payment_embed_form['elements'];

// Define the form submit callback.
$form['#submit'] = $payment_embed_form['submit'];

When defining the payment object, you also define a finish callback. This callback will be triggered after a successful payment from the Mollie payment service provider. To be certain, you should check whether the payment object contains a successful payment status and run any additional callbacks if needed.

/**
* Handle successful payment from Mollie.
*
* @param \Payment $payment
*   The returned payment object containing all relevant information.
*/
function MYMODULE_finish_callback(Payment $payment) {
  $payment_complete = FALSE;

  // Check if the payment status contains a successful state.
  foreach ($payment->statuses as $status) {
    if ($status->status == 'payment_status_success') {
      $payment_complete = TRUE;
      break;
    }
  }

  if ($payment_complete) {
    drupal_set_message('Your payment has been received.', 'success');
    // @TODO: Implement custom callbacks.
  }
}

Conclusion

As you noticed, it's not that hard to implement a payment workflow in your own form! 

One final tip: use form validation to check if all requirements are met before people are redirected to the payment service provider.
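A minimal sketch of such a validation callback, assuming a hypothetical required e-mail field on the form:

/**
 * Validation callback for the payment form (illustrative example).
 */
function MYMODULE_form_validate($form, &$form_state) {
  // Block the redirect to the payment provider if requirements are not met.
  if (empty($form_state['values']['email'])) {
    form_set_error('email', t('Please enter your e-mail address before paying.'));
  }
}

Attach it in the form builder with $form['#validate'][] = 'MYMODULE_form_validate'; so it runs before the payment submit handler.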

I hope this blog post has helped you to make payments in Drupal easier. Feel free to leave a comment below if you have any questions or if you would like to share some thoughts on different possible approaches.


Mar 30 2017
Mar 30

Did you miss last week's Lunch & Learn Event about open marketing in Ghent? Dropsolid CTO Nick Veenhof explains his quick-and-dirty proof of concept for integrating the Showpad environment with the Drupal CMS.
 

In the current economic and digital environment, companies are realising that integrated approaches are the future. The time that, for example, sales and marketing act as individual parts of a company is definitely long gone. Luckily, there are plenty of options available for companies to drive business results, such as open marketing platforms.

Recently we had the honor of presenting, together with Showpad & Survey Anyplace, a lunch and learn about leveraging these kinds of open marketing platforms to boost customer experience. Showpad and Drupal have many things in common, but how do you make sure you are not managing your assets in two separate places, for example in Drupal and in Showpad? As a company, you don't just want to use the right platforms, you want to use them in the most efficient way.

In preparation for the lunch and learn, we were asked to demonstrate how the technologies that Dropsolid and Showpad use (Drupal & Showpad API) integrate with one another to maximise efficiency.

The first step was consulting Showpad’s API, which is well documented and can be found when logging into their system.

Showpad's API for Drupal

Here's the short version of the demo:

[embedded content]

If you're interested in more details, you can watch the extended version on our YouTube channel as well.

One of the challenges we faced was that Showpad didn't have a fully featured SDK for PHP. However, there were proofs of concept available that suggested ways of working with the Showpad API. After some exploration I decided to adapt one and contribute it back to GitHub. The adaptation makes the existing, incompatible library compatible with Guzzle 6, so that it natively works with Drupal 8's composer dependencies and thus with the Guzzle version that is shipped with Drupal.

After the library was working as expected, I tried to make a quick and dirty implementation in Drupal. Just a warning: this Drupal code does not adhere to the code standards, nor do I recommend you to implement it this way. It is merely proof that integrating the two is not a work of months, but of mere hours. You can find the Drupal code I used at https://github.com/nickveenhof/showpad-api.

Did you miss our Lunch & Learn? Read the complete recap here.

Aug 26 2013
Aug 26

Why a multisite platform?

When your organization has multiple websites online, it might be worth considering a multisite platform.

What do you gain from a multisite platform? You can group the efforts of different projects.

  • This enables you to reuse the custom development from one site on others.
  • You can also shorten the startup process of analyses and the project by working with one party instead of many different ones.
  • Your people can use a fixed system for content management instead of a jumble of different systems.
  • You can also reuse styles & templating on all your sites.
  • All these websites no longer need to be hosted on different systems.
  • They no longer need to be maintained by different teams.
  • On an organizational level, you can move workloads within your organization by having different services deal with the maintenance of your complete web presence.

In short: you can save a lot of money when you see the long-term value of a multisite platform. The more websites you have, the more you can profit from the cost reduction.
 

Who Preceded You?

There are famous examples in Belgium regarding the use of multisite platforms.

http://cooldrops.be, an SME website provider that uses a multisite platform to service the websites of all its customers. Customers benefit from an integrated approach whereby all the sites are centrally maintained, and from updates that are made for the entire system.

http://stedelijkonderwijs.be ensured that roughly 200 schools got their own website within one platform, which is maintained by one party. They eliminated the individual costs for schools to build their own websites by building one platform on which the functionality only needed to be built once and was reused 200 times. In the past the functionality was developed per site, over and over again. The team that maintains the platform can let the schools do their own content management.

Why Drupal for a multisite platform?

Drupal is extremely suitable for a multisite platform. Because Drupal is open-source software, you don't have to pay per installation: there are no license fees. Therefore you don't have to calculate how many sites you would like to generate. The brilliance of Drupal is that you only have the code developed once and you can keep reusing it as many times as you like.

Drupal has a modular structure that makes it possible to upgrade all sites on the platform simultaneously. It's also possible to simultaneously upgrade separate components.

Drupal has a long tradition of being installed as a multisite. Even early versions of Drupal had a multisite installer that enabled people to run multiple sites on shared hosting using only one Drupal codebase.
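As a quick illustration of that classic setup: sites/sites.php maps incoming hostnames to site directories, and each directory holds its own settings.php (and thus its own database). The hostnames below are just examples:

// sites/sites.php
// Each entry points a hostname to a directory under sites/ with its own settings.php.
$sites['cabernet-sauvignon.wijnhandel.be'] = 'cabernet';
$sites['bordeaux.wijnhandel.be'] = 'bordeaux';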
 

What to look for when choosing a multisite platform?

There are 5 criteria for choosing a multisite platform.

  • Maintenance of code base
  • Maintenance of the database
  • Sharing content and users on sites
  • Administration per site
  • Custom coding possibilities per site

Each of these five criteria has a different solution. 

Domain access

Domain access is a solution based on the domain access module. It's an approach whereby you host multiple sites from one code base and one database, and which requires one admin. It's the most basic form of multisite that's available.

Who uses domain access for multisite?

It's mainly used for multisites that don't differ a great deal from each other.
An example would be a firm that sells wine and would like a separate site for every type of wine, e.g.: 

http://cabernet-sauvignon.wijnhandel.be 
http://bordeaux.wijnhandel.be

These sites contain a lot of the same functionality and content.
These sites are managed by one admin. 

Features

  • Ability to have one admin maintain all content of all websites.
  • There's a pretty tight limit on how much you can customize per subsite. This is partly because it has one codebase and one database.
  • Custom design and theming is possible.
  • One database and one codebase = easy maintenance.
  • Users and content can be shared between subsites.
  • You can have more than one URL.
  • Single Sign On is possible.

Organic groups spaces features

This setup is slightly more complex but also more powerful. Similar to domain access you have one code base and one database, but you can have a multisite system that enables every site to have its own role and permission system. This enables you to have the content of the different websites maintained by different services or divisions within your company or organization.

Who uses this?

For example an educational institution that has 200 school websites, but wants to enable each school to maintain its own website.

An organization that has several services, all of which want to represent themselves independently online. Every service can use the common functionality to create its own content and manage its users.

Features

  • Ability to configure an administrator and a role/permission system per subsite.
  • Custom code per site possible within limits, because it still only contains one codebase.
  • Custom theming and design per subsite is possible.
  • There is only one database and codebase, so maintenance is easy.
  • Users and content are shared when necessary.
  • It can carry more than one URL.
  • Single Sign on is possible.

Multisite drupal

In this case we're talking about a common code base, but separate Drupal installations (different databases).

Who uses this?

This is used when people want to be able to control every installation individually and the configuration differs greatly from the other subsites. A central installation profile is used whereby a couple of standard features can be enabled. This setup requires much more maintenance and is only worthwhile when the differences between subsites are significant enough.

Features

  • Different databases.
  • One code base.
  • Possibility for multiple URLs.
  • Different accounts per site.
  • Users and content are not shared by default. There are possibilities to share anyways.
  • SSO possible.
  • Custom code is possible per site but there are limits regarding development setup and maintenance, and it's not recommended.
  • Maintenance is harder due to the many databases and a system for automation is advisable.
  • Custom theming/design possible.
  • It's hard to set up a dev-staging-live workflow for each website during development.

Aegir (advanced multi site)

This offers the same multisite capabilities as above, but there's a system available to automate all the maintenance and installation. You can use multiple installation profiles. Additionally, it's possible to maintain multiple platforms spread over multiple servers (multiserver).

Who uses this?

Parties that run one or more different multisite setups. It usually involves more than 10 different websites.

An example is an organization that has a multisite aegir for all its products. Every product gets its own installation but the layout and functionality afterwards might be slightly different. Moreover this same organization can also have a platform to showcase all its services.

Features surplus to multisite

Setting up a staging workflow remains very difficult, because the platform contains the code of all the projects.

This system automates a lot of tasks: migrating, updating, backups, ...

An aegir can deploy multiple install profiles.

Multiserver support

Dropsolid platform

By working with platforms for many years, Dropsolid has also created its own platform. Its big advantage, on top of all the other platforms, is that you can do custom development on each site separately without interfering with other sites.

Who uses this?

Organizations that have their own development team that can develop on their Drupal sites and that want to reuse functionality. They can get access to the Dropsolid Shared platform, where they can develop and deploy code for each site. They have everything under control: every line of code, every deploy, every commit and every data transfer.

Since every site has its own setup, you can deploy very similar code, but when necessary also custom build everything. And everything can still be maintained by a central system. You can work independently from other sites in the multisite.

You learn to work according to Drupal standards and to deploy in a code-driven way. You control your projects and reuse code to its fullest extent.

Features

  • Different database per site. The maintenance is eliminated by the automated system.
  • Different codebase per site. The maintenance is eliminated by the automated system.
  • Different repository per site. The maintenance is eliminated by the automated system.
  • Different deployment pipeline per site. However, the maintenance is eliminated by the automated system. It's possible to develop each site independently, code-driven, from its own repository.
  • Unlimited freedom for custom code.
  • Multiple environments possible per site, so dev, staging, live and more are possible on each site in the multisite structure.
  • All maintenance-related tasks, like cron, backups, updates and monitoring, are executed by the central system.
  • SSO possible.
  • More than one URL possible.
  • No shared content. Web services for data exchange are possible.
  • No shared users. Web services for data exchange are possible.
  • Multiserver support

What to look for when choosing a multisite platform provider?

A multisite provider needs to comply with a number of things.

  • They have to be able to prove that they've built several platforms. This is not easy to do. It all looks very simple, but the complexity of a multisite project should not be underestimated.
  • Make sure the provider knows how their platform should be hosted. Such platforms usually run on several servers and it's not always easy to set this up.
  • Make sure the provider has sufficient Drupal knowledge for further development on your platform.

Conclusion

Multisite systems are the cost-effective solution for companies with lots of sites. If you're not already using one, you should start considering it. Thinking about it can't hurt and it doesn't need to get complicated. Hopefully this post has taught you something about the possibilities of multisite and made you aware of what your organization could gain from using it.

Jun 25 2013
Jun 25

Business Process analysis

To start a business process analysis, we first sit down with everyone involved to discover the business needs. This is done in a stakeholder analysis: a technique you can use to identify and assess the importance of key people, groups of people, or institutions that may significantly influence the success of your activity or project. This step will reveal your business needs and what is possible within budget. We'll talk with all the people involved (application users, business stakeholders, technical people, ...) and discover how all the different elements will influence the application being designed.

We will create a SWOT analysis of your business's use of a content management system. This will reveal why you need it.

After the SWOT analysis is done, the next step is to create a conceptual and functional analysis. This functional analysis will define the information architecture and feature selection and will produce wireframes. These wireframes will be accompanied by functional text explaining what all the elements of the content management system do. The concept needs to meet a range of criteria, including usability, accessibility, attractiveness, findability, credibility, value and usefulness.

The business process analysis is essential to discover the true need for an application. It will reveal whether a content management system is needed and whether it will be able to deliver value. Don't just pay to have a system: be sure it automates things, generates revenue and reduces costs.

Implementation of the process

A technical analysis will then be created: a document that the Drupal development team uses to start building the Drupal application. Developers will know exactly what to build and how to build it into the content management system.

With the technical document we can start the implementation. Learn how development is done.

When the implementation is done, the process does not stop: we measure the results and use this data to improve the process. The cycle starts again. Each iteration of the cycle will result in the discovery of more revenue, more efficient processes and reduced costs.

Mission of the business analyst

The software architect's job is to sit down with the people who know the business; he has the ability to work with business analysts by doing a business process analysis in function of the software needs.

As a software architect I can not only design the application, but also make sure development goes as planned. I can follow up on how the application is being built and whether it is up to standards, and I can assist teams with training and leadership. Like an architect reviewing a building until it is completed, a software architect does the same for the application.

Creating value & reducing costs

Doing the total analysis (business, functional, conceptual and technical) with a focus on the Drupal CMS ensures that no valuable resources are wasted on recreating functionality, a challenge that often exists when working with different parties and people with different skills. Although Drupal should not influence the design process, not knowing what a CMS like Drupal can do out of the box will often lead to complicated solutions for things that were already available. This could result in functionality being implemented in a cost-ineffective way. Knowing Drupal gives us the opportunity to consider alternatives for attaining the same business goal.

Development will be much smoother and cleaner when all levels of analysis are done with Drupal in mind. The developers will end up with a straightforward technical analysis.

Creating the complete analysis also eliminates time-consuming communication between the involved parties. Often a lot of miscommunication happens between partners that don't really know each other's business. Functional and conceptual people are often not fully aware of how developers operate, and vice versa. A gap can also exist between business, functional and technical. This problem is completely eliminated when all three terrains are well understood by the same party.

This will reduce the development and maintenance cost of the solution. Lowering the level of custom coding and increasing communication efficiency will increase the usefulness of the solution and deliver extra value towards its business goal.

Jun 04 2013
Jun 04

Drupal is not secure

This is simply not true. Drupal.org has a dedicated security team monitoring all security problems in core and contributed modules. The fact that Drupal is open source makes it, when well managed, even more secure than closed-source systems. Keeping Drupal secure is all about selecting the right implementation partner, one that knows how to take care of things.

Drupal is used by the New York Stock Exchange, which is the world's third most highly ranked terrorist target online. They are attacked by everything that exists out there. They rely on well-implemented open-source Drupal to stay on top of security.

Drupal does not perform

Drupal powers some of the most traffic-intensive websites in the world. Drupal can deliver fast experiences, but once again it has to be implemented correctly.

Drupal does not scale

Scaling Drupal means scaling your infrastructure. Just like any PHP application, Drupal can be scaled. Scaling Drupal means configuring it correctly so that it can take advantage of the powerful systems that allow it to scale.

Drupal is for small websites

Drupal can power small websites too, but it also powers websites with big traffic, lots of content and lots of users.

Drupal is for big websites

Drupal handles big websites just like it handles small ones.

Drupal is for hobbyists

Drupal is very popular with hobbyists, but don't be fooled: Drupal means business. Drupal is supported by multimillion-dollar venture-backed enterprises. Thousands of high-profile consultants are earning their fees with Drupal consultancy.

There is no support for drupal

Drupal development services are more widespread than ever before. As mentioned already, Drupal is supported by large enterprises and is implemented by a lot of Drupal shops around the world.

The Drupal community is perhaps the biggest source of support and information. It consists of thousands of Drupal developers worldwide, continuously upgrading Drupal and developing new features.

Is it just a matter of time before something else takes Drupal's place?

Then that something will need to convince thousands of web developers to bet their careers on it and get them to contribute like they do on drupal.org. No other proprietary system can keep up with the speed of improvement Drupal is bringing to the table. There are not many projects that can say more than ten thousand developers are working for them, for FREE.

Because developers can contribute to something greater than themselves, and by doing so also improve themselves, adding to their expertise and increasing their market value, they will keep investing in Drupal. If Drupal gets better, they get better, even in the long term. Open source gives developers more than just career opportunities; it gives them the opportunity to be part of something.

No other open-source web project is as successful as Drupal. Drupal is on its way to becoming the most widely used open-source web framework.

Open source means low quality

Not true. Open source means constantly improving quality. Closed-source bugs can lie undiscovered for ages; open-source bugs get detected and fixed by the community fast.

Vendor lock-in

Don't fear vendor lock-in. A CMS like Drupal allows you to take it anywhere. You are not tied to anyone: if you're not happy, you can leave for somebody else.

Drupal modules are low quality

Some indeed are. But when building a professional Drupal site, you should listen to the pros who know how to pick a module.

Drupal cms is just a cms

Drupal is a CMS out of the box, but it can be set up as countless kinds of web applications.

Drupal customization is difficult

Digging into Drupal to integrate it with other platforms requires some skill, but like any framework with this kind of potential, it needs a certain complexity to achieve its ambitious goals. Great flexibility comes with a bit of complexity.

Drupal development is expensive

Drupal development doesn't need to be expensive. It only becomes very expensive when extensive customization is needed. Drupal already delivers a lot out of the box by combining Drupal modules and Drupal distributions. Choosing the right party to devise your concept is critical.

Drupal is a Great product

True :)

May 31 2013
May 31


Drupal is an open-source content management system used by big websites on the web. There are thousands of Drupal developers, all claiming they are capable of implementing the platform. Surely not all developers are equally experienced when it comes to Drupal. How do you tell the good from the bad?

 

 

What is the Drupal Developer's experience?

Find out how long they have been developing Drupal websites. They should be able to show a list of high-quality, high-profile Drupal sites. Ask them to show you around in the back end of a site and to show off the capabilities of a Drupal site. Ask them to give you a demo so you can try it out yourself.

Do they specialize in Drupal development or do they offer every content management system on the planet?

Agencies and developers with too many offerings are not focused on world-class offerings that add value for their clients. Agencies that offer anything and everything mostly don't know the best features of the system and will deliver something basic that you pay too much for.

If the agency focuses on Drupal only, they know all the good stuff. They will give it to you at the same price, because the goodies are already part of their system.

They will also be able to give you any custom functionality you ask for, but they won't reinvent the wheel. Focused agencies will only resort to custom development if it is really necessary, whereas unfocused ones will resort to custom development because they don't know the functionality already exists, resulting in you paying too much. You don't want to be paying them to learn how to use Drupal.

 

Can they meet your needs now and in the future?

Drupal has thousands of add-on modules that can be added to the main core to perform certain functions. Ask the agency what add-on modules they typically use, which ones work as they are and which ones require modification. There's a good chance you will need to use a module or two in your site. If they can't demonstrate this ability it's likely you will have costly updates in the future. Every module has its issues, so check to see what customizations they've needed to make to fit a client's particular need and what modules they've created from scratch because they didn't exist.

Drupal is an open source community built by the hard work of thousands, for free! A good developer embraces the community, so ask them what modules they've contributed back to the open source community.

Check contributions.

How do I know if I'm going to get a good bang for my buck?

Picking up the expertise of a CMS platform takes deep pockets and a ton of research and experimentation. It takes years to gain a firm understanding of the ins and outs of these complex software systems. Be leery of developers claiming the lowest rates. Their low rates seem attractive, but the lack of experience will cost you more in the long run in the form of bug fixes, timeline overruns and misalignment of objectives. Cover yourself by asking about their warranty period: what is covered and for how long.

May 29 2013
May 29

Do you want to become involved in Drupal? There are a couple of ways to get a Drupal job. The fastest way to get a Drupal job is to try and find someone who can show you around. If you have been to drupal.org you might feel a little overwhelmed. The whole Drupal universe is big, very big, and it is difficult to figure out what you need to do and, even more importantly, what you want to do.

There are several paths one can follow to work with Drupal, and you don't have to be a rockstar Drupal developer with 27 contrib modules and 1000 core patches under your belt to be a valuable Drupal resource. First you need to discover what interests you the most and at what level you think you can be valuable.

Determine Your Level

You need to know what your skills are. Ask yourself these questions:

Can I program in PHP, or can I program in another language?

When going for a Drupal job, it helps when you are experienced in PHP and/or web development in general. But even if you are not, and you know how to program in other languages or you have experience creating HTML/CSS sites, you can be valuable. People with a strong technical and infrastructural background can learn Drupal.

There are three types of development profiles that can be valuable for a Drupal job. First, there are Drupal themers, who specialize in front-end Drupal development; they interact with the Drupal theme layer, so the skills required here are HTML, CSS and a front-end scripting language or library like JavaScript/jQuery. Then there are Drupal site builders, who interact with Drupal modules that are used to build sites through the interface, like Views, Panels and others. Finally, there are Drupal developers, who code custom modules and do integrations. All these terrains overlap, but typically everybody has their preferences and tends to lean to one of these sides when developing with Drupal.

Do I have front end skills?

Drupal jobs that require front-end skills are in high demand since the rise of the mobile revolution. Lots of companies want a responsive website, since this is the easiest way to be present on mobile platforms. Most content-oriented businesses tend to choose a responsive website that adapts to any screen rather than building an application for every platform. It is a lot more cost-effective to have a content management system like Drupal capable of serving responsive output than to build and maintain an app on every platform.

Typically these front-end developers are people with professional skills in design, HTML, CSS and a scripting language. With those tools they are capable of designing and implementing Drupal themes that are responsive.

A drupal front end job is all about designing great user experiences.

Do I have site building skills?

Drupal is evolving greatly as a site building tool. The tools Drupal offers are amazing: just by using the browser you can build an entire website. Of course, learning how to use these tools is comparable to learning how to use Photoshop, which has millions of possibilities and requires smart people who are able to combine these options into maintainable applications.

Drupal can build data types and define their data fields in the interface. Drupal has page builders that can build complex, context driven and responsive pages. Drupal has query builders that can pull any content in your database into themed lists. Drupal has export systems that allow drupal to stage content and configuration from one environment to another.

Drupal jobs requiring site building are on the rise. We need people that can mash up these complex interfaces. It is not just clicking.

A drupal sitebuilding job is all about building access points to content in the form of great useable and accessible user experiences.

Do I have coding skills?

Drupal developer jobs are in very high demand; scroll down a bit and you will see what I'm talking about. A Drupal developer has to have a wide range of skills. He has to know how to program in PHP and he has to know the Drupal API, which is extensive. He not only has to know the core APIs but also the contributed ones. He has to know how PHP works, and he must use the right tools to be productive.

He works a lot on integrations, so he will have to know web services and other standards like XML. If he has a notion of other technologies, that is usually a win. Today, professional Drupal development is largely about integrating Drupal into a company's IT infrastructure.

A drupal developer job is all about tying up loose ends and building cool custom functionality all while contributing to open source.

Do I have experience with other content management systems?

Experience with a content management system is a bonus. Knowing what a cms is all about helps you think in the right direction. If you don't really know what a cms is check this. We believe drupal is the best content management framework and system out there.

Content management systems are used more and more as a marketing tool to get the companies content out there.

If you are willing to start your drupal job don't hesitate.

Do I have experience with infrastructure management?

Drupal is being installed in more and more complex workflows (continuous integration) and is integrated deeper and deeper into companies' IT infrastructure. Drupal can need a lot of third-party software to run, depending on the demands. For example, a high-profile Drupal website might have a Varnish reverse proxy in front, one or more Apache Solr instances, MongoDB, Memcached servers, ...

To develop Drupal in a professional way you need version control, continuous integration tools (Jenkins), deployment scripts, staging environments, ... All these things need to be set up and maintained.

It does not stop there: most professional Drupal hosting offers monitoring, maintenance and continuous improvement contracts. This requires professional tools, and they need to be set up, maintained and used.

Drupal is being deployed in the cloud, and numbers grow quickly as Drupal instances are deployed there. All these servers need to be managed. Using the right tools, like Puppet or Chef, these cloud servers and their stacks can be managed like any other server.

Infrastructure Drupal jobs are in high demand and require a special breed of people, because they also need to know how Drupal works in order to advise developers on improving performance and security. The focus of infrastructure jobs has shifted away from the physical machines, because these can be obtained and maintained at such a low price that it takes special economic conditions to make owning your own servers or datacenters worthwhile. Therefore the focus of most Drupal system administrator jobs tends to shift towards the application. Drupal is a great opportunity for infrastructure people to come closer to the application. Managing and monitoring large cloud and/or hosted servers in function of Drupal is a very cool Drupal job.

A drupal infrastructure job is all about making those drupal sites run.

Do I have communication skills?

Communication skills are always needed, but some Drupal jobs require more of them. A Drupal developer job requires communication so you can communicate with the client, team lead or project manager. But a Drupal project manager requires even more communication skills, because typically he needs to communicate with stakeholders, developers, clients and managers. He must be able to keep an eye on scope, deadlines and resources.

Then you might wonder if you have the skills to manage a drupal project. When managing a drupal project it helps a lot if the project manager knows drupal. He has to know what it does and what it is capable of. It helps a lot if he knows what needs to be custom and what you get out of the box.

A drupal project manager job is all about combining drupal with people.

Do I have excellent people skills?

Do you have technical skills, people skills and communication skills? Then you might consider becoming a Drupal trainer. A Drupal trainer is a rare type of profile that specialized companies are looking for. These people know how to transfer knowledge to other people and teach them how to thrive in professional environments. They need to be very skilled in communication, have an ability to get the best out of people, and have a technical understanding of Drupal. We train Drupal trainers because we are Drupal trainers. If you are a teacher in another domain and have a technical background, you might consider becoming a Drupal trainer.

A drupal training job is all about getting people passionate about learning drupal.

Do I have work experience in other domains?

We love to work with and learn from professionals from other domains. If you are stuck in your current career and you want a change, then the wide variety of Drupal jobs might interest you. We know changing careers means stepping out of your comfort zone big time, and we have deep respect for people who are willing to do this. We will support you; we know that you will not be able to know it all from day one, but we are willing to give skilled people the chance to work with Drupal. We know that people who are good at one thing can, given the right support, become great at another.

Do I have writing skills?

Since Drupal is a content management system, companies offering Drupal services often need copywriters. Copywriters should have a good notion of SEO, so their texts reflect the best possible optimization.

Do I have marketing or sales skills?

Drupal means business these days. With Drupal breaking through in the enterprise, there is a lot of demand for people with commercial talent to help push Drupal further into the enterprise. This means big companies are starting to use Drupal. They are building platforms with it and integrating Drupal deeply into their IT infrastructure. They are building business around Drupal: generating revenue, reducing costs and increasing interaction, all with Drupal. This is because everybody is starting to see the value of Drupal, because Drupal combines social features, content management and a framework.

Contracts being sold in the Drupal industry are getting bigger and bigger. This means great challenges for commercial people. If you are a talented salesperson or a talented marketeer, you can benefit greatly from the rise of Drupal in the enterprise. Big contracts mean big commissions.

You don't sell alone. You should have support from drupal consultants who support you technically. Although it really helps having a technical background selling drupal solutions, it is clear that you can't know every technical detail.

Being in a Drupal sales job is all about showing the benefits of Drupal to the client.

Do I have a passion for supporting people and applications?

Supporting people using Drupal applications is more and more in demand. As a Drupal webmaster you bridge the gap between the user and Drupal: you support users in using Drupal. You need excellent communication skills. When you can combine webmastering with SEO and copywriting, you can be a very valuable resource.

Drupal Consultants

Doing one of these jobs can put you in a position to call yourself a Drupal consultant. In fact, a Drupal consultant will most of the time have one or two focus skills, but should be aware of the other skills and their value. Drupal consultants are typically put at the disposal of the client they work for, and they advise that client on how to use Drupal.

A consultant should have excellent Drupal skills and communication skills. They should think about how to make the client money by proposing solutions, and figure out what could reduce costs and what could increase value. One example would be proposing social features and gamification on the client's intranet to increase collaboration: make it visible who is sharing knowledge and get the knowledge in the organisation out there. This increases value because the company becomes less dependent on people falling sick or leaving; the knowledge stays in the organisation. Another example would be proposing a Drupal marketing framework, so the client can publish all the content stored in the organisation online and attract customers with it.

A Drupal consultant should look for value; that is why the client is hiring them. Drupal is full of opportunities for companies with content.

Determine What You Want

This is very important before you even consider a Drupal job: you must know what you want. We have looked at all the different skills and how they overlap. Ask yourself where in that picture you position yourself and what you want to do. Based on your skills, personality and preferences, try to find a Drupal job that leans towards one of the following:

Overview of Drupal jobs

What Does A Drupal Developer Earn?

This topic might be a taboo in the Drupal universe, but we believe we can talk about it openly.

At the junior levels it is not so much about getting as much money as possible, but about getting as much experience as possible and a chance to prove yourself. Once you arrive at the more senior levels and have worked hard to become a Drupal specialist, you should be able to get paid very well. People who work hard and smart, meaning you keep learning, keep expanding your knowledge, keep pushing your comfort zone and stay interested in other things, will be rewarded greatly.

A freelance senior Drupal developer with 5 years of experience can charge €500/day and more. On a contract basis, a senior Drupal developer could earn a salary package of as much as €60,000 annually. It does not stop there: high-profile Drupal consultants can earn even more if they can prove to the client that they have the skills and knowledge to transform the client's organisation by producing value and reducing costs.

Drupal job salary trend

Freelance Drupal job rate trend

Source: http://www.itjobswatch.co.uk/jobs/uk/drupal.do

The Drupal Market Is Hiring

It should be clear that the Drupal market is hiring and that Drupal is here to stay. Right now is a great time to start as a Drupal developer: you'll get great chances. Betting your career on Drupal is smart if you are passionate about web development.

With Drupal, all levels of skill and all interests are welcome.

Junior Drupal developers get the chance to receive training and become experienced Drupal developers.

Drupal development jobs are in high demand.

Becoming a Drupal consultant means variety of work and great pay.

Drupal means not only paying the bills, but also evolving all the time and having some fun too. Choose a Drupal career; you won't regret it.

Apr 16 2013
Apr 16

Industrious workflow

You are developing Drupal sites, but a lot of effort is going down the drain on deployments and manual testing. Regressions are creeping into your applications. Maintenance costs grow as projects get older. Nobody knows exactly how the project is really doing. How do you resolve this?

You need to start monitoring and testing your applications. Every time somebody introduces a change, the application can become unstable without you knowing it.

A quality assurance (QA) system is the heart of your development. Building QA into the process itself is the only way to assure quality in the long run. The process consists of:

  • A version control system
  • Staging of code and configuration on dedicated environments (dev, staging, live)
  • Continuous integration
  • Automatic testing
  • Training of the developers
  • Structured project management

Version control

Version control is indispensable for working on code as a team. You need it to have a historical record of all changes and to be able to compare changes made by other people. Without it, you cannot guarantee that you won't override other people's work.

We recommend Git as the version control system. It is used on drupal.org and is currently the most flexible version control tool available.
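
As a minimal sketch, a simple feature-branch workflow already gives you the history and review possibilities described above. The branch, remote and module names below are just examples:

```bash
# Start a feature branch from the latest mainline code
git checkout master
git pull origin master
git checkout -b feature/1234-contact-form

# Record the change with a descriptive message
git add sites/all/modules/custom/mysite_contact
git commit -m "Issue #1234: add contact form block"

# Publish the branch so a colleague can review it before merging
git push origin feature/1234-contact-form
```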

Staging

Staging code and configuration on dedicated environments is essential because the application must be able to evolve once it is in production. Developers must be able to introduce and test changes without affecting the live application.
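
As an illustration, with Drush site aliases you can refresh a development copy from the live site before testing a change there. This is only a sketch: the @live and @dev aliases are assumptions and need to be defined in your own aliases file.

```bash
# Copy the production database and files down to the dev environment
drush sql-sync @live @dev -y
drush rsync @live:%files @dev:%files -y

# Run pending database updates and clear caches on dev
drush @dev updb -y
drush @dev cc all
```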

Continuous integration

We strongly believe that a QA system needs to apply the continuous integration principle: on each deployment the tests must be executed. This is the only way you can guarantee that the application keeps working.
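
As a sketch of such a build step, the CI job could run a coding-standards check and the Drupal test suite on every commit. The site URL, the test group and the availability of the Coder/PHP_CodeSniffer standard are assumptions:

```bash
#!/bin/bash
set -e  # fail the build as soon as one step fails

# Check coding standards on the custom code
phpcs --standard=Drupal sites/all/modules/custom

# Run the SimpleTest group for this site against a dedicated CI test site
php scripts/run-tests.sh --php "$(which php)" --url http://ci.example.local "MySite"
```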

Continuous and automatic deployment

A QA system should deploy automatically to reduce errors and the time wasted on manual deployments. A tool like Jenkins can help us achieve this level of automation for deploying our Drupal site.

An automated department will be able to deliver applications faster, and can concentrate on the details and the features that make good applications great.
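
As a rough sketch, the Jenkins job could simply execute a deployment script like the one below on the target server. The @staging alias, the use of the Features module and the RELEASE_TAG variable are assumptions, not a fixed recipe:

```bash
#!/bin/bash
set -e

# Take the site offline while deploying
drush @staging vset maintenance_mode 1

# Update the code to the release that was just built
cd /var/www/staging
git fetch origin
git checkout "${RELEASE_TAG:-master}"

# Apply database updates, revert exported configuration, clear caches
drush @staging updb -y
drush @staging fra -y   # requires the Features module
drush @staging cc all

# Bring the site back online
drush @staging vset maintenance_mode 0
```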

Automatic testing

Automatic testing reduces the uncertainty and the regressions introduced by changes. By executing a batch of tests against the application on each change, errors are detected early and can be fixed fast, which shortens the bug-fixing process. Automatic tests keep bugs from reaching production undiscovered.

Automatic tests save hours and hours of manual testing. Can you imagine clicking every link and button and submitting every form in a large application?

Automatic tests are an investment, but once in place they yield a huge return on investment.
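
For a Drupal 7 site, a hedged example of running such a batch from the command line is Drush's SimpleTest integration; the test class and group names below are assumptions:

```bash
# Run a single functional test class while developing
drush test-run MySiteContactFormTestCase

# Run the whole test group, exactly as the CI job would
drush test-run "MySite"
```

The same commands can then be wired into the CI job so every change runs against the full batch.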

Developer training

This system must be used by developers, so developers need to be trained to use it. Check our training program to see how we think about training Drupal developers.

Structured project management

Of course, structured project management is mandatory. Working agile is probably the most suitable way of working. In fact, if you want to work truly agile, you need a quality assurance system, or you will not be able to guarantee that work already accepted by your clients won't regress.
