Aug 16 2017

by Elliot Christenson on August 16, 2017 - 1:28pm

As you may know, Drupal 6 has reached End-of-Life (EOL), which means the Drupal Security Team is no longer publishing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are, and we're one of them!

Today, there is a Moderately Critical security release for the Views module to fix an Access Bypass vulnerability.

The Views module enables you to create custom displays of Drupal data.

When creating a View, you have the option to enable the use of AJAX. The Views module does not restrict access to the AJAX endpoint to only Views configured to use AJAX. This is mitigated by having access restrictions on the view.

See the security advisory for Drupal 7 for more information.

Here you can download the Drupal 6 patch for 6.x-2.x or 6.x-3.x.

If you have a Drupal 6 site using the Views module, we recommend you update immediately.

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on

Aug 16 2017

I have been working on porting Examples for Developers from Drupal 7 to Drupal 8 as part of this year’s Google Summer of Code (GSoC), under the guidance of Navneet Singh and Vaibhav Jain. In this post, I will explain how to test the modules I have been building over the past few months: I was able to completely port two modules (Menu Example and Contextual Links Example) and port more than half of the AJAX Example module. To test the modules, I recommend first downloading the 8.x-1.x version of the Examples for Developers module.

  • Menu Example: The Drupal 7 menu system revolved around hook_menu(), which provided associations between paths and callback functions (controllers) and also served as a central place to provide menu items in different menus (mostly in the administration menu) associated with the paths, as well as providing tabs and action links on pages and contextual links for different paths. It also did access checking, entity loading, and so on. That is a lot to handle for one system. In Drupal 8, these areas of functionality are now separated into different systems. The association of a path with a controller, coupled with parameter upcasting and access checking, is now handled in the routing system. This system serves as a basis for path access on a Drupal 8 site. The Drupal 8 menu system is now a collection of different APIs for menu items defined by modules as well as local tasks, actions, and contextual links. Just follow a few steps to install this module.
  1. Download and apply the patch: open the module's issue queue and download the latest patch from it.
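Applying a patch downloaded from the issue queue typically looks like the sketch below. The filenames are made up for illustration; use the name of the patch file you actually downloaded.

```shell
# Toy demonstration of applying a unified diff with patch -p1.
printf 'old line\n' > menu_example.module
printf -- '--- a/menu_example.module\n+++ b/menu_example.module\n@@ -1 +1 @@\n-old line\n+new line\n' > fix.patch

# -p1 strips the leading a/ and b/ path components used by git-style diffs.
patch -p1 < fix.patch
cat menu_example.module   # now contains "new line"
```

Inside a git checkout, `git apply fix.patch` achieves the same thing.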
Aug 16 2017

Last week, a client asked me to investigate the state of Elasticsearch support in Drupal 8. They're using a decoupled architecture and wanted to know how—using only core and contrib modules—Drupal data could be exposed to Elasticsearch. Elasticsearch would then index that data and make it available to the site's presentation layer via the Elasticsearch Search API.

During my research, I was impressed by the results. Thanks to Typed Data API plus a couple of contributed modules, an administrator can browse the structure of the content in Drupal and select what and how it should be indexed by Elasticsearch. All of this can be done using Drupal's admin interface.

In this article, we will take a vanilla Drupal 8 installation and configure it so that Elasticsearch receives any content changes. Let’s get started!

Downloading and starting Elasticsearch

We will begin by downloading and starting Elasticsearch 5, which is the latest stable release. Download it and follow the installation instructions. Once you have started the process, open your browser and enter the Elasticsearch address (http://localhost:9200 by default). You should see something like the following screenshot:

Elastic home

Now let’s set up our Drupal site so it can talk to Elasticsearch.

Setting up Search API

High five to Thomas Seidl for the Search API module and Nikolay Ignatov for the Elasticsearch Connector module. Thanks to them, pushing content to Elasticsearch is a matter of a few clicks.

At the time of this writing there is no available release of Elasticsearch Connector, so you will have to clone the repository, check out the 8.x-5.x branch, and follow the installation instructions. As for Search API, just download and install the latest stable version.

Connecting Drupal to Elasticsearch

Next, let’s connect Drupal to the Elasticsearch server that we configured in the previous section. Navigate to Configuration > Search and Metadata > Elasticsearch Connector and then fill out the form to add a cluster:

Add cluster

Click 'Save' and check that the connection to the server was successful:

Cluster added

That’s it for Elasticsearch Connector. The rest of the configuration will be done using the Search API module.

Configuring a search index

Search API provides an abstraction layer that allows Drupal to push content changes to different servers, whether that's Elasticsearch, Apache Solr, or any other provider that has a Search API compatible module. Within each server, Search API can create indexes, which are like buckets where you can push data that can be searched in different ways. Here is a drawing to illustrate the setup:


Now navigate to Configuration > Search and Metadata > Search API and click on Add server:

Search API home

Fill out the form to let Search API manage the Elasticsearch server:

Connect Elasticsearch to Search API

Click Save, then check that the connection was successful:

Elasticsearch server added

Next, we will create an index in the Elasticsearch server where we will specify that we want to push all of the content in Drupal. Go back to Configuration > Search and Metadata > Search API and click on Add index:

Elastic add index

Fill out the form to create an index where content will be pushed by Drupal:

Index form Index form 2 Index form 3

Click Save and verify that the index creation was successful:

Index added

Verify the index creation at the Elasticsearch server by opening the index URL in a new browser tab:

Index verified

That’s it! We will now test whether Drupal properly updates Elasticsearch when content changes.

Indexing content

Create a node and then run cron. Verify that the node has been pushed to Elasticsearch by opening the index URL, where elasticsearch_index_draco_elastic_index is the index name shown in the above screenshot:

Content indexed

Success! The node has been pushed, but only its identifier is there. We need to select which fields we want to push to Elasticsearch via the Search API interface at Configuration > Search and Metadata > Search API > Our Elasticsearch index > Fields:

Add fields

Click on Add fields and select the fields that you want to push to Elasticsearch:

Browse fields

Add the fields and click Save. This time we will use Drush to reset the index and index the content again:


After reloading, we can see the added fields:

Content indexed extended

Processing the data prior to indexing it

Here is a bonus: Search API provides a list of processors that alter the data before it is indexed in Elasticsearch. Transliteration, filtering out unpublished content, and case-insensitive searching are all available via the web interface. Here is the list, which you can find by clicking Processors when viewing the index in Search API:

Field processors

When you need more, extend from the APIs

Now that you have an Elasticsearch engine, it’s time to start hooking it up with your front-end applications. We have seen that the web interface of the Search API module saves a ton of development time, but if you ever need to go the extra mile, there are hooks, events, and plugins that you can use in order to fit your requirements. A good place to start is the Search API’s project homepage. Happy searching!


Thanks to:

Aug 16 2017
We've got a new installment in the decoupled Drupal project series we're working on with Elevated Third and Hoorooh. The project we're documenting was one we worked on for Powdr Resorts, one of the largest ski operators in North America. The first installment in the series was A Deep Dive into a Decoupled Drupal 8 Project.
Aug 16 2017

On Sunday, 24 September we plan to start at 8am from Krems and travel to Tulln. At 11am we’ll arrive in Tulln and meet at the Weshapers office for some drinks & BBQ.

In the afternoon at 2pm, we plan to leave Tulln and cycle the remaining 40 km to finally arrive in Vienna.

To sum-up, the meeting points are:

Kaiserwiese, Riesenrad

The arrival is planned for Sunday, 24 September at 5pm in front of the big wheel in Vienna at Kaiserwiese.

How to get there?

There are many cycling routes that lead to Vienna. We created a map that currently highlights roads from east and west along the Danube. Also, check out the EuroVelo routes; bessone has summarized the interesting ones for Vienna.

If you just want to join for the last day, it’s a 30-minute train ride from Vienna to Tulln, or 1:10 from Vienna to Krems, and you can bring your bike on the train. Check ÖBB to book your train ticket.

Convinced? Tell us you are coming!

[embedded content]

Tour de Drupal Amsterdam 2014 from SchnitzelCopter on Vimeo.

Aug 16 2017

Pushing clean code is not everyone's cup of tea; it needs extensive knowledge and practice. Before a website goes live, it needs to pass certain standards and checks in order to deliver a quality experience. Certainly, a clean website is a demand of almost every client, and it should be.

In this blog post, you will learn why we need to implement a git pre-commit hook and how it works. Along the way, we will implement working examples in order to gain a better understanding.

Why we need to implement a git pre-commit hook

Any website going live should pass certain standards and checks. If the site is built on a framework, these checks are mandatory. How do we ensure all developers are committing clean code? One way is code review, but that is manual and we can't guarantee every issue is caught. Another way is to set up continuous integration (CI), where these checks run as automated jobs. In both cases, we can catch issues even earlier, before any developer pushes code, by using a git pre-commit hook.

Did you know git supports hooks? A list of supported hooks and sample code can be found in the ‘.git/hooks’ directory. You can read more about each git hook here. In this article, we will explore the git pre-commit hook.
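You can see the samples yourself by listing that directory in any repository (the repository name below is made up for the demonstration):

```shell
# Every freshly initialized repository ships sample hook scripts.
# Removing the .sample suffix (and keeping the file executable)
# activates the corresponding hook.
git init -q hook-demo
ls hook-demo/.git/hooks
```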

How the git pre-commit hook script works

Whenever a git commit is performed, the pre-commit hook script is executed. In it we can check syntax, the presence of debugging functions, merge conflict markers, and framework coding standards in the files staged for commit. If any check fails, the script has to throw an error, that is, ‘exit 1’. Otherwise it should just ‘exit 0’, which means success.


Here, I have created an example git pre-commit hook script specifically for Drupal. You can go through the code here: manoj-apare/pre-commit. First, we have to run these checks only on staged files, skipping deleted files, using the command ‘git diff-index --diff-filter=ACMRT --cached --name-only HEAD --’. In the rest of the article, I will explain which checks the script covers for this list of files, and how.
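A minimal sketch of such a hook is shown below, wrapped in a function for illustration (in a real .git/hooks/pre-commit you would run the loop directly and exit). The var_dump check stands in for the full set of checks; everything else follows the same shape.

```shell
# Sketch of a Drupal-flavoured pre-commit check: scan files staged for
# commit (skipping deletions) and reject any staged line that adds a
# var_dump() call. A non-zero return aborts the commit.
precommit_check() {
  status=0
  for file in $(git diff-index --diff-filter=ACMRT --cached --name-only HEAD --); do
    # Only lines prefixed with '+' in the staged diff are new in this commit.
    if git diff --cached "$file" | grep -Eq '^\+.*var_dump\('; then
      echo "Debugging function found in $file"
      status=1
    fi
  done
  return $status
}
```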

Syntax check

To check syntax, you can use the PHP linter to catch compilation errors, that is, the command ‘php -l’. To run the linter check only on PHP files, we can filter the staged-file list: ‘git diff-index --diff-filter=ACMRT --cached --name-only HEAD -- | egrep '\.php$|\.inc$|\.module$|\.theme$'’.
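The egrep filter keeps only PHP-family files; here a toy input list stands in for the staged files:

```shell
# Only .php, .inc, .module and .theme files are worth passing to `php -l`.
printf 'README.md\nfoo.module\nbar.inc\nlogo.png\nbaz.php\n' \
  | egrep '\.php$|\.inc$|\.module$|\.theme$'
# matches foo.module, bar.inc and baz.php
```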

Check for debugging functions

To check for debugging functions, we can use the grep tool: ‘git diff --cached $FILE | egrep -x "$pattern"’, where $FILE is the filename and $pattern is a regular-expression pattern for debugging functions, for example ‘^\+(.*)?var_dump(.*)?’.
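A quick way to see the pattern in action, with sample diff lines invented for illustration:

```shell
# In a diff, added lines start with '+', so the pattern only flags
# var_dump() calls that this commit introduces.
printf '+var_dump($user);\n-var_dump($removed);\n+echo $ok;\n' \
  | egrep -x '^\+(.*)?var_dump(.*)?'
# -> +var_dump($user);
```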

Merge conflict marker check

Merge conflict markers can be checked on all files staged for commit using the egrep pattern ‘egrep -in "(<<<<|====|>>>>)+.*(\n)?" $FILE’.
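For instance, with a conflicted file mocked up via printf:

```shell
# -i ignores case, -n prints the line number of each marker found.
printf 'clean line\n<<<<<<< HEAD\ntheir change\n=======\n' > conflicted.txt
egrep -in "(<<<<|====|>>>>)+.*(\n)?" conflicted.txt
# -> 2:<<<<<<< HEAD
# -> 4:=======
```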

Coding standard check

Using phpcs (PHP CodeSniffer), we can check for coding standards with ‘phpcs --standard=Drupal’. There is no need to check coding standards for file formats like images and fonts, so we filter the staged files down to the extensions ‘php,module,inc,install,test,profile,theme,js,css,info,txt’.
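The extension filter can be expressed just like the earlier PHP filter (toy input again; the phpcs invocation itself is shown as a comment since it requires the Drupal coder standard to be installed):

```shell
# Only text-like extensions are passed on to the sniffer.
exts='php|module|inc|install|test|profile|theme|js|css|info|txt'
printf 'logo.png\nfont.woff\nfoo.module\nstyle.css\n' | egrep "\.($exts)$"
# matches foo.module and style.css; each surviving file would then be
# checked with:
#   phpcs --standard=Drupal "$FILE"
```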

See the screenshot below for an example of the coding standard check.

Git precommit hook

Finally, using a git pre-commit hook, we can make sure that we are pushing clean code. I hope this blog helps you check for coding standard violations, syntax errors, and debugging functions. Moreover, it should also help you reduce the number of failing automated tests in CI.

Aug 16 2017

Last week we looked at creating a link that opens in a modal by adding a few attributes to the link. We are going to take this one step further this week by creating the modal in a custom module. This will give you much more flexibility over what you include in the modal.

Create the module info file

In this example, I’m going to call the module custom_modal.

First, create the module info.yml file, custom_modal.info.yml, in the module folder (called custom_modal). The file provides basic information to Drupal about the module. Add the following to custom_modal.info.yml:

name: Custom Modal
type: module
description: Display a modal
core: 8.x
package: Custom

Create the path for the modal

Last week, you added a link to a block and clicking on the link triggered the modal. Here is a recap of that link:

<p><a class="use-ajax" data-dialog-type="modal" href="">Search</a></p>

With a custom module, you will need to create the link in code and clicking on that link will trigger the modal. To do that in Drupal 8, you can register a route.

Create the route file, called custom_modal.routing.yml. In custom_modal.routing.yml, add the following route:

custom_modal.modal:
  path: '/modal-example/modal'
  defaults:
    _title: 'Modal'
    _controller: '\Drupal\custom_modal\Controller\CustomModalController::modal'
  requirements:
    _permission: 'access content'

When the user clicks a link to modal-example/modal, the modal will be triggered. The controller is '\Drupal\custom_modal\Controller\CustomModalController::modal', which means that when this path is requested, that controller method will be called.

Create the controller

Next you are going to create the controller mentioned above. In the root of the custom_modal folder create:

  • a folder called src
  • a folder inside src called Controller
  • a file inside Controller called CustomModalController.php

And add the following code to CustomModalController.php:


<?php

/**
 * @file
 * CustomModalController class.
 */

namespace Drupal\custom_modal\Controller;

use Drupal\Core\Ajax\AjaxResponse;
use Drupal\Core\Ajax\OpenModalDialogCommand;
use Drupal\Core\Controller\ControllerBase;

class CustomModalController extends ControllerBase {

  public function modal() {
    $options = [
      'dialogClass' => 'popup-dialog-class',
      'width' => '50%',
    ];

    $response = new AjaxResponse();
    $response->addCommand(new OpenModalDialogCommand(t('Modal title'), t('The modal text'), $options));

    return $response;
  }

}

This creates an Ajax command to open a dialog modal. When the modal is open, it will contain the text “The modal text”.

You could use this to show anything else you want in a modal. For example, you might decide to define a form so that users can complete an action and show that in the modal.

Create the block

And finally, you can create the block for this, which will contain the button to call the modal.

In the root of the custom_modal folder create:

  • a folder inside src called Plugin
  • a folder inside Plugin called Block
  • a file inside Block called ModalBlock.php

And add the following code to ModalBlock.php:

<?php

/**
 * @file
 * Contains \Drupal\custom_modal\Plugin\Block\ModalBlock.
 */

namespace Drupal\custom_modal\Plugin\Block;

use Drupal\Core\Block\BlockBase;
use Drupal\Core\Url;
use Drupal\Core\Link;
use Drupal\Component\Serialization\Json;

/**
 * Provides a 'Modal' Block.
 *
 * @Block(
 *   id = "modal_block",
 *   admin_label = @Translation("Modal block"),
 * )
 */
class ModalBlock extends BlockBase {

  /**
   * {@inheritdoc}
   */
  public function build() {
    $link_url = Url::fromRoute('custom_modal.modal');
    $link_url->setOptions([
      'attributes' => [
        'class' => ['use-ajax', 'button', 'button--small'],
        'data-dialog-type' => 'modal',
        'data-dialog-options' => Json::encode(['width' => 400]),
      ],
    ]);

    return array(
      '#type' => 'markup',
      '#markup' => Link::fromTextAndUrl(t('Open modal'), $link_url)->toString(),
      '#attached' => ['library' => ['core/drupal.dialog.ajax']],
    );
  }

}

This is creating a link for the block using the URL directly from the route that was defined earlier in custom_modal.routing.yml:

$link_url = Url::fromRoute('custom_modal.modal');

It then sets the attributes for that link. You’ll notice that this code sets a CSS class of use-ajax and a data-dialog-type attribute with a value of modal:

'attributes' => [
  'class' => ['use-ajax', 'button', 'button--small'],
  'data-dialog-type' => 'modal',
],

This is the same as the link you were shown last week:

<p><a class="use-ajax" data-dialog-type="modal" href="">Search</a></p>

The code then creates the link using $link_url:

 '#markup' => Link::fromTextAndUrl(t('Open modal'), $link_url)->toString(),

Open modal will be the anchor text for the link, which you can change to whatever you want.

And finally it attaches Drupal’s dialog library, which uses jQuery UI dialog:

'#attached' => ['library' => ['core/drupal.dialog.ajax']]

Add the block to a region

Head on over to the Extend menu in your Drupal site and enable this module (or use Drush).

Now you need to enable the block and add it to a region. Head over to Block Layout and add it to a region of your choice. In this example, I’m going to add it to sidebar first.

After clicking on the Place Block button next to the region, you can search for the modal block you just created by its name (“modal block”).

Head back to the main site and you should see the block in the side bar.

Open modal in Drupal 8

And this will open the modal!

Even though this example doesn’t do anything other than display text in a modal, it gives you the tools to create your own modal in code and do a lot more in the modal - such as add a custom form. We’ll be looking at doing just that in the near future (stay tuned). We'll also be looking into how to gracefully deal with users who don't have Javascript enabled.

Aug 16 2017

If you’ve ever used Alexa, it may seem like it must be extremely complicated to get her to respond like she does. However, if you have your content inside Drupal, it’s not terribly difficult to get her to utilize that data for your own custom Alexa skill. Let’s take a look at how to accomplish that.

Add the Alexa module to your website 

To have your website talk to Amazon, it has to be reachable by Amazon’s servers. That means your site will have to be publicly accessible over an https domain. This isn’t an endorsement, but for this example we’ll be using a Pantheon-hosted site, which allows free Drupal dev sites to be spun up quickly. To implement this, the first thing we need to do is add the Alexa module to our Drupal 8 site.

Set up an Alexa Skill on the Amazon Developer site 

After your site is up and you have the module installed, it’s time to start setting up the Amazon side of things. You’ll first need to create an account on the Amazon Developer site. Next, you’ll need to set up a new Alexa Skill.

Once you’ve created a new Alexa skill, it’s time to start configuring it to reach our site. The first thing to note is the Application ID that Amazon provides you. That needs to be copied into the Alexa module’s configuration, located on your site at /admin/config/services/alexa.

Next, back on the Amazon Development site, give your app a Name and an Invocation Name. The Invocation Name is what the end user will have to utter in order to activate the skill. For our example, we’ll be using the Invocation Name of ‘world publish’.

Next, click on Interaction Model. This is where we will begin to set up the custom commands that Alexa can respond to. These commands are called intents. Here we have set up two custom intents, readArticle and WorldPublish.

The other four listed are the default intents that Amazon provides to all Alexa apps and won’t require any setup on this side from us (though we will set a custom value for the AMAZON.HelpIntent when we are writing our custom module).

Our WorldPublish intent here will be used to ask Alexa what the five latest articles are on our site.

To improve the chance that the user utters the right intent, you can add more phrasings of the same question so that Alexa will still respond appropriately. In our custom module, we will set up the actions required for Alexa to respond to this question by reading out the titles of the latest articles by date.

Once Alexa has given the user the names of the newest articles, the user may want one of those articles read to them. In our readArticle intent, we create two ways for the user to get an article read back to them. This intent also shows how we can use variables in our command. This variable will be sent to our custom module where we can use that to pull in the name of the article we want Alexa to read back to us.

Create custom module for your responses 

Now that we have some intents for Alexa to use, it’s time to start writing some code for what happens when a command is sent to your website. We’ll be naming the module demo_alexa.

In the demo_alexa.info.yml file, put the following:

name: Alexa Latest Articles Demo
type: module
description: Demonstrates an integration to Amazon Echo.
core: 8.x
package: Alexa
dependencies:
  - alexa

Be sure to add the Alexa module as a dependency. Next, we need to create the file that will do all the heavy lifting in our custom module. Create a file inside src/EventSubscriber/ called RequestSubscriber.php.

Let’s break down what code goes into this file. First, we need to create a namespace and our use statements.

<?php

namespace Drupal\demo_alexa\EventSubscriber;

use Drupal\alexa\AlexaEvent;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Drupal\paragraphs\Entity\Paragraph;

Next, we need to make our main class and a function that gets the event.

/**
 * An event subscriber for Alexa request events.
 */
class RequestSubscriber implements EventSubscriberInterface {

  /**
   * Gets the subscribed events.
   */
  public static function getSubscribedEvents() {
    $events['alexaevent.request'][] = ['onRequest', 0];
    return $events;
  }

Now we can make the function that gives responses for each of our intents. Here’s the code for that:

  /**
   * Called upon a request event.
   *
   * @param \Drupal\alexa\AlexaEvent $event
   *   The event object.
   */
  public function onRequest(AlexaEvent $event) {
    $request = $event->getRequest();
    $response = $event->getResponse();

    switch ($request->intentName) {
      case 'AMAZON.HelpIntent':
        $response->respond('You can ask "what are the latest articles" and I will read the titles to you');
        break;

      case 'WorldPublish':
        $latestArticles = current_posts_contents();
        $response->respond($latestArticles);
        break;

      case 'readArticle':
        $article = $request->getSlot('Article');
        $articleResponse = current_post_body($article);
        $response->respond($articleResponse);
        break;

      default:
        $response->respond('Hello World Publish User. I can tell you the latest articles.');
        break;
    }
  }
Here you can see the names of the two custom intents we created earlier, WorldPublish and readArticle. You can also see how this can be used to set values for the default intents, such as AMAZON.HelpIntent: when the user asks Alexa for help, we now have a custom response that Alexa gives back to the user. This function also lets us set a default response for when Alexa doesn’t understand the question. This is a good place to give the user the phrasing needed to get the proper response from Alexa.

Let’s take a look in more detail at the two custom intents we have in the code above. First off we have the WorldPublish intent.

   case 'WorldPublish':
     $latestArticles = current_posts_contents();

If the WorldPublish intent is the one the user uttered, then we’ll run a custom function named current_posts_contents and then Alexa will respond back with the response from that function. Here is what that custom current_posts_contents function is doing:

/**
 * Get latest articles function.
 *
 * Query the database for the five most recently created published articles.
 *
 * @return string
 *   A spoken-text summary of the targeted posts.
 */
function current_posts_contents() {
  $query = \Drupal::entityQuery('node');
  // Return the newest 5 articles.
  $query->condition('type', 'article')
    ->condition('status', 1)
    ->sort('created', 'DESC')
    ->range(0, 5);

  $nids = $query->execute();

  $fullstring = 'Here are the five latest articles from WorldPublish, ';

  foreach ($nids as $nid) {
    $node = \Drupal::entityTypeManager()->getStorage('node')->load($nid);
    $name = $node->getTitle();
    $fullstring = $fullstring . ' ' . $name . ',';
  }

  $fullstring = $fullstring . '. You can have me <phoneme alphabet="ipa" ph="ɹˈiːd">read</phoneme> you one of the articles by saying, Alexa read, and then the title of the article.';

  return $fullstring;
}


This function queries the content type named ‘article’, with the condition that the status is published, ordered by when the article was created, and set to return the first five results. The query gives us an array of nids that we can use in a foreach loop. We then load each node, get its title, and append the title to a string that will be Alexa’s response.

Another thing to note in this code is that sometimes, Alexa won’t get the phonetic pronunciation of words correct. In the statement “You can have me read you one of the articles”, Alexa was saying “red” instead of “reed” when pronouncing “read”. To correct that we can use the Speech Synthesis Markup Language to directly tell Alexa how to pronounce a specific word. To get the correct sounds, you can use the phoneme alphabet that is provided on Amazon’s help site.

<phoneme alphabet="ipa" ph="ɹˈiːd">read</phoneme>

By using the phoneme alphabet in this code, Alexa will change her pronunciation to the correct way to say the word “read” in this sentence.

Now that we have had Alexa read us a list of the five latest articles, let’s take a look at the readArticle case:

  case 'readArticle':
     $article = $request->getSlot('Article');
     $articleResponse = current_post_body($article);

So here we can see where our ‘Article’ variable we set in our intent on the Amazon Developer site comes into play. We’re first grabbing that value from what the user tells Alexa and storing it inside of $article. Then we’re sending that value to a custom function called current_post_body to get our article for Alexa to read. Here’s what’s inside that function:

/**
 * Get body of an article.
 *
 * Get the body of the newest article whose title contains the given string.
 *
 * @return string
 *   The spoken-text response for the targeted post.
 */
function current_post_body(&$article) {

  $query = \Drupal::entityQuery('node');
  // Return the newest matching article.
  $query->condition('type', 'article')
    ->condition('status', 1)
    ->condition('title', $article, 'CONTAINS')
    ->sort('created', 'DESC')
    ->range(0, 1);

  $nids = $query->execute();

  if ($nids) {
    foreach ($nids as $nid) {
      $node = \Drupal::entityTypeManager()->getStorage('node')->load($nid);
      $name = $node->getTitle();
      $foo = '';
      foreach ($node->field_paragraph as $item) {
        $target_id = $item->target_id;
        $paragraph_node = Paragraph::load($target_id);
        $addition = $paragraph_node->field_wysiwyg->value;
        // Remove html tags from value.
        $addition = strip_tags($addition);
        $foo = $foo . $addition;
      }
      $response = 'Here is the article, ' . $name . '. ' . $foo;
    }
  }
  else {
    $response = 'I do not see an article by that name. To have me list the latest 5 articles, say, alexa what are the latest articles.';
  }

  return $response;
}


The first thing this code does is run a query through the published article nodes, looking for a title that contains the string the user supplied. It will then return the latest article it can find that contains that string. If we find a piece of content that matches, we grab all the values from the paragraph fields inside the article. Note that if you aren’t using paragraph fields for your content, this is where you could return whichever fields you want Alexa to read back. Lastly, the code strips out any HTML tags from the resulting string so that all that is left is the text of the article. This string is now ready to be given back to Alexa for her response.

Now that we have our code for our responses back to Alexa, the last file we’ll need is demo_alexa.services.yml:

services:
  # The service name is illustrative; any unique name works.
  demo_alexa.request_subscriber:
    class: Drupal\demo_alexa\EventSubscriber\RequestSubscriber
    tags:
      - { name: event_subscriber }

This should be all the code you’ll need to now test out your Alexa skill. Be sure to enable your demo Alexa module and then head back over to the Amazon Developer site.

Test the responses from the Amazon Developer Site 

Back on the Amazon Developer site, click on the Test tab and make sure your skill is enabled for testing on your account.

As of the time of this blog posting, there’s currently an issue with the test text simulator not working correctly but there’s an easy workaround for that. Be sure on “Use Service Simulator to test your HTTPS endpoint” that your web domain is listed in the box. Then you should be ready to enter in an Utterance into the text box. Type “help” into the box and this will send that to our code we just wrote. When you click the submit button it should send the AMAZON.HelpIntent request to our code and our code will respond back with the custom response we gave.

As you can see though, it looks like it didn’t work. This is the issue that Amazon is aware of and is currently fixing. We can get by this though by copying everything inside the Service Request box and then pasting it into the Json Request box in the JSON tab.

You can then click on ‘Listen’ and you’ll hear Alexa repeat back what the website sent as the response! You can try it again by asking Alexa ‘What are the latest articles’ and if you have articles on your site it should list out the articles for you.

Here’s a quick video showing this custom module working using an Amazon Echo Dot. This shows off how to activate the demo on the Echo, the commands for listing the articles, reading the article specified and also one of the built in default commands to stop Alexa talking. Be warned that if you have an Echo in your room with you, it will also try to respond to the commands said in this video as well!

[embedded content]

So that’s it! With this code, you have a good example of how easy it can be to get Alexa to talk to your website. This could easily be used to create lots of different, fun interactive ways for your users to access all of the various content on your Drupal website.

Additional Resources
Building Rest Endpoints with Drupal 8 | Blog
Creating Content with YAML Content Module | Blog
Mediacurrent's Drupal Theme Generator | Blog

Aug 16 2017

The last blog post might have left you wondering: "Plugins? It already does everything!". Or you are like one of the busy contributors and already identified a missing feature and can't wait to take the matter into your own hands (good choice).

In this and the following posts we will walk you through the extension capabilities of the GraphQL Core module and use some simple examples to show you how to solve common use cases.

I will assume that you are already familiar with developing Drupal modules and have some basic knowledge of the Plugin API and Plugin Annotations.

The first thing you will want to do is disable the GraphQL schema and result caches. Add these parameters to your service configuration:

    result_cache: false
    schema_cache: false

This will make sure you don't have to clear caches with every change.

As a starting point, we create an empty module called graphql_example. In the GitHub repository for this tutorial, you will find the end result as well as commits for every major step.

Diff: The module boilerplate

A simple page title field

Can't be too hard, right? We just want to be able to ask the GraphQL API what our page title is.
To do that we create a new class PageTitle in the appropriate plugin namespace Drupal\graphql_example\Plugin\GraphQL\Fields.
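The original listing is not reproduced here, so below is a rough sketch of what such a plugin can look like. Treat it as an assumption-laden illustration: the exact base class location, annotation name, and `resolveValues()` signature vary between versions of the GraphQL module, and the services used are my choices, not necessarily the article's.

```php
<?php

namespace Drupal\graphql_example\Plugin\GraphQL\Fields;

use Drupal\graphql_core\GraphQL\FieldPluginBase;
use Youshido\GraphQL\Execution\ResolveInfo;

/**
 * A field that resolves the page title (naive first attempt).
 *
 * @GraphQLField(
 *   id = "page_title",
 *   type = "String",
 *   name = "pageTitle",
 *   nullable = true,
 *   multi = false
 * )
 */
class PageTitle extends FieldPluginBase {

  /**
   * {@inheritdoc}
   */
  public function resolveValues($value, array $args, ResolveInfo $info) {
    // Naive approach: resolve the title of the *current* route,
    // i.e. whatever request is handling this GraphQL query.
    $request = \Drupal::request();
    $route = \Drupal::routeMatch()->getRouteObject();
    if ($route) {
      yield \Drupal::service('title_resolver')->getTitle($request, $route);
    }
  }

}
```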

Let's talk this through. We've created a new derivation of FieldPluginBase, the abstract base class provided by the graphql_core module.

It already does the heavy lifting for integrating our field into the schema. It does this based on the meta information we put into the annotation:

  • id: A unique id for this plugin.
  • type: The return type GraphQL will expect.
  • name: The name we will use to invoke the field.
  • nullable: Defines if the field can return null values or not.
  • multi: Defines if the field will return a list of values.

Now, all we need to do is implement resolveValues to actually return a field value. Note that this method is expected to use the yield keyword instead of return, which makes it a generator.

Fields can also return multiple values, and the framework already handles this within the GraphQL type definitions. So all we do is yield as many values as we want; for single-value fields, the first one will be chosen.

So we run the first GraphQL query against our custom field.

query {
  pageTitle
}

And the result is disappointing.

{
  "data": {
    "pageTitle": null
  }
}

Diff: The naive approach

The page title is always null because we extract the page title of the current page, which is the GraphQL API callback and has no title. We then need a way to tell it which page we are talking about.

Adding a path argument

Lucky for us, GraphQL fields can also accept arguments. We can use them to pass in the path of a page and get the title for real. To do that, we add a new annotation property called arguments. This is a map of argument names to argument types. In our case, we add one argument named path that expects a String value.

Any arguments will be passed into our resolveValues method with the $args parameter. So we can use the value there to ask the Drupal route matcher to resolve the route and create the proper title for this path.
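A sketch of the argument-taking variant follows. As before, this is illustrative: the annotation keys match the ones described in the text, but the router and title-resolver service calls are my assumptions about how to resolve a title for an arbitrary path.

```php
<?php

namespace Drupal\graphql_example\Plugin\GraphQL\Fields;

use Drupal\graphql_core\GraphQL\FieldPluginBase;
use Symfony\Cmf\Component\Routing\RouteObjectInterface;
use Symfony\Component\HttpFoundation\Request;
use Youshido\GraphQL\Execution\ResolveInfo;

/**
 * @GraphQLField(
 *   id = "page_title",
 *   type = "String",
 *   name = "pageTitle",
 *   nullable = true,
 *   multi = false,
 *   arguments = {
 *     "path" = "String"
 *   }
 * )
 */
class PageTitle extends FieldPluginBase {

  /**
   * {@inheritdoc}
   */
  public function resolveValues($value, array $args, ResolveInfo $info) {
    // Build a request for the path argument and ask the router to match it.
    $request = Request::create($args['path']);
    $attributes = \Drupal::service('router.no_access_checks')->matchRequest($request);
    if (!empty($attributes[RouteObjectInterface::ROUTE_OBJECT])) {
      // Let the title resolver compute the title for the matched route.
      yield \Drupal::service('title_resolver')
        ->getTitle($request, $attributes[RouteObjectInterface::ROUTE_OBJECT]);
    }
  }

}
```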

Let's try again.

query {
  pageTitle(path: "/admin")
}

Way better:

{
  "data": {
    "pageTitle": "Administration"
  }
}

Congratulations, MVP satisfied - you can go home now!

Diff: Using arguments

If there wasn't this itch every developer has when the engineering senses start to tingle. Last time we stumbled on this ominous route field that also takes a path argument. And this ...

query {
  pageTitle(path: "/node/1")
  route(path: "/node/1") {
    # ...
  }
}

... smells like a low hanging fruit. There has to be a way to make the two of them work together.

Attaching fields to types

Every GraphQL field can be attached to one or more types by adding the types property to its annotation. In fact, if the property is omitted, it will default to the Root type which is the root query type and the reason our field appeared there in the first place.

We learned that the route field returns a value of type Url. So we remove the argument definition and add a types property instead.

This means the $args parameter won't receive the path value anymore. Instead, the $value parameter will be populated with the result of the route field. This is a Drupal Url object that we can already be sure is routed, since route won't return it otherwise. With this in mind, we can make the solution even simpler.
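Sketched out, the contextual variant might look like this (again an illustration, not the article's exact code; the routing calls are assumptions):

```php
<?php

namespace Drupal\graphql_example\Plugin\GraphQL\Fields;

use Drupal\Core\Url;
use Drupal\graphql_core\GraphQL\FieldPluginBase;
use Symfony\Cmf\Component\Routing\RouteObjectInterface;
use Symfony\Component\HttpFoundation\Request;
use Youshido\GraphQL\Execution\ResolveInfo;

/**
 * @GraphQLField(
 *   id = "page_title",
 *   type = "String",
 *   name = "pageTitle",
 *   nullable = true,
 *   multi = false,
 *   types = { "Url" }
 * )
 */
class PageTitle extends FieldPluginBase {

  /**
   * {@inheritdoc}
   */
  public function resolveValues($value, array $args, ResolveInfo $info) {
    // $value is the routed Url object produced by the parent `route` field.
    if ($value instanceof Url) {
      $request = Request::create($value->toString());
      $attributes = \Drupal::service('router.no_access_checks')->matchRequest($request);
      if (!empty($attributes[RouteObjectInterface::ROUTE_OBJECT])) {
        yield \Drupal::service('title_resolver')
          ->getTitle($request, $attributes[RouteObjectInterface::ROUTE_OBJECT]);
      }
    }
  }

}
```

Because the field no longer declares an `arguments` property, it only appears nested under fields that return the `Url` type.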

Now we have to adapt our query since our field is nested within another.

query {
  route(path: "/admin") {
    pageTitle
  }
}

This will also return a nested result.

{
  "data": {
    "route": {
      "pageTitle": "Administration"
    }
  }
}

The price of a more complex nested result might seem high for not having to pass the same argument twice. But there's more to what we just did. By attaching the pageTitle field to the Url type, we added it wherever the type appears. Apart from the route field this also includes link fields, menu items or breadcrumbs. And potentially every future field that will return objects of type Url.
We just turned our simple example into the Swiss Army Knife (pun intended) of page title querying.

Diff: Contextual fields

I know what you are thinking. Even an achievement of this epic scale is worthless without test coverage. And you are right. Let's add some.

Adding tests

Fortunately the GraphQL module already comes with an easy to use test base class that helps us to safeguard our achievement in no time.

First, create a tests directory in the module folder. Inside that, a directory called queries that contains one file - page_title.gql - with our test query. A lot of editors already support GraphQL files with syntax highlighting and autocompletion, that's why we moved the query payload to another file.

The test itself just has to extend GraphQLFileTestBase, add the graphql_example module to the list of modules to enable and execute the query file.
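A sketch of what that can look like. The base class name comes from the text, but the test class location, the `executeQueryFile()` helper, and the assertion shape are assumptions; check the GraphQL module's own tests for the authoritative pattern.

```php
<?php

namespace Drupal\Tests\graphql_example\Kernel;

use Drupal\graphql\Tests\GraphQLFileTestBase;

/**
 * Tests the pageTitle field.
 *
 * Assumes tests/queries/page_title.gql contains:
 *   query {
 *     route(path: "/admin") {
 *       pageTitle
 *     }
 *   }
 */
class PageTitleTest extends GraphQLFileTestBase {

  /**
   * {@inheritdoc}
   */
  public static $modules = ['graphql_example'];

  /**
   * Executes the query file and checks the resolved title.
   */
  public function testPageTitle() {
    $result = $this->executeQueryFile('page_title.gql');
    $this->assertEquals('Administration', $result['data']['route']['pageTitle']);
  }

}
```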

Diff: Adding a test


We just created a simple field, passed arguments to it, learned how to attach it to an already existing type and finally verified our work by adding a test case. Not bad for one day's work. Next time we will have a look at Types and Interfaces, and how to use them to create fields with complex results.

Aug 16 2017
Aug 16

How to install and use the module

The module now has a project page, and for another week I will host the code on GitHub before moving it over. I have also created an issue ticket, which will be the final submission link.

1. Get the module by running following commands while in the root of your Drupal installation:

cd modules

git clone -b master [email protected]:marnczarnecki/encrypt_content_client.git

2. Install required libraries:

  • Follow these instructions for building the library - remember to use the --with-ecc option and save the output file as /sites/all/libraries/sjcl.js
  • Download the library file from here and save it as /sites/all/libraries/FileSaver.js

3. Grant the following permissions:

  • encrypt content client - allows users to generate ECC keys, encrypt and decrypt content
  • encrypt content client settings - allows admins to change module’s setting including encryption policies

4. Grant the following REST resource permissions using REST UI (not enabling them should gracefully limit functionality of the module for selected users):

  • Access DELETE on Client encrypted containers resource
  • Access DELETE on Client encrypted fields resource
  • Access DELETE on ECC keys resource
  • Access GET on Client encrypted containers resource
  • Access GET on Client encrypted fields resource
  • Access GET on ECC keys resource
  • Access POST on Client encrypted containers resource
  • Access POST on Client encrypted fields resource
  • Access POST on ECC keys resource

Post-install steps and settings

  • Navigate to /client_encryption/policies and set which nodes and fields to encrypt.
  • To generate keys as the admin user, open /user/ecc.
  • Add a custom block so that users can update their private keys, which are stored locally.

Redesigned keys management page

While working on use cases and the first-use manual, I came across a few ideas on how to rework the key generation page, as this is a rather important part of my module.

Improved keys generation page


Successfully generated keys screen

Some changes worth mentioning

  • Better routing: /user/ecc, with a link added to the Tools block.

  • Users can now update their public keys in the database and private keys in localStorage using one form.

  • More basic error checking and feedback; it’s now easier to use for less tech-savvy users.

  • Users can now manually test if the keys they provided are valid (simple encryption-decryption check).

Keys generation page errors

Other various fixes

During my tests I also found several usability and functional issues with my module. Here is a list of what was fixed this week:

  • More checks for the right permissions - based on experiences from writing tests.

  • Delete encryption container and encrypted fields when a node is deleted.

  • Give feedback when executing JavaScript code (status messages).

  • A more robust way of creating nodes through REST - I am passing all of a node’s fields from PHP to JavaScript and then applying a filter that keeps only the fields that are visible on the page.

  • Following up on my issue from last week: I found a way to make POST and DELETE requests, so now I can fully test my REST resources.

  • After node creation, JavaScript redirects the user to the right location.

  • A lot of JavaScript code cleanup and error checking.

  • Moved ECC keys generation button from a separate page to the user profile.

User manual and documentation

Regarding the comment that Slurpee made, I have created a user manual that describes use cases along with easy-to-follow screenshots. This manual can be seen on the module’s project page; it will serve as the README and the module’s main-page instructions on how to install and use the module.

I have also reworked the technical documentation of my module, which is accessible here. This document is meant for people who want to contribute to the module. After I am done with the draft, I will export it as a PDF and include it in the module’s docs folder.

Plans for week 12

Aug 16 2017
Aug 16

As you may already know, the Media entity module entered Drupal 8.4 as the Media module earlier this year. This was the result of years of hard work in contrib and core space. While the module stayed conceptually the same, we used this opportunity to clean it up and refactor some things, mostly to make the APIs even easier to understand and use.

Media entity comes with the concept of so-called source plugins (also called type plugins in the past). They are responsible for everything related to a specific media type: they have knowledge about their nature, about the way they should be stored and displayed, they are aware of any business logic related to them, etc.

There were many plugins already available before Drupal core decided to adopt the module, and they mostly lived as separate modules in contrib space. Since the API changed a bit during the core transition, all of these plugins need to be updated. The process is pretty straightforward, but the number of modules that need to be worked on is quite high. This means that we'll need quite some help from the community to do this as quickly and as effectively as possible.

Here is where you come in!

Are you interested in contributing but don't know how? Are you looking for a task that is relatively simple but not completely trivial? Then the porting of media source plugins might be a really good entry point for you!

There is a meta issue that keeps an overview of the porting process. You will find the list of modules and their current status in it. In order to get familiar with the changes that were introduced during the core transition, you should check the relevant change record. All information needed for ports should be available there. If you'd rather work with examples, take a look at Media entity image and Media entity document, which were adopted into core as the Image and File source plugins, respectively.
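As an illustration only (the change record is authoritative; the module name, plugin id, and metadata attributes here are hypothetical), a ported source plugin roughly follows this shape against the Drupal 8.4 core Media API:

```php
<?php

namespace Drupal\media_entity_example\Plugin\media\Source;

use Drupal\media\MediaInterface;
use Drupal\media\MediaSourceBase;

/**
 * A hypothetical media source ported to the core API.
 *
 * @MediaSource(
 *   id = "example",
 *   label = @Translation("Example"),
 *   description = @Translation("A hypothetical media source."),
 *   allowed_field_types = {"string"}
 * )
 */
class Example extends MediaSourceBase {

  /**
   * {@inheritdoc}
   */
  public function getMetadataAttributes() {
    // Attributes this source can provide, keyed by machine name.
    return [
      'title' => $this->t('Title'),
    ];
  }

  /**
   * {@inheritdoc}
   */
  public function getMetadata(MediaInterface $media, $attribute_name) {
    // Return source-specific metadata for the requested attribute,
    // falling back to the base implementation for everything else.
    return parent::getMetadata($media, $attribute_name);
  }

}
```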

Once you have decided which module deserves your attention, check its issue queue. If there is already an issue about the port, get involved there. If there is not, create one to let others know that you are working on the port. In either case, make sure to reference it in the meta overview issue. This will help us keep a general overview of the process.

Need help?

Have you checked all the resources I mentioned above and still feel that some things are not entirely clear? Come to the #drupal-media channel on IRC. We hang out in that channel most of the time. Our weekly meetings happen in the same channel every Wednesday at 14h UTC.

Aug 16 2017
Aug 16

Migrating your site to Drupal 8 isn't simple or cheap. Nor is maintaining it or getting support once your new Drupal 8 site is live!

This is a problem that affects all organizations using Drupal, but it's particularly hard on smaller nonprofits.

A couple weeks ago, I wrote a super long article detailing how Drupal 8 has left many small nonprofits behind. It also proposes a possible path for fixing it!

We're building an Open Source platform for nonprofit websites built on Drupal 8 and CiviCRM, available as a SaaS with hosting and support included.

That article was primarily about why - in this article I'd like to talk about the details of how!

There's a lot to discuss, but I'll try to make this article shorter. :-)

Oh, and we're looking for 10 adventurous nonprofits to join the BETA and help build it.

If you join the BETA, we'll migrate your existing site to the new Drupal 8 & CiviCRM platform for FREE!

Read more to learn about all the details we've got worked out so far...

WARNING: All is in flux!

In response to our recent articles, we received dozens of comments, emails and submissions to our call for nonprofits to join the BETA. As a result, we've scheduled numerous calls with interested folks and already had about a dozen.

We started this process with a painful problem and a rough sketch of a plan to solve it.

But we want the rest of the plan to be finished in collaboration with the nonprofits in the BETA.

We did this on purpose, because we knew any first draft of any plan would be wrong, and we didn't want to get too attached to it. Too much existing software is a "solution looking for a problem" and we want to actually solve problems.

Anyway, all of that to say: whatever I write here is subject to change based on further discussions with real nonprofits and what happens during the BETA.

The BETA process

So, with that said, here's how we envision the BETA process going:

  1. Find 10 adventurous nonprofits to join the BETA
  2. Gather use cases from them to cover the critical features of all sites
  3. Build, migrate and launch each site
  4. Iterate for 12-ish months until we have something solid and generally usable
  5. Launch the first version of the SaaS "self-service plan", ending the BETA period, and moving to general availability

Throughout the BETA and beyond, all of the code for the platform will be Open Source and publicly available, or contributed back to Drupal, CiviCRM or to the other 3rd party libraries and modules used.

So, even folks who don't want to work with us commercially can use the fruits of our labor or contribute on the Open Source side, and there's no "vendor lock-in."

Our main business is support and maintenance, which I think we're pretty awesome at, but I'm biased. ;-) We're going to treat members of the BETA like we treat any support and maintenance customer, as if they were on our "Standard" plan, so, we'll answer support questions and perform an UNLIMITED number of maintenance requests.

We'll do whatever amount of initial training is necessary to get you and your team productive on the platform. (Eventually, we'll have some sort of standard training, but in the beginning we need to figure out what should be included in that!) The training will need to be done virtually.

Feature ideas

Like I said above, we're going to work with the nonprofits in the BETA to decide on the final feature set, but here's roughly what we've got in mind for the first version so far:

  • Selling memberships (where members can login and update their profile)
  • Accepting donations
  • Events (created in CiviCRM but exposed on the Drupal site) with all the usual features from CiviCRM, like: online RSVP, optional fees, event reminders/follow-ups, attendee limits, etc
  • "Page" content type
  • "News" (ie. blog) content type
  • Modern front page that can be edited in-place
  • Clean, modern, mobile-friendly theme that can be re-colored and configured for brand identity
  • Contact page with messages recorded in CiviCRM
  • Volunteer management (via CiviCRM)
  • Mass emailing (via CiviCRM)
  • ...the rest of the default feature set of CiviCRM 4.7+ on the backend. If you're curious why we've chosen CiviCRM and not a pure Drupal solution, check out the article I wrote about CiviCRM last week.

Of course, after launching the first BETA sites on the initial feature set (whatever that ends up being) we'll continue to iterate and expand the feature set based on the needs of the BETA participants.

Pricing ideas (for the future)

Like I described above, we're hoping the BETA process will last about a year.

At the end of that, we're going to make our SaaS platform for nonprofit websites (built on Drupal 8 and CiviCRM) available to anyone (what I'm calling "general availability").

We're imagining two plans:

  • A self-service plan for around $50/mo. You get a Drupal 8 & CiviCRM site, pre-setup with the basic things a nonprofit membership organization needs, and the ability to customize it yourself, with some detailed documentation on how to do so. Hosting is included and you'll get updates automatically as they come out.
  • A full-service plan for around $250/mo. You get the same as above, but additionally full support & maintenance service from our staff, similar to the Standard plan we currently provide for any Drupal site, which includes UNLIMITED requests to answer questions and make simple changes to the site. (Our Standard plan is normally $499/mo, so this is a reduced price for nonprofits.)

And, of course, since the whole platform is Open Source, you'll be able to quit at any time and take an exported version of your site with you, which you could setup in another hosting environment.

The goal is to have something that even small nonprofits could afford (like the ~$50/mo), while still giving them the full power of Drupal 8 and CiviCRM.

But, in order to get there, we need to first do the BETA which will have different pricing...

BETA pricing

Joining the BETA is a little risky (the product could flop!) but we're doing everything we can to mitigate that risk for you, and offer some incentives to make it worthwhile.

One thing we are NOT doing, though, is making the BETA completely free - after re-launching your site on Drupal 8 & CiviCRM, BETA participants will be charged $250/mo.

We feel strongly that charging for the BETA will lead to a better product: we want to make something that provides enough value that it's worth paying for. With money invested, participants are likely to be more engaged in the process and demand the things they need (rather than figuring, "eh, it's free, we can deal with it being crappy.")

However, we think this is a pretty good deal, because:

  • The monthly charges won't start until after your site is migrated and live on Drupal 8 & CiviCRM. You shouldn't have to pay until we're providing your organization with value. We will ask for a down payment on the first month to make sure your organization is serious, but if for some reason we don't re-launch your site on the new platform we're happy to refund that.
  • We'll include the same level of support & maintenance as on our Standard plan. Our main business is support & maintenance of Drupal sites, and our customers (many of whom are nonprofits) find value in our Standard plan for $499/mo - you'll get that same value for half the cost.
  • We'll migrate from your current site and CRM to Drupal 8 & CiviCRM for FREE. While there is a fixed, monthly cost, there isn't any additional charge for doing the migration. Drupal migrations are usually billed hourly and can cost tens of thousands of dollars. Even if you don't continue your relationship with us past the end of the BETA, you'll now have a site that's upgraded to Drupal 8!
  • You can quit at any time - there will be NO term minimum. We've debated this internally over the last several weeks. On the one hand, we don't want to force anyone to be our customer. But on the other hand, there's a risk that an organization will join for a month just to get their site migrated and then quit. We've decided to trust the BETA participants to act in good faith. We want participants to stay in the BETA for a year, so, if you know up front that you won't or can't - please don't join the BETA. But if later on it's not working out or something comes up and you have to quit - that's absolutely fine, you can quit at any time and we'll give you a full copy of your site. :-)
  • You'll have a lot of influence over what the product becomes. All of the initial features and the features added over the first year will be based on the needs of the BETA participants.

Of course, we understand that there are plenty of small nonprofits that can't afford $250/mo!

The ultimate goal is to have a lower priced self-service plan (around $50/mo) that smaller organizations can afford. However, we need to go through the BETA process to get there, and while we're investing a lot of our own resources to build this, this is a way to partially share the cost of initial development with our customers.

If your organization can't afford to participate in the BETA, but might be interested after the BETA period is over, please stay in touch!

What makes a good BETA participant?

Certainly, if you're interested, let us know! We're actively changing our plans based on the conversations we're having, and so even if your organization doesn't match our current criteria, we could change our criteria based on talking to you. :-)

In any case, we think that a good BETA candidate is an organization that:

  • Uses a CRM or desperately needs to start using one. CiviCRM will play a big role and so we want to work with organizations that will actively use it and get something out of it. Signs that you desperately need a CRM include using some imperfect and painful system to track your members or constituents (like spreadsheets, Microsoft Access, paper, etc) but you do it anyway because it's important to your operations.
  • Has users (volunteer or staff) with the time to use it and give feedback. Of course, no one has an abundance of time, but we need to actively engage with the BETA participants in order to build a good product. If you spend some amount of time updating your site or doing outreach to constituents on a regular basis (say, weekly or monthly), and are excited about this idea enough to complain when things could be improved, then that's exactly what we're looking for. :-)

If that sounds like your organization, please get in touch!

But if we can't find 10 nonprofits for the BETA, we're not doing it

Basically, we don't want to build something that people don't really want.

If we can't find 10 nonprofits who will take the leap with us... well, either the need must not be that great, or our plan is seriously flawed in some way.

But if there are at least 10 nonprofits willing to join despite the risks this early, there are probably many more who'd be interested later. And if we build something that works for at least 10 customers AND it's good enough to pay for, well, we probably made something pretty good.

If you're interested, please click the big green button below to...

Join the BETA or get progress updates

We'll be posting more as the project progresses, so please stay tuned!

Think this a great idea? Or, even better - think we got something terribly wrong? Leave a comment below! We're listening :-)

Aug 15 2017
Aug 15

So you just finished building an awesome new website on Drupal, but now you’ve run into a new dilemma. How do you optimize the site for search engines? Search engine optimization, or SEO, can be overwhelming, but don’t let that cause you to ignore the things you can do to help drive traffic to your website. There’s nothing worse than spending countless hours developing a web application, only to find out that users aren’t able to find your site. This can be extremely frustrating, as well as devastating if your company or business relies heavily on organic traffic.

Now there are countless philosophies of SEO, many of which are well-educated assumptions of what Google is looking for. The reality is that no one knows exactly how Google’s algorithm is calculated, and it doesn’t help when their algorithm is constantly being updated. Luckily, there are a few best practices that are accepted across the board, most of which have been confirmed by Google as being a contributing factor to search engine ranking. This blog is going to focus on a few of those best practices and which modules we have found to be helpful in both our Drupal 7 and Drupal 8 projects.

So, without further ado, here is our list of Drupal modules you should consider using on your site to help improve your SEO:

XML Sitemap Module

As the name suggests, XML Sitemap allows you to effortlessly generate a sitemap for your website. A sitemap allows Google and other search engines, like Bing and Yahoo, to easily find and crawl the pages on your site. Is a sitemap necessary? No. But if it helps the pages of your site become easily discoverable, why not reduce the risk of having pages left unindexed? This is especially important if you have a large site with hundreds or even thousands of pages. Having a sitemap also provides search engines with valuable information, such as how often a page is updated and its significance relative to other pages on your site.

XML Sitemap allows you to generate a sitemap with a click of a button, and best of all you can configure it to periodically generate a new sitemap which will add any new pages you’ve published on your Drupal site. Once your website has a sitemap, it is recommended to submit that sitemap on Google Search Console, and if you haven’t claimed your website on Google Search Console yet, I would highly advise doing so as it will provide you with helpful insight such as indexing information, critical issues, and more.

Metatag Module

The next Drupal module is one that can really help boost your search engine ranking and visibility. Metatag is a powerful module that gives you the ability to update a large number of various meta tags on your site. A meta tag is an HTML tag which contains valuable information that search engines use to determine the relevance of a page when determining search ranking. The more information available to search engines such as Google, the better your chances will be that your pages will rank well. The Metatag module allows you to easily update some of the more popular tags, such as meta description, meta content type, title tag, viewport, and more.

Adding and/or updating your meta tags is the first step of best SEO practice. I’ve come across many sites that pay little to no attention to their meta tags. Luckily, the Metatag module for Drupal can help you easily boost your SEO, and even if you don’t have time to go through and update your meta tags manually (which is recommended), the module also has a feature to have your tags automatically generated.

Real-Time SEO for Drupal Module

The Real-Time SEO for Drupal module is a powerful tool on its own, but it is even better when paired with the Metatag module we just discussed. This module takes into account many SEO best practices and gives you a real-time analysis, ensuring that your content is best optimized for search engines. It will tell you if your content is too short, how readable your posts are, and also give you a snapshot of how your page will appear in Google. The other helpful information it provides concerns missing or potentially weak tags, which is why I mentioned that this module and the Metatag module work extremely well together. Real-Time SEO for Drupal can show you how to improve your meta tags, and with the Metatag module you can quickly update your tags and watch in real time how the changes affect your SEO.

The Real-Time SEO for Drupal module is a simple, yet incredibly useful tool in helping you see the SEO health of your pages. If you are just getting into SEO, this is a great place to start, and even if you’re a seasoned pro this is a nice tool to have to remind you of any meta tags or keyword optimization opportunities you may be missing.

Google Analytics Module

The final module is the Google Analytics module. Google Analytics is by far the most widely used analytics platform. The invaluable information it provides, the numerous tools available, and the integrations it allows make it a requirement for anyone looking to improve the SEO of their Drupal website. This Drupal module is extremely convenient, as it does not require a developer to mess with any of the site’s code. After installing the module, all you have to do is enter the web property ID that is provided to you after you set up your account on Google Analytics.

From the Google Analytics module UI, you have a number of helpful options, such as which domains to track, which pages to exclude, adjusting page roles, tracking clicks and downloads, and more. The Google Analytics module for Drupal is another great tool to add to your tool belt when trying to improve your SEO.

Final Thoughts

This list of helpful SEO modules for your Drupal 7 or 8 site could easily have been much longer, but these are a few key modules to help you get started. SEO is something that should not be ignored; as I mentioned at the beginning of the blog, it’s a shame to build a site only to find that no one is actually visiting it, and using these modules properly can definitely help prevent this issue. If you would like to learn about other great modules to help your SEO, please leave a comment below and I’ll write a follow-up blog.

Aug 15 2017
Aug 15

I had this blog post written last week but didn’t publish it because it felt wrong. It wasn’t until earlier today, when I was listening to one of my favorite public speakers, that I realized it was because I was talking at you, not to you. In truth, whenever I’ve discussed this document, I keep talking about what its purpose is and how it could bring value to our Drupal community, but not why you should care. So, just for today I’ll set aside my talk of prom dresses past and my Munster life and try to focus on why the Accessibility Rights and Responsibilities Document I’ve been working on with others came to be. It’s not only about web accessibility, it's also about business. Hopefully you’ll find this document will become important to you too.

Accessibility! Accessibility! Accessibility!

Accessibility talk is everywhere, at least if you’re looking for it. From code, to legal, to Slack channels, to my talk at Design4Drupal Boston this past June; if you want to read, hear, or chat about accessibility online - it’s out there.

But there’s a problem…

You see, in Boston the night after I spoke, I was lucky enough to look around the table and find myself sharing hot peppers and nachos with project managers, writers, front-end developers, sales, and UX designers. And maybe it was the fact I was starving and inhaling nachos that had me be quiet long enough to listen, or maybe it was the enthusiasm of those with whom I sat, but I heard things I hadn’t heard before. They all wanted to grow and were looking for ways to incorporate accessibility into their everyday workflows, but there was so much confusion as to who would ultimately be responsible for what.

And as we spoke it occurred to a few of us, why would companies feel empowered to make this change if the various groups involved were not clear on who needed to do what?

-Time Out-

Let me break for a moment to say that the W3C checklist is detailed and incredibly well thought out. This is a tool developed by people far more educated in this than I and it will tell you every detail I could think of.

But we were seeing a problem…

It wasn’t translating.  

We had a table full of experienced, accomplished professionals, and it was difficult for them to sort it all out into simple business language to help their organizations make an agreement internally that “if I need help with X, I can go to team Y.” Some needed to know that as an agency acting in the role of their clients’ creative and/or technical arm, they could hand the site over to content editors who’d been educated and encouraged to continue to meet the requirements moving forward.

In short, I began to care about a document because there was a need for it.

Accessibility wasn’t accessible to the businesses who wanted to adopt it.

The next day a few of us started building a Rights and Responsibilities document. Our only goal was to meet that need and to clearly state, in common/business friendly language what teams are responsible for what when it comes to accessibility issues.

At first, we tried to take a more granular approach, but that would have steered us away from our goal. Instead, we broke out the responsibilities into two groups: the Design and Development Team, and the Content Authors/Managers.

Why? Because these were the people around the table who said they had the greatest need.

As stated in the document itself, the goal is not to determine whom we point a finger at if something is not fully accessible, but rather to help guide communication to the right source for solutions and ease businesses into making this a part of their standard operating procedure.

...ya know, “Just What We Do” and all that.

Our Rights & Responsibilities Documentation is Slowly Growing

Since those days in Boston, a few of us have continued to work on this as we have been able. Last week at Drupal GovCon I had the privilege of meeting some amazing people who also want to help. A GitHub repository has been made, and we are tentatively scheduling a Zoom meeting with a large MeetUp group to see who there wants to contribute.

At the time of this blog post, the document is not at the stage where someone will copy and paste it into their department’s operations, but it’s got a good start.

So, why should you care?

Because we all need support.

  • Technical teams need to be confident that the content authors are enabled to keep a site accessible after launch.
  • Content managers need to know that their authoring UX was set up to support them.
  • And project stakeholders need to know where to ask for help when they need it.

The days of one person sitting down to just ‘build a website’ are gone. Today building a website is about collaboration, knowing where to turn for support, and where to find the education to better ourselves.

I encourage you to read the Rights and Responsibilities document and tweet me your feedback (@dbungard).

It’s not about a document.

It’s another way for us to make this “just what we all do.”

Additional Resources
Accessibility Best Practices for Content Editors | eBook
Web Accessibility Terminology | Blog
Friday 5: 5 Ways to Incorporate Accessibility Into Your Digital Strategy | Video
Friday 5: 5 Takeaways from Design4Drupal | Video

Aug 15 2017
Aug 15

Part 3 in this series is continued from a previous post, Decoupled Drupal: A 10,000ft View.

One of the main considerations when building the POWDR website was uniformity. POWDR is a holding company composed of many separate companies, all with individual websites. In order to ease the burden on content admins, we sought a solution that avoided multiple content types for each separate site. For a holding company with so many websites to maintain, managing many content types can become really complicated really quickly. It was our job to keep content admins top of mind in order to make their job of updating the various websites as easy as possible.

Drupal Multisite for Easier Administration

The reason we ended up going with a multisite is that for each POWDR property there is a separate Drupal instance. In typical ski industry form, POWDR continues to acquire additional resorts and companies. They are constantly bringing on companies with different processes, different applications, and different third-party vendors. Many have different teams acting as admin. So, one of our first considerations was how people on the main POWDR team were going to administrate and edit all of this content.

Image showing relationship between Drupal paragraphs, parent website, and multisites

We considered doing it all in one large API site, though that plan quickly became too complicated when it came to permissions. Instead, it was decided that the project would be split into multiple sites. Acquia made this process nice and easy. Using Acquia and Drupal 8, we were able to spin up a new multisite instance within the parent Drupal instance.

After some practice, we are now able to spin up a new instance in a matter of minutes. Using Drupal 8's configuration management, we copy the configuration from a parent skeleton site into a new site. This allows the design team to start their development process with a basis on the API side, without us having to reprogram and rebuild from the ground up.

Paragraphs Makes Complex Content Manageable

Working with Hoorooh Digital, we created an overarching entity structure using paragraphs that allowed us to make a baseline unit to build upon. Each paragraph was essentially a different piece of the website. They made components within Angular line up with paragraphs on the Drupal side. If you’re not familiar with paragraphs, in Drupal 8 they’re entities in and of themselves. This was nice for us because it allowed us to load and alter them programmatically, much like any other entity on the backend. They could be rearranged and served to the frontend from any site to meet design needs.

Implementation was one of the larger challenges of the POWDR project. The difficulty arose as we matched up the frontend to the Drupal backend. Custom code was required to ingest the paragraphs in the components. If you’re thinking about taking on this project, be sure to consider this step during the estimation process. In our experience, a good portion of the frontend development was required to render frontend components. We took the time to decide how componentry and paragraphs would be ingested from the Drupal platform, then matched up with the frontend framework. This allowed us to standardize all of the content coming out of the API so that frontends wouldn’t have to be rewritten for every site.

D8 and JSON REST API Decrease Development Time

The real power here was that, out-of-the-box, Drupal 8 has a JSON REST API. We took that and ran with it. We realized early on that the Angular frontend and the out-of-the-box JSON API were going to require a lot of work to play well together. Instead of sacrificing that time, we extended the JSON encoder class in Drupal 8 and created our own POWDR-format JSON encoder. This allowed us to create a serializer service and a set of custom entity normalizers. We then added related entities and some custom processing to meet the frontend's needs. Out-of-the-box, the JSON API is built so that you’re requesting each related entity down the line: you get an entity ID, and then you make another call to the API to get the content of that entity.

Essentially, what we did by extending the JSON encoder and all the entity normalizers was create an entity reference class. Using this structure, we were able to load related entities, such as paragraphs and media, on the same parent node, enabling the JSON encoder and the entity normalizers to serve the related entities as pieces of the same API call. This gave POWDR the ability to create pages in much the same structure that they’d be using on the frontend. The content admin sees a structure similar to the frontend and their API calls: POWDR is building pages on the backend in much the same way that they come out on the frontend. This eliminates a lot of extraneous API calls.
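To make the difference concrete, here is an illustrative sketch contrasting the two response shapes. The field names and values are hypothetical, not POWDR's actual payloads:

```python
# Illustrative only: hypothetical payloads, not POWDR's actual API format.

# Default style: related entities come back as IDs, forcing follow-up calls.
reference_style = {
    "nid": 1,
    "title": "Trail Map",
    "field_paragraphs": [{"target_id": 7}, {"target_id": 9}],
}

# Embedded style: the custom encoder serves related entities in the same call.
embedded_style = {
    "nid": 1,
    "title": "Trail Map",
    "field_paragraphs": [
        {"id": 7, "type": "hero", "heading": "Welcome"},
        {"id": 9, "type": "gallery", "images": ["a.jpg", "b.jpg"]},
    ],
}

def requests_needed(node):
    # 1 call for the node itself, plus 1 per unresolved entity reference
    unresolved = [p for p in node["field_paragraphs"] if "target_id" in p]
    return 1 + len(unresolved)

print(requests_needed(reference_style))  # → 3
print(requests_needed(embedded_style))   # → 1
```

With embedding, one request carries everything the frontend needs to render the page.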

One of the great things about Drupal 8 is that it is built on Symfony and incorporates a lot of modern PHP concepts, which helped our development of this custom API move quickly. With Drupal 6/7, we would have had to build from the ground up and then figure out how the API was going to behave. Instead, we just extended a few classes and, in a matter of days, had at least a working model for the design team to work from.

Overall, development was much faster for this project. Since everything was an entity, the backend API could load taxonomies, media, and paragraphs in the same way, and they all looked the same. This meant the design team could be presented something that is agnostic to the backend functionality but still utilizes Drupal’s media power.

To Be Continued...

In the next post of this series, our hosting partner, Acquia, will cover the ins and outs of the POWDR project’s frontend design. Stay tuned!

Aug 15 2017
Aug 15

In part two of our Webform tutorial, we’ll show you how to create multipage forms, apply conditional logic, create layouts and much more!

We’ll take the simple newsletter signup form created in part one of this tutorial and add additional pages. Then we’ll demonstrate how to show or hide an element depending on the selection made on another element. We’ll also look at layouts and then finish off with an overview of some of the other great features Webform has to offer.

Multipage Forms

For forms with many elements, it’s best to spread them across two or more pages. In this section, we’ll take the form we created in part one and move some of the elements to make a two page form. We’ll also add a preview page and make changes to the confirmation screen.

1. Starting from the Edit tab of the Webform created in part one, click on “Add page”.

Screenshot highlighting the Add page button which is above the first element.

2. Give the first page a title of “Your details”.

3. If you want to change the default “Previous page” and “Next page” text then you can do this in the “Page settings” section. We’ll stick with the defaults.

4. Click on Save to create the page.

5. Repeat the process to create a page called Feedback.

Screenshot showing the two new pages added at the bottom of the Webform elements.

6. On the Edit tab, drag the “Your details” page to the top.

7. Drag the “First name” and Email elements to the right a little so they are indented as shown below.

8. Drag the Feedback page above the checkboxes.

9. Drag the checkboxes and radio buttons to the right so they are also indented.

10. Click on “Save elements”.

Screenshot showing the pages moved to the correct places, as discussed in the text.

Clicking on the View tab will reveal a multipage form. You’ll see the page names on the progress bar at the top of the form. You can remove the progress bar in the form settings if you prefer.

Screenshot of the multipage Webform.

Preview and Submission Complete Pages

For long forms, it can be useful for users to preview the information before submitting it. Also, the default message a user receives after clicking on submit is “New submission added to Newsletter signup” or similar and so changing the message is normally a good idea. We can make both of these changes from the Settings tab for our Webform.

1. From the Edit tab of our form, click on the Settings sub-tab.

Screenshot highlighting the Settings sub-tab which is under the main Edit tab for a Webform.

2. Scroll down to the “Preview settings” section. The Optional radio button will allow users to skip the preview screen. We’ll select Required, so that users will always preview the information before submitting it. You can alter various aspects of the preview page but we’ll stick with the defaults.

3. Scroll further down the page to the “Confirmation settings” section.

4. It’s worth reading through the options under “Confirmation type”. We’ll stick with the default of Page as this will work well for our simple example.

5. For “Confirmation title”, enter “Newsletter signup successful”.

6. Enter “Thank you, [webform-submission:values:first_name]. You have signed up to our newsletter.” for the “Confirmation message”.

Screenshot of confirmation settings

7. Scroll down to the bottom of the page and click on Save.

When the “First name” element was initially added to the Webform, a key of first_name was created and we’ve used this in our confirmation message. You could also use the information from other form elements by replacing first_name with the appropriate key.

You can find the key listed on the Edit tab of the form, under the Key column, although that column may be hidden on smaller screens. Also, if you edit an element, you’ll see the key shown in small text to the right of the title.
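As a rough illustration of what the token system does with that key, here is a toy stand-in (not Drupal's actual token API):

```python
import re

def replace_tokens(text, values):
    # Toy stand-in for Drupal's token replacement; the real system
    # supports many more token namespaces and formats.
    pattern = r"\[webform-submission:values:([a-z0-9_]+)\]"
    return re.sub(pattern,
                  lambda m: str(values.get(m.group(1), m.group(0))),
                  text)

message = "Thank you, [webform-submission:values:first_name]. You have signed up to our newsletter."
print(replace_tokens(message, {"first_name": "Ada"}))
# → Thank you, Ada. You have signed up to our newsletter.
```

Swapping `first_name` for any other element key substitutes that element's submitted value instead.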

Now if you click on the View tab and fill in the information on the form, you’ll have a preview screen. After signup, you’ll also have a personalized message.

Screenshot showing that the first name entered on the form becomes part of the confirmation message.

Conditional Logic

On page two of our wizard, we have a question asking about interests and then another specifically about JavaScript. Ideally, we only want to show the JavaScript question if the user has expressed an interest in it. This is where conditional logic helps. We can set the second question to respond to the results of the first.

1. From the Edit tab of our form, click on the Edit button for “Which JavaScript framework are you most interested in?”.

2. Scroll down to the “Conditional logic” section.

3. Change the State to Visible.

4. Select “JavaScript [Checkboxes]” under “Element/Selector”.

5. Under Trigger, select Checked.

Screenshot showing conditional logic being added.

6. Click on Save.

Now if you view the second page of the form, you won’t see the question about JavaScript frameworks unless you have selected the JavaScript checkbox.

Screenshot showing that the JavaScript library question only appears if the JavaScript checkbox in the first question is checked.

Conditional logic can be used to show or hide elements, disable them or make them required, depending on the state of other elements. It’s always worth testing that the logic performs as you expect it to, especially for complex forms.
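Conceptually, each rule configured above is a state/element/trigger triple. The following toy model (illustrative only, not Webform's implementation) shows how the JavaScript framework question's visibility follows from the checkbox state:

```python
def is_visible(rules, form_state):
    # Toy model: an element with a "visible" rule is shown only when
    # every trigger condition holds.
    for rule in rules:
        if rule["state"] == "visible" and rule["trigger"] == "checked":
            if not form_state.get(rule["element"], False):
                return False
    return True

# The JavaScript framework question from the tutorial
framework_question = [
    {"state": "visible", "element": "javascript", "trigger": "checked"},
]

print(is_visible(framework_question, {"javascript": True}))   # → True
print(is_visible(framework_question, {"javascript": False}))  # → False
```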

Displaying Webforms

In this section, we’ll show how to change the default URL. We’ll also demonstrate how to attach a Webform to a node and how to display it in a block.

Changing the URL

By default, Webforms have a URL of “/form/name-of-form”, so in our case it’s “/form/newsletter-signup”. You can change the form part of the URL to another word for all forms within the global Webform Settings tab (administrative toolbar, Structure, Webforms). Instead of doing that, we’ll add an alias for our form.

1. From the Edit screen of the form, click on the Settings sub-tab.

2. Scroll down to the “URL path settings” section.

3. Here you can add URL aliases. We’re going to use “/signup” for the first box and “/signup-complete” for the second.

Screenshot showing the URL path settings being added.

4. Click on Save.

Now, both the form and the confirmation page will have a shorter URL.

Attaching a Webform to a Node

Webform also allows you to attach a form to a node. In this example, we’ll attach the form to the “Basic page” content type.

1. From the administrative toolbar, click on Structure and then “Content types”.

2. Click on “Manage fields” for “Basic page”.

3. Click on “Add field”.

4. Under “Add a new field”, select Webform.

5. Give the field a label, such as “Newsletter signup”.

6. Click on “Save and continue” and then “Save field settings” on the next screen.

7. You should now be on the Edit tab for the new field. In the “Default value” section, select “Newsletter signup” from the list.

8. Click on “Save settings”.

As with any field, you’ll be able to adjust its position relative to other fields, so you can move the Webform to any part of the node.

Now when you create new content using the “Basic page” content type, you’ll have the newsletter signup Webform attached.

Screenshot showing a node with a Webform attached.

Displaying a Webform in a Block

Another option for displaying a Webform is to create a block. This offers flexibility on where the Webform can be placed on the page.

1. Navigate to Structure on the administrative toolbar, and then “Block layout”.

2. Next to the appropriate region of your theme, click on “Place block”. We’re going to add the block to “Sidebar second” for our Bartik theme.

3. Find Webform in the list and click on “Place block” next to it.

4. Change the title to “Newsletter signup”.

5. Under Webform, type News and select “Newsletter signup” when it appears.

6. In the Visibility section, adjust the settings as you would with any block. We’re going to enter “/signup*” for “Hide for the listed pages” on the Pages tab, so the block will be hidden on the “/signup” and “/signup-complete” pages.

7. Click on “Save block” to complete the process.

The signup form now appears in a block on the right of our screen for most pages.

Screenshot of the Webform in a block.

Note that Webform adds another tab to the Visibility section for block configuration. This allows you to select which Webforms the block should be displayed on.
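The trailing asterisk in “/signup*” is a wildcard. Drupal's page-visibility matcher is its own implementation, but Python's `fnmatch` module gives a rough illustration of how this simple pattern behaves:

```python
from fnmatch import fnmatchcase

# Rough illustration only; Drupal's page-visibility matcher is its own code.
pattern = "/signup*"

print(fnmatchcase("/signup", pattern))           # → True
print(fnmatchcase("/signup-complete", pattern))  # → True
print(fnmatchcase("/contact", pattern))          # → False
```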

Creating Layouts

To simplify laying out elements on a page, Webform includes a variety of containers including divs, expandable details and fieldsets. If you add a new element, you’ll see all the containers listed together.

Screenshot listing the containers - Container, Details, Fieldset, Flexbox layout, Item, Label.

In this section, we’ll look at the Flexbox container and use it on the second page of the form. This will allow the two questions to sit side-by-side on a large screen, but they’ll automatically be vertically stacked on smaller screens.

1. From the Edit screen of our Webform, click on “Add element”.

2. Find “Flexbox layout” in the list and click on “Add element” on the same line.

3. Give the element a key, such as newsletter_interests.

4. The defaults will work fine for this example, so click Save to create the container.

5. Drag the new Flexbox layout element, so that it’s just below Feedback and make sure it’s indented.

6. The checkboxes and radio buttons should now be below the Flexbox layout element. Move them both to the right so they are further indented.

Screenshot showing flexbox layout with indented questions underneath.

7. Click on “Save elements” to complete the process.

Now when you fill in page two of the form, when the JavaScript box is ticked, the second question will appear to the right on large screens. If there isn’t enough room for the questions to be side-by-side then the second question will drop down below the first.

Screenshot showing two questions side-by-side.

This is just a simple example of what’s possible with layouts. Later in this tutorial, we’ll install the Webform Examples module and the “Example: Layout: Flexbox” form shows how many different elements can be displayed across a page.

Even More Features

We could carry on writing about the Webform module for weeks as it includes so many great features. In this section, we’ll give a brief overview of some other features that are definitely worth looking at.

Reducing SPAM

Any form on the internet will be a target for spammers so it’s essential to have systems in place to reduce this to a minimum. Webform works with the spam protection modules Antibot, CAPTCHA and Honeypot and using a combination of these should help cut down on unwanted messages.

Head to Structure on the administrative toolbar and then Webforms. Click on the Add-ons tab and then scroll down to the “Spam protection” section and find the links to each of the modules.

Once installed, to configure Antibot and Honeypot, click on Webform’s global Settings tab. Then expand “Third party settings” within the “Webform settings” section. For CAPTCHA, there is an element that can be added to any Webform.


Editing the YAML Source

The Edit tab on a Webform has a “Source (YAML)” sub-tab which exposes the underlying YAML markup. This allows you to copy code to another form, add more elements and make changes to forms. For forms that use a lot of similar elements, copying and pasting with the appropriate changes can be a lot quicker than manually adding each element.

In the code below, which is the first section of our YAML markup, we’ve added a Surname text field by copying the markup for first_name and editing it. We’ve also changed the title for the second page from Feedback to Interests.

your_details:
  '#type': wizard_page
  '#title': 'Your details'
  first_name:
    '#type': textfield
    '#title': 'First name'
    '#required': true
  surname:
    '#type': textfield
    '#title': 'Surname'
    '#required': true
  email:
    '#type': email
    '#title': Email
    '#required': true
feedback:
  '#type': wizard_page
  '#title': Interests

Saving the form and clicking on the View tab shows the new form element in place and the new name for the second page of our form.

Screenshot showing that Surname has been added and the second page is now called Interests.

If you’ve not used YAML before then be very careful with spaces. When items are nested then always use two spaces to indent. Thankfully the interface will point out any lines that have been incorrectly formatted. The screenshot below shows what happens when you add an extra space.

Screenshot showing that there is an indentation issue near surname in the YAML file.
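A quick way to spot this class of mistake is to check that every line's leading-space count is a multiple of two. This toy checker (not Webform's actual validator) flags the first offending line:

```python
def first_bad_indent(yaml_text):
    # Toy sanity check: YAML here nests in two-space steps, so flag the
    # first line whose leading-space count is odd.
    for lineno, line in enumerate(yaml_text.splitlines(), start=1):
        body = line.lstrip(" ")
        if body and (len(line) - len(body)) % 2:
            return lineno
    return 0  # 0 means no obviously broken indentation

good = "surname:\n  '#type': textfield"
bad = "surname:\n   '#type': textfield"   # three leading spaces, not two

print(first_bad_indent(good))  # → 0
print(first_bad_indent(bad))   # → 2
```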

Note that some changes made to the YAML markup will require you to remove data first. For example, if we had also changed the key for the Feedback page, which is shown as feedback: in the YAML code above, then we would have needed to clear submissions or delete the page in the UI and then re-create it.

You can find out more about exporting and importing Webforms using YAML in this video.


Debugging

To help track down issues, you can enable debugging for a form. Start off at the Edit tab of the form and click on the “Emails / Handlers” sub-tab. Then you just need to click on “Add handler” and follow through the screens to add a Debug handler. The screenshot below shows the type of information that will be displayed as you move through a form.

Screenshot showing debugging output with keys and values entered for each element.

The Examples Module

To get an idea of the capabilities of Webform, it’s a good idea to look at the Webform Examples module. You can enable this from the Extend tab of the administrative toolbar or by using Drush with the following command:

drush en webform_examples

This will install many Webforms that demonstrate different aspects of the module.

Screenshot listing the nine example Webforms available.

The “Example: Style Guide” is a good starting point as it shows all the different elements and also has some photos of cute kittens.

Settings, Modules and Add-ons

If you have been following along with this tutorial, you will have seen a huge array of settings. It’s worth spending some time looking through all the global settings available for Webform as well as the settings for individual Webforms and for different elements. These are just some of the settings available:

Screenshot listing some of the many Webform settings.

Webform includes a number of modules including starter templates and dev tools and you can view these on the Extend tab of the administrative toolbar by filtering using the word Webform. If you need to extend the functionality of Webform further, then the first place to look is the Add-ons tab.


Summary

In this part of the tutorial, we’ve looked at multipage forms and shown how to display Webforms in a variety of ways. We’ve used conditional logic to show or hide an element depending on the state of another element. We’ve also given an overview of some of the other great features included in the Webform module.


FAQ

Q: Is there an online demo of Webform?
You can test the features of Webform on

Aug 15 2017
Aug 15

The Mediacurrent team is excited to be supporting the inaugural Decoupled Dev Days event in NYC this week (August 19-20) as organizers and sponsors! We hope you’ll be joining us, however, we know how busy summer weekends can be so we will be sharing session recordings after the event.

Why Decoupled?

That’s simple - decoupled is an important conversation for any business wanting to separate their front end and back end interfaces. It opens the door to a variety of potential programming languages and design philosophies to accomplish business goals.

Our very own Matt Davis, along with Acquia’s Preston So, will be kicking off the event with Opening Remarks, offering insight into why this event was started and the future of decoupled. Don’t be late!

Angular and Drupal: A Compelling Combo

Speakers: Matt Davis and Stephen Fluin, Google

The goal of this session is to provide an overview of the options building with Angular will open to you and your users. Attendees should expect to walk away with greater understanding of the capabilities and best practices of the framework and surrounding ecosystem, as well as new ideas about how to incorporate it into their next Drupal project.

Using JSON Web Tokens (JWT) for REST authentication

Speaker: Edward Chan

This session will provide an introduction to JSON Web Tokens (JWT), their advantages over other authentication methods, and how to use them to authenticate requests to Drupal REST resources. After this session, attendees will have a better understanding of how JWTs work and will be able to set up and use JWT for authenticating REST requests in Drupal.
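For context, a JWT is three base64url-encoded segments (header, payload, signature) joined by dots. The sketch below assembles an HS256 token from scratch to show that structure; it is illustrative only, as the Drupal JWT module and its underlying library handle this for you:

```python
import base64
import hashlib
import hmac
import json

def b64url(raw: bytes) -> str:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def make_jwt(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = ".".join(
        b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, payload)
    )
    signature = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)

token = make_jwt({"sub": "drupal-user-1"}, b"shared-secret")
print(token.count("."))  # → 2 (header.payload.signature)
```

The server verifies a token by recomputing the signature over the first two segments with the shared secret and comparing it to the third.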

Matt will take the stage again at the end of the day for Closing Remarks. Be sure to stick around to hear stats from the weekend and then head over to the Happy Hour at the Ace Hotel NY.

You can see the full program and all sessions on the event website.

If you are attending and snap an awesome picture of one of the Team MC presenters, share it with us (@mediacurrent) on your favorite social platform: Facebook, Twitter, Instagram or LinkedIn.

Aug 15 2017
Aug 15

Creating unique content is a direct way both to Google’s and your customers’ hearts, and Drupal 8 is making the road much smoother and the journey more enjoyable! Creating interactive HTML5 content and content modelling in Drupal 8 are just a couple of examples we’ve already described. So today, let’s take a glimpse at the Entity Browser, one of the modules from Drupal 8’s collection of media management tools.

Well, saying “one” would probably mean diminishing its capabilities, because the Entity Browser has a whole ecosystem around it and serves as the base for creating many cool browsers. It’s like a large box of treasures, and we will now unbox some of them.

How Drupal 8’s Entity Browser makes content editors happy

Great times have come for content editors (and not only for them), since the appearance of Entity Browser, an incredibly flexible tool for handling (browsing, selecting, creating etc.) entities. With it, it’s possible to drag-and-drop multiple images at once, reorder or remove them, reuse them for other articles, upload more images in the process, easily search for related content by various criteria, create and add quotes without opening a new tab, create another piece of content without leaving the original form, embed entities into WYSIWYG, and much more.

Plugins for the Entity Browser

Entity Browser relies on Drupal 8 core plugins to do its work. The heart of Entity Browser is the Widget plugin, which is responsible for selecting and creating entities. The Widget Selector plugin deals with the options for switching between widgets, while the Selection Display plugin works with the ways the selected entities are displayed. Finally, the Display plugin determines how the Entity Browser will look.

Configuring Drupal 8’s Entity Browser

The Entity Browser’s flexibility lets it be literally whatever you want it to be and provide the perfect entity handling workflow for you. With your Entity Browser module installed successfully (as well as the Ctools module, which is needed just for this process), go to Configuration — Content authoring — Entity browsers and click the “Add Entity Browser” button. You can give it a name and then shape it to your liking.

1. On the same page, select the general features of the Entity Browser by configuring the necessary plugins.

  • The display type of your entity browser can be: a standalone form, an iFrame container, or a modal window.
  • The available widgets can be presented as: a dropdown menu, a single widget or horizontal tabs.
  • The options to show the entity selection area to the editors are: a preconfigured view, no selection display, or multi-step selection display.

2-3-4. Configure the details of your display type, widget selector type, and entity selection area.

The next three steps involve a more detailed configuration (sizes, styles, auto-opening etc.) of the plugins you have selected above. This will not take long — some of them even require no further configuration.

5. Add the widgets to your browser.

Equip your content editors with as many powers as possible. The available widget plugins are: “View”, “Upload,” “Entity form,” and “Upload images.”

Congrats — your Entity browser is created!

Ready browsers to choose from

You can also pick one of the very nice pre-configured browsers created on the basis of the Entity Browser. They are contributed Drupal modules you also need to download and install: Media Entity Browser, File Entity Browser, Content Browser, Entity Browser Enhance(d|r), and Slick Browser.

Thunder and Lightning

There are also full Drupal distributions using the Entity Browser. They are called Thunder and Lightning and are meant to provide an improved editing experience.

Entity Browser and Inline Entity Form

The Entity Browser’s incredible “friendliness” lets it interact well with a great number of other tools. Among them, we would like to mention one we already discussed — the Inline Entity Form module in Drupal 8.

Entity Browser and Inline Entity Form are used together for creating new entities and browsing existing ones.

This is just a glimpse at the Entity Browser module, but its capabilities are endless. Moreover, the future looks very bright for the concept of media handling due to Drupal 8’s media initiative. To get help with configuring the Entity Browser module, building custom features, or migrating to Drupal 8 with all its innovations, you are welcome to contact our developers.

Aug 15 2017
Aug 15

Drupal Modules: The One Percent — Pagerer (video tutorial)

[embedded content]

Episode 31

Here is where we bring awareness to Drupal modules running on less than 1% of reporting sites. Today we'll consider Pagerer, a module which offers many options when customizing your pagers.

Aug 15 2017
Aug 15

Mollie provides a facade for several payment methods (credit card, debit card, PayPal, SEPA, Bitcoin, ...) with support for various languages and frameworks.

In some cases, you may decide to use the Payment module instead of the full Commerce distribution.
This tutorial describes how to create a product as a node and process payments with Mollie, via configuration only. A possible use case is an existing Drupal 8 site that just needs to offer a few products (like a membership, ...).

composer require drupal/mollie

Composer will install the Mollie API client library for you into the vendor directory, so no need to download it into the libraries directory.

Enable the module, it will also enable the Plugin, Currency and Payment modules.

drush en mollie_payment

Enable the Payment Form module (that is shipped with the Payment module).

drush en payment_form

At the time of writing, you will need to use the Payment dev release.

composer require drupal/payment:2.x-dev

Then patch the line items AJAX issue.

cd LOCAL_PATH/modules/contrib/payment
curl -O
patch -p1 < fix_ajax-1.patch

For deploying in other environments, make sure that your main composer.json file contains the patch in its extra section.
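Note that a patches section in composer.json is only honored if the cweagans/composer-patches plugin is installed; if your project template does not already include it (many Drupal project templates do), add it first:

```shell
composer require cweagans/composer-patches
```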

"patches": {
        "drupal/payment": {
            "Ajax broken in payment_form submodule": ""


If this error is triggered while viewing a line item:

Call to undefined function Drupal\payment\Entity\bcadd() in Drupal\payment\Entity\Payment->getAmount() 

or this one while importing a currency:

Error: Call to undefined function Commercie\Currency\bcdiv() in Commercie\Currency\Currency->getRoundingStep()

bcadd or bcdiv is missing, so install the bcmath PHP extension.

Composer should warn you about the missing ext-bcmath, but in some situations it can happen that the modules are being required properly but the error occurs at runtime (e.g. your command line php version is 7.0 and your Apache vhost version is 7.1, with a bcmath enabled for 7.0 but not for 7.1).

It can be easily fixed; here we are using PHP 7.1 on our vhost:

sudo apt-get install php7.1-bcmath
# restart your server, here we assume Apache
sudo service apache2 restart


Create a Mollie account

After signing up, you will immediately receive a Test API key and a preconfigured Test profile that you can adapt to your configuration.

Mollie test profile

Define the site URL under Contact information. Then define the desired Payment methods.

Mollie payment methods


Configure the Mollie Payment module

Head to /admin/config/services/payment/mollie (or click on Configuration > Web Services > Payment > Mollie Payment).

Create a Mollie profile

Mollie add profile under Drupal

Then test the connection under /admin/config/services/payment/mollie/connection-test

Mollie Drupal connection test


Configure the Payment module

Add the Mollie payment method under /admin/config/services/payment/method

Mollie Drupal payment method

Configure the Currency module

Add or import a currency, preferably import it. Here we will use EUR.

Drupal currency, add or import

Drupal currency, import

If you want to add a currency manually, refer to the following documentation to get the ISO 4217:2015 reference:


Create a product content type

Add product content type

Then add a Payment Form field and configure it.

Add payment form field

Just set the currency if desired.

Product form currency


Create a product

Create a product

And voilà, here is the result

View product

Mollie payment page



Aug 15 2017
Aug 15

This year's European DrupalCon will take place in Vienna, Austria. It's still more than a month away; however, the sessions have already been selected. We will look at the ones that were accepted in the business track, and we will also explain why.

DrupalCon Vienna is one of the biggest Drupal events in the world this year. Therefore, some of our team members will be present at the event in the capital city of Austria. But once again, our AGILEDROP team will not just be present at the event. We had a »bigger« role.

Namely, our commercial director Iztok Smolic was invited to join the Business track team. Together with Janne Kalliola (CEO of Exove) and Stella Power (CEO of Annertech), he prepared the program and selected the sessions. And that is the reason why we are presenting them.


Business session


Many business sessions were proposed, so the decision was tough. It was expected to be tough, of course. But after some thought and discussions among the business track team, in the end, these Business sessions for DrupalCon Vienna were accepted:

1) Aligning your customers and product success. by Evelien Schut from GoalGorilla

2) Better together, a client/agency relationship based on trust and value by Andrii Podanenko and Alexander Schedrov from FFW

3) Challenges and Solutions in Getting your Open Source Company to Contribute by Chris Jansen from Deeson and Jeffrey A. "jam" McGuire from Acquia

4) Co-operative Drupal: Growth & Sustainability through Worker Ownership by Finn Lewis from Agile Collective

5) Content management market and Drupal by Nemanja Drobnjak from WONDROUS LLC

6) Creating business value with Drupal by Baddý Breidert from 1xINTERNET


Iztok session


7) Drupal Enterprise Marketing as a Global Business Alliance by Ivo Radulovski from Trio-interactive

8) How to go from one to seven companies around the world and how to run them by Michael Schmid and Dania Gerhardt from Amazee Labs

9) Is Selling Drupal an Art or a Science? by Michel van Velde from One Shoe

10) Marketing and Selling the Drupal Commerce Ecosystem by Ryan Szrama from Commerce Guys

11) Move up the value chain: DISCOVER, DEFINE, DESIGN, DELIVER, DISTRIBUTE (MAINTAIN, GROW & MEASURE) by Lukas Fischer and Michi Mauch from NETNODE

12) Observations from the Peanut Gallery. Confessions of a non-Technical Drupalist by Tom Erickson from Acquia

13) Teaching Clients How to Succeed by Ken Rickard from

14) Using Drupal 8 to build transactional & business critical enterprise applications by Maxime Topolov from Adyax

We hope you find something to your taste if you will be present at the event. In case you won't be, you will have to wait for the sessions to be published on YouTube.

Aug 14 2017
Aug 14

In Lightning 2.1.7, we’re finally answering a long-standing question: if I’m managing my code base with Composer, how can I bring front-end JavaScript libraries into my site?

This has long been a tricky issue. Drupal.org doesn’t really provide an official solution -- modules that require JavaScript libraries usually include instructions for downloading and extracting said libraries yourself. Libraries API can help in some cases; distributions are allowed to ship certain libraries. But if you’re building your site with Composer, you’ve been more or less on your own.

Now, the Lightning team has decided to add support for Asset Packagist. This useful repository acts as a bridge between Composer and the popular NPM and Bower repositories, which catalog thousands of useful front-end and JavaScript packages. When you have Asset Packagist enabled in a Composer project, you can install a Bower package like this (using Dropzone as an example):

$ composer require bower-asset/dropzone

And you can install an NPM package just as easily:

$ composer require npm-asset/dropzone

To use Asset Packagist in your project, merge the following into your composer.json:

"repositories": [
    "type: "composer",
    "url": ""

Presto! You can now add Bower and NPM packages to your project as if they were normal PHP packages. Yay! However...

Normally, asset packages will be installed in the vendor directory, like any other Composer package. This probably isn’t what you want to do with a front-end JavaScript library, though -- luckily, there is a special plugin you can use to install the libraries in the right place. Note that you’ll need Composer 1.5 (recently released) or later for this to work; run composer self-update if you're using an older version of Composer.

Now, add the plugin as a dependency:

$ composer require oomphinc/composer-installers-extender

Then merge the following into your composer.json:

"extra": {
  "installer-types": [
  "installer-paths": {
    "path/to/docroot/libraries/{$name}": [

Now, when you install a Bower or NPM package, it will be placed in docroot/libraries/NAME_OF_PACKAGE. Boo-yah!

Let's face it -- if you're using Composer to manage your Drupal code base and you want to add some JavaScript libraries, Asset Packagist rocks your socks around the block.

BUT! Note that this -- adding front-end libraries to a browser-based application -- is really the only use case for which Asset Packagist is appropriate. If you're writing a JavaScript app for Node, you should use NPM or Yarn, not Composer! Asset Packagist isn't meant to replace NPM or Bower, and it doesn't necessarily resolve dependencies the same way they do. So use this power wisely and well!
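Putting the pieces together, the relevant parts of a project-level composer.json might look like this minimal sketch (the version constraints and docroot path are assumptions to adapt to your project; the repository URL is Asset Packagist's public endpoint):

```json
{
    "repositories": [
        {
            "type": "composer",
            "url": "https://asset-packagist.org"
        }
    ],
    "require": {
        "oomphinc/composer-installers-extender": "^1.1",
        "bower-asset/dropzone": "^5.1"
    },
    "extra": {
        "installer-types": ["bower-asset", "npm-asset"],
        "installer-paths": {
            "docroot/libraries/{$name}": ["type:bower-asset", "type:npm-asset"]
        }
    }
}
```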

P.S. Lightning 2.1.7 includes a script which can help set up your project's composer.json to use Asset Packagist. To run this script, switch into the Lightning profile directory and run:

$ composer run enable-asset-packagist
Aug 14 2017
Aug 14

As an Acquia Preferred Partner, we are thrilled to have been ranked amongst the world’s most innovative websites and digital experiences. Elevated Third received recognition in the Nonprofit, Brand Experience, Financial Services, Digital Experience, and Community categories of the 2017 Acquia Engage Awards.

The Acquia Engage Awards recognize the amazing sites and digital experiences that organizations are building with the Acquia Platform. Nominations that demonstrated an advanced level of visual design, functionality, integration and overall experience have advanced to the finalist round, where an outside panel of experts will select the winning projects.

Winners will be announced at Acquia Engage in Boston from October 16-18, of which we are sponsors.  

“Acquia’s partners and customers are setting the benchmark for orchestrating the customer journey and driving the future of digital. Organizations are mastering the art of making every interaction personal and meaningful, and creating engaging, elegant solutions that extend beyond the browser,” said Joe Wykes, senior vice president, global channels, and commerce at Acquia. “We’re laying the foundation to help our partners and customers achieve their greatest ambitions and grow their digital capabilities long into the future. We’re inspired by the nominees and impact of their amazing collective work.”

Check out our competition! The full list of finalists for the 2017 Acquia Engage Awards is posted here.

Aug 14 2017
Aug 14

Templates and tasks make up the basic building blocks of a Maestro workflow.  Maestro requires a workflow template to be created by an administrator.  When called upon to do so, Maestro will put the template into "production" and will follow the logic in the template until completion.  The definitions of in-production and template are important as they are the defining points for important jargon in Maestro.  Simply put, templates are the workflow patterns that define logic, flow and variables.  Processes are templates that are being executed which then have process variables and assigned tasks in a queue.

Once created, a workflow template allows the Maestro engine to follow a predefined set of steps in order to automate your business process.  When put into production, the template's tasks are executed by the Maestro engine or end users in your system.  This blog post defines what templates and tasks are, and some of the terms associated with them.

Templates define the logical progression of a workflow pattern from a single start point to one or more end points.  Templates are stored in Drupal 8 as config entities provided by the maestro module and are managed through the maestro_template_builder module.  A Maestro template defines a few static and non-deletable elements:

Template machine name:  The machine name of the template is initially derived from the template human-readable label, however, you can edit the machine name to suit your requirements.

Template Canvas height and width:  The height and width, in pixels, of the template as shown in the template editor.  

"initiator" template variable:  The initiator variable appears once a new template has been saved.  You are unable to remove the initiator variable.  The initiator variable is set by the workflow engine when a template is put into production and is set to the user ID of the person starting the workflow.  The initiator variable is helpful in using to assign tasks back to the individual who kicked off a process.  You are able to edit/alter the initiator's value via the Maestro API.

"entity_identifiers" variable:  The entity_identifiers variable also appears once a new template has been saved.  You are also unable to remove the entity_identifiers variable.  entity_identifiers is used to store any entities used by the workflow in a specific format.  As an example, the Content Type Task uses the entity_identifiers variable as a means to store the unique IDs of content created and also to fetch that content for later use.  The format of the variable is as follows:    type:unique_identifier:ID,type:unique_identifier:ID,...  Where 'type' is the type of entity.  For content type tasks, this is set as the actual content type machine name (e.g. article).  'unique_identifier' is used to give each piece of content a unique ID used in the engine and task console to pick off which of the entities it should be actioning upon.  'ID' is the actual unique ID of the entity where in the Content Type Task's case, is the node ID.  While this may sound confusing, it's simply a list of entities which are used in the workflow.  As a workflow designer, you do not have to use the entity_identifiers to store unique IDs -- you can create and use variables as you see fit.

The template variable editor showing initiator, entity_identifiers and a third variable.
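As a sketch of how the entity_identifiers format breaks down (plain PHP, not part of the Maestro API; the helper function and sample values here are hypothetical):

```php
<?php

// Hypothetical helper: split an entity_identifiers value of the form
// "type:unique_identifier:ID,type:unique_identifier:ID,..." into an
// array keyed by the unique identifier.
function parse_entity_identifiers($value) {
  $entities = [];
  foreach (array_filter(explode(',', $value)) as $entry) {
    list($type, $unique_id, $id) = explode(':', $entry);
    $entities[$unique_id] = ['type' => $type, 'id' => (int) $id];
  }
  return $entities;
}

// For a Content Type Task that stored an article node 42 under the
// unique identifier "submission":
$parsed = parse_entity_identifiers('article:submission:42');
// $parsed['submission'] is ['type' => 'article', 'id' => 42].
```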

Start Task: When a template is created, a Start task is automatically generated.  This task is a non-deletable task and always has the machine name of "start".  The workflow engine will always begin execution of a process using the 'start' task (unless you specify via an API spawned process otherwise).

End Task:  Although deletable, the end task is generated automatically when a template is created.  A template can have multiple end tasks, so an end task can be removed and added back in to a template.

Already noted in the Template section above, the initiator and entity_identifiers variables are created by default for each template.  These variables are used primarily by the engine and tasks to store important information about what is going on in the execution of the process.  As a workflow administrator, you can create template variables that can be used by your workflow application to assign tasks to users or roles or to make logical branching determinations based on values.

You can create any number of template variables and assign them a default value.  It is advisable to set default values to avoid logic issues in your workflow when testing for specific values.  Each time your template is put into production, the variables you've created on the template are created in the process.  Process variables and their values are used by the workflow engine for assignment or logic branching.  It is up to you to determine how best to use the variables.

Tasks are used on Templates and are either assigned to actors in the workflow (called Interactive Tasks) or are executed by the Maestro engine (called Engine Tasks).  The following list of tasks are shipped with Maestro D8:

Start Task: Automatically created by the engine for each template and is non-deletable.  This task must be present for a workflow to function.

End Task: You can have any number of End tasks on your template, however, you must have at least one end task in order for your template to be validated for production usage.  The end task is responsible for ending a workflow and properly signalling the engine to close off the process and set the appropriate flags.  If you terminate a workflow branch without an end task, by simply having no other tasks after it, the process will never be flagged as complete and archivable.  In such a scenario, the process will appear to never complete.

And Task:  Logical AND.  This task takes multiple branches of a workflow and ANDs them together, meaning that the flow of the process will HOLD at the AND task until all tasks that point to the AND are complete before continuing execution. 

Or Task:  Logical OR.  This task takes multiple branches of a  workflow and ORs them together, meaning that the flow of the process will NOT hold at the OR task.  The OR is used to combine multiple branches of a workflow together into a single serial point of execution. 

Batch Function: The Batch Function task allows you to create a function that will be executed by the engine.  This is a non-interactive task and requires that the batch function return a TRUE in order to signal the engine that the task has completed.

Content Type Task:  The Content Type task provides an interactive task to the user to fill in a content type and have the content attach itself to the workflow via the "entity_identifiers" variable.  The Content Type task requires the administrator to attach a unique identifier to the content so that the content can be referenced in the workflow across multiple content type tasks.

If Task:  The If task provides logical branching based on the status of the last executed task preceding the IF, or based on the value of a variable.  The IF task provides a TRUE and a FALSE branch and is the mechanism used to generally cause a logical loop-back condition.

Interactive Task:  The Interactive task is a user-executed task that is generally run as a modal dialog in the Maestro Task Console.  Interactive tasks are completely customizable by the workflow developers to present whatever type of information is required to the end user.  Interactive tasks will only complete when an end user assigned to the task completes it.  The workflow will not continue until such time.

Manual Web Task:  The Manual Web task is used to redirect the user to a different Drupal page, or even an external page from the workflow application.  The redirection to a page is done in the Maestro Task Console and provides the Maestro Queue ID (queueid=xxxx) as a parameter in the URL when redirecting to the page.  It is 100% up to the manual web task's ultimate endpoint to complete the task via the Maestro API.  The workflow will not continue until the Manual Web Task's endpoint has completed the task.

Set Process Variable Task: The Set Process Variable task (SPV) is used to set the value of a process variable either though a hard-coded value, adding or subtracting a value from the variable, or by specifying a function to fetch data with.

Maestro's API in conjunction with the power of Drupal 8's underlying structure means that if a type of task that you require is missing, one can be written.  Examples of both interactive and non-interactive tasks are shipped with Maestro. 
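For instance, a non-interactive Batch Function task ultimately calls a PHP function like the hypothetical one below. The function name and parameter list are assumptions for illustration (consult the example tasks shipped with Maestro for the exact signature); the one firm rule from the description above is that the function must return TRUE to signal completion:

```php
<?php

// Hypothetical batch function for a Maestro Batch Function task.
// The parameters shown are assumptions; only the return-TRUE
// convention comes from the Maestro documentation above.
function mymodule_notify_batch($processID, $queueID) {
  // ... perform the non-interactive work for this workflow step ...

  // Returning TRUE tells the engine this task has completed;
  // anything else leaves the task open.
  return TRUE;
}
```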

Nextide provides consulting and support services for all of your workflow needs.  Contact us today with your project requirements!

Aug 14 2017
Aug 14

The configuration system (CS) in Drupal 8 is an indispensable tool; however, this article is not meant as an introduction to the CS because that ground has already been covered by others. If you’re new to the CS, this article will get you up to speed on what it is and why you might find it useful:

Alright, if you’re reading past this point, I will assume you have a baseline understanding of what the CS is; now to the fun part.

Common CS Pitfalls

UUID Mismatch

The CS is meant to move configuration between different versions of the same site. The way that Drupal distinguishes one site from another is through a Universally Unique Identifier (UUID). The UUID of the configuration that you import (found in system.site.yml) must match the UUID stored in the system.site row of the config table in your target database. If it doesn’t, your import will fail with the following message: “Site UUID in source storage does not match the target storage.”

There are ways around this (which we’ll cover below), but for most standard use cases this means that you should instantiate a new version of your site (think dev - test - prod) from a database dump of your pre-existing site. As long as you do that, your UUIDs will match and happiness will ensue.
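To verify this before importing, you can compare the two UUIDs from the command line (a sketch using Drush 8 command names; the sync directory path is an assumption to adapt to your setup):

```shell
# UUID in the target database's active configuration:
drush config-get system.site uuid

# UUID in the configuration you are about to import:
grep ^uuid ../config/sync/system.site.yml
```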

Configuration vs Content Grey Areas

There are four broad categories of data in Drupal 8: State, Session, Configuration and Content. As the name suggests, the CS is designed to only manage configuration. This can lead to confusion in cases where it is not immediately clear that what you’re attempting to export is a blend of configuration and content. For instance, this can happen when you attempt to export a custom block. The problem here is that what you’re trying to export is part configuration and part content. Block placement (where it goes within a theme) is configuration, however the content of the block itself is content, which can’t be managed via the CS. If you’re not aware of this, you will get the broken block handler (“This block is broken or missing. You may be missing content or you might need to enable the original module.”) after deploying the block placement from the source environment to the target environment without the corresponding block content existing on the target environment beforehand.

Tips & Tricks

Change location of config_sync directory 

When you install Drupal, it will create a config sync directory for you at a path like this:


The idea here is that your config sync directory could contain sensitive information, so putting it in a directory that is specific to each site and hard to guess makes sense from a security perspective. However, you can do one better by placing your config sync directory somewhere outside of your webroot, preventing it from ever being served by your webserver. You can do that in settings.php like so:

$config_directories = array(CONFIG_SYNC_DIRECTORY => '../config/sync');

Non destructive configuration imports

By default, configuration synchronization is an all or nothing process. You take the data in your sync directory and place it in the config table of your database, overwriting anything that was there previously (and vice versa). If you want to avoid being quite so absolute, you can use the --partial flag when importing configuration via drush (drush config-import --partial). This will allow you to import new configuration items and update existing ones, without deleting configuration items in the database that are not yet stored in the sync directory.

Environment Specific Configuration

If you’re interested in storing different configuration items for different environments, the configuration split module has you covered. It allows you to set up split directories where you can store environment specific configuration items. For instance, you might want different performance settings locally than in production. Of course you could override these settings in a settings.local.php file, but configuration split can do the same thing in a more graceful manner.

There are already some great articles on how to set up configuration split, so I won’t duplicate those efforts here. Check out these articles for more detail:

Before we move on I do want to clarify some of the terminology used by the module as it can be confusing. When setting up a configuration split, you have the option to add configuration items to the blacklist, or to the greylist. The difference between these two lists is subtle but important.

  • Blacklist: Configuration items that are explicitly managed by the split in question. These configuration items will be removed from the default directory. You can now store that configuration item in one or more configuration split, but not in the default directory.
  • Greylist: Configuration items on the grey list are not automatically removed from the default configuration directory. If a configuration item exists in the currently active split, it takes precedence over the copy stored in the default directory. If a copy of the greylisted configuration item does not exist in the currently active split, the configuration item from the default directory is used.

For example, if you want to override site performance settings in your local split in order to disable css / js aggregation, the greylist would be a good option. This would allow you to store your performance settings for dev / test / prod in the default directory, while also storing an altered version for local development in your local split. You can also achieve this using the blacklist, but you would need to store a copy of the performance settings in each split. If performance settings are the same for dev / test / prod and only vary for the local split, this is duplicative.
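As a concrete illustration, the Configuration Split documentation describes activating a split per environment from settings.php; assuming a split with the machine name 'local', the local override file might contain:

```php
// In settings.local.php: mark the 'local' split as active so its
// blacklisted/greylisted items override the default sync directory.
$config['config_split.config_split.local']['status'] = TRUE;
```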

Importing Configuration From Another Site

As I mentioned above, mismatched UUIDs will prevent you from importing configuration from a different site. However, there are (at least) three ways around this:

Other useful configuration modules

Aug 14 2017
Aug 14
The mere mention of website templates makes some clients bristle. Nobody likes being told they have to conform to a set of rules they feel weren’t written with them in mind. They also believe that their site will look like everyone else’s and not meet their unique needs.

Let’s start by dispelling that myth: that using templates means your site will look like everyone else’s.
Aug 14 2017
Aug 14

With pride and pleasure, the Devel maintainers have released 1.0 for Drupal 8. Even at the ripe age of 14, Devel is an active, popular, and nimble project. Devel has been downloaded 3.5M times, and 200,000 sites currently report Devel as an enabled project. Devel’s whole codebase was deeply improved in this release. A few highlights are below, with annotated screenshots and gifs under each section. Please upgrade and report your success or failure using the new features!


Webprofiler

  • A new submodule in Devel that reports all sorts of useful details about a Drupal response.
  • Dive into Cache-hit ratio, DB queries, Events, Forms, Session, Assets, and much more.
  • Add your own data collectors.
  • Learn more at our demo video.

Screenshots: Webprofiler toolbar (collapsed and expanded), database panel, and widgets list.

Devel Module

  • Integrated into the new toolbar as a top level menu (configurable).
  • New pages and Drush commands list available Services, Routes, Events, etc.
  • New State and Configuration editors
  • 3 Twig extensions for debugging
  • A new plugin system for showing dumps like dpm(). Enable and configure the new Kint submodule for pretty dumps.
  • Supports Drush 8 and Drush 9.

Screenshots: Devel configuration, Drush services and UUID commands, dumpers, state editor, toolbar, and toolbar items list.

Devel Generate

  • We moved the generation of text/int/bool/etc. for Fields into Drupal 8 core. Now all field types, even custom and obscure ones, are supported.
  • Supports Drush 8 and Drush 9.

Moshe is available for hire as a solo consultant or as a duo with the inimitable Campbell Vertesi. Moshe has worked on Drupal core since 2001, and loves mentoring Drupal teams. Moshe loves to set up CI and developer workflows for PHP/JS teams.

Aug 12 2017
Aug 12

Today we announce updates to our Drupal 7 Glazed Theme and Builder, an alpha2 release for Drupal 8 (testing only), and the new Unsplash Media module.

Media Unsplash: Free Photos Without Leaving Your Website

This week we're adding a new module to our Glazed CMS Distribution that will be especially exciting to all designers, publishers, marketers, and lovers of photography. Rather than searching Google for free photos, you can browse the Unsplash collection of over 200,000 free photos right in your Drupal 7 Media popup. To the uninitiated: Unsplash is a carefully curated collection of professional photography that has been shaking up the stock photography market in the past few years. All photos are provided totally free with a do-whatever-you-want license. This module provides a simple search interface right in your Media popup; see how it works in the video:

This module was built by Vallic based on our initiative/idea, collaboration and co-sponsoring. It was an especially smooth collaboration because Vallic had previously ported the media_pixabay module, and although the Unsplash API was slightly more complicated, the scaffolding of that module could be re-used.

Integration with the Drupal 7 Media module means that the Unsplash library is also available when configuring theme settings, building pages with our Drupal Drag and Drop Builder, and in any other form that leverages the Media browser.

Drupal 8 Alpha2 Test Release

Just 10 days after our first test release for Drupal 8 we are bringing you alpha2! If you are a customer and curious (or excited) about Drupal 8 please join us in the alpha2 testing and feedback thread! The more testing we can do now, the faster we can release the beta and stable version when Drupal 8.4 comes out. Join in and test our latest and greatest today!

Changes in alpha2:

  • Demo Content Import now works if you select and import any language in the installer! The content will be naturalized to match your selected language. This means you can edit demo content and translate it as if it was initially created in your website's primary language.
  • Fixed missing titles on views and blocks in Glazed Builder
  • Updated branding & Design
  • Minor performance improvements

Glazed Theme 7.x-2.6.5 Release

Today's updates for Glazed theme include a number of minor bug fixes as well as improvements to the admin interface for themers. Several performance improvements were also realized. We added minification to more of the custom Glazed javascript files, we updated some 3rd party libraries (animate.css), and dropped the modernizr dependency for our animated mobile menu. See Changelog for more details.

The branding and design of some elements are slightly changed to reflect the changes we are making for the Drupal 8 release. There's even a single SVG icon that made it into this release; it's from Font Awesome 5 (beta). Expect more where that came from!

Glazed Builder 7.x-1.1.4 Release

This minor release for Glazed Builder includes small bug fixes, some editor-experience enhancements. Modernizations in code and design that reflect the work we are doing for the Drupal 8 release are also included. See Changelog for details.

Important Notes If You're Updating A Complete Installation Profile

We open sourced our GridStack Drupal module! Now our profiles include sooperthemes_gridstack. This is a rebrand of the glazed_gridstack module that you might have in your website. If you're updating your installation profile you should check if you use any GridStack view on your site. If you do, you can choose to either disable the sooperthemes_gridstack module, or rebuild (or export/import) the views using the SooperThemes GridStack module and then disable and uninstall the glazed_gridstack module. This goes both for customers using our premium theme and for people on the free Glazed CMS Distribution.

Join SooperThemes Or Upgrade To Unlimited today!

We're super excited about our Drupal products and we think you will be too! If you're on the fence about getting a subscription, or waiting for the Drupal 8 products, wait no longer! Join now and get a 10% discount if you join our newsletter. Once you've tried Glazed Theme and our Glazed Drag And Drop Builder you'll never build Drupal sites the old way again. Joining is risk free, with a no-questions-asked refund policy if you change your mind within 20 days after purchase.

Aug 12 2017
Aug 12


2017-08-15 12:00 - 2017-08-17 12:00 UTC

Event type: 

Online meeting (eg. IRC meeting)

The monthly security release window for Drupal 8 and 7 core will take place on Wednesday, August 16.

This does not mean that a Drupal core security release will necessarily take place on that date for any of the Drupal 8 or 7 branches, only that you should watch for one (and be ready to update your Drupal sites in the event that the Drupal security team decides to make a release).

Drupal 8.4.0-beta1 will also be released sometime during the week to continue preparation for the upcoming minor release in October.

There will be no bug fix or stable feature release on this date. The next window for a Drupal core patch (bug fix) release for all branches is Wednesday, September 06. The next scheduled minor (feature) release for Drupal 8 will be on Wednesday, October 5.

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Aug 11 2017
Aug 11

There are many different facets of “the Palantir way,” but one principle that sticks out the most is the encouragement to be continuously learning. As a company, we are strong advocates for the concept of “learning by doing,” which is why we’ve had a summer internship program going for years. We believe paid internship opportunities are essential to figuring out what career path is best for you, and they can be beneficial for both the company and the intern.

Our interns are provided the opportunity to see what it’s like to work on real projects with a development team while getting exposure to working through a process with clients. They gain experience using tools like Github and JIRA, and a deeper understanding of responsive design, open source software, and Agile development.

We’ve found that our interns bring huge value with new perspective to our team. They give other Palantiri an opportunity to work on mentorship, and our buddy system means we gain a quick understanding of our interns’ existing skills, so we can help them grow that skillset more effectively.

The added bonus of our internship program is that both sides get to leave with an understanding of whether or not it’s a good fit. Being a remote-first company, it’s nice for our interns to be able to test drive remote work and see if it works for them.

We’ve had such tremendous success in hiring our interns as full-time employees (you might be familiar with Ashley, Kelsey, Patrick, and Matt), that we’ve recently decided to expand our program beyond summer to accommodate the awesome candidates that have extended availability.

Meet Our 2017 Summer Interns!

Lily Fisher

Q: Why were you excited to come work at Palantir?
A: While poking around the website and blog, I saw the previous clients Palantir worked with. I wanted my first job to be fulfilling and a learning experience that allowed me to grow during my pursuit of a Computer Science career. Based on the eloquent, effective, and personal approach this company takes when serving their clients, I felt like working with Palantir would allow me to grow while working on real projects in a wholly understanding professional environment.

Q: Who is the most famous person you’ve ever met?
A: I had a conversation with Alan Parson about my involvement in music.

Q: What do you most like to do to unwind?
A: Skateboard.

Q: What is the first thing you do when you wake up/start your day?
A: Cuddle with my hamster.

Jose Arreluce

Q: Why were you excited to come work at Palantir?
A: I was excited to come work at Palantir as I believed that Palantir’s internship perfectly fit what I was looking for. It presented the opportunity to work on real projects that would have an impact on real people, while also allowing me to learn extensively about how websites are developed in a professional environment. I was also excited by what I saw on Palantir’s website regarding its previous projects and the company culture, as well as by the emphasis on learning.

Q: What excites you about the web?
A: The vast amount of knowledge and opportunity for learning it provides. The memes are nice too.

Q: In 5 years time you hope to be. . .
A: In five years I hope to be working as a software engineer, pursuing an advanced degree, and to have run at least a half marathon.

Q: What do you most like to do to unwind?
A: Running, especially on the Chicago lakefront on a nice day.

Want to know more about Palantir? Check out our culture page or read through our bios. Think it sounds like a good fit? Send us your resume.

Want to work at Palantir?

Send us your resume!
Aug 11 2017
Aug 11

What is CSP?

Content Security Policy – or CSP – is a security feature of modern browsers. Browsers will ignore data from domains that are not cleared in the CSP http-response header. For instance, if you embed a YouTube movie on a webpage and the domain is not whitelisted in the CSP header, then the movie will not be loaded: all traffic from that domain will be blocked and the movie cannot be displayed.

This is a safety feature: even if someone manages to hack the connection or the page and inject a malicious script from an unknown domain, that domain is not cleared, so all traffic from it will be blocked by the browser and no harm can be done.

You can set the CSP header in your webserver configuration. This can easily become complicated. The configuration itself is simple, but setting up and maintaining a list of domains, grouped by type of request, maybe even specified per subdomain, protocol or port, can be a daunting task.
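For instance, in Apache (with mod_headers enabled) a basic policy could look like the sketch below; the CDN domain is a placeholder, not a recommendation:

```apache
# Minimal example policy: allow assets from the site itself,
# base64-encoded images (data:), and scripts from one whitelisted
# CDN (cdn.example.com is a placeholder; adapt to your site).
Header set Content-Security-Policy "default-src 'self'; img-src 'self' data:; script-src 'self' https://cdn.example.com"
```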

Find violations with browser console

A helpful feature of browsers is that they show reports of CSP violations. Go to your Inspector and check Console messages for Security notifications. These messages tell you the exact directive that is violated. The Drupal watchdog will also log violations, more on that later.

How to use CSP in Drupal

Drupal wouldn't be Drupal if there wasn't a module for this. It helps you set up the CSP response headers and gives you separate fields per type of request. Maintaining these can still be overwhelming, though. There are a few tricks to lower the burden, but both the developer and the content editor need to be aware of the complications.


The module is called Security Kit (seckit). As the name implies, it does more than setting CSP headers, but that's the task we're concentrating on in this post. There is a stable version for Drupal 7 available, and development for Drupal 8 is underway (alpha2 at the time of writing).

When installed, a configuration page is available under Configuration > System > Security Kit (/admin/config/system/seckit). There is quite a lot of explanation on the page available. What we’re looking for is the first section under “Cross Site Scripting.” There are two options to Enable CSP or use Report only. More on this later on.


Let's first look at the directives. A directive covers a specific kind of asset that is loaded; the following are available:

  • default
  • script
  • object
  • style
  • img
  • media
  • frame (deprecated)
  • child
  • font
  • connect

The Security Kit configuration page gives good examples of which HTML elements trigger which directive. Some are self-explanatory, others a little less obvious.


Finding which directives are violated is rather easy: use the browser console or see the Drupal watchdog. Think carefully about the domains you whitelist. You can use wildcards, but you might want to list only very specific domains; it is up to you to choose between a fully specified domain or a wildcard like google.*. You can also specify sources by scheme (data:, https:), where data: can be used for loading base64-encoded images.

Apart from using domains to whitelist sources, there are also a few reserved keywords. The most relevant are:

  • 'self'
  • 'unsafe-inline'
  • 'unsafe-eval'


'self'

This is the current domain. It is convenient, as you don't have to add all local, development, test, acceptance and preproduction domain names. This also comes in handy when using the domain module.


'unsafe-inline'

Mainly relevant for the script and style directives. It means you can use scripts and styles inline, both in the head and as an attribute of a tag. It is recommended not to use this, as the only way to block inline scripts is to block them all: there is no mechanism to distinguish good from bad inline scripts. For Drupal, this can have serious consequences. More on that later.


'unsafe-eval'

Without this keyword, JavaScript cannot turn (harmless) strings in your code into executable code. A few functions are prohibited without 'unsafe-eval': eval(), new Function(), setTimeout([string], …), and setInterval([string], …). This also has serious consequences for Drupal. Note that it's not the use of the timing functions that is blocked; it is the fact that a string is passed as the argument.
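As an illustration, the string-based timer call below is the kind of pattern that breaks without 'unsafe-eval', while the function-reference variant keeps working (the function name is made up for this example):

```javascript
// Hypothetical function name, for illustration only.
function refreshWidget() {
  return "refreshed";
}

// Blocked without 'unsafe-eval': the string argument must be compiled
// into code at runtime, which CSP treats like eval().
// setTimeout("refreshWidget()", 1000);

// Allowed: a function reference is passed, no string is evaluated.
setTimeout(refreshWidget, 0);
```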

'Unsafe-inline', 'unsafe-eval' and Drupal

Drupal 7

Without 'unsafe-inline' in the script directive, and without any further precautions, no JavaScript will function properly anymore. Drupal 7 stores all JavaScript settings as an inline (and thus executable) script itself, so JS behaviour will be unpredictable. There is an issue for Drupal 7 to turn these settings into an inline JSON object, just as is done in Drupal 8.

Apart from core, be sure that all JS is loaded as a file and not inline. Also make sure you don't use JavaScript in a href attribute (e.g. href="…") or in an event handler attribute (e.g. onclick="…"). Both practices are highly discouraged anyway, but won't work at all unless 'unsafe-inline' is added to the script directive.

It can be hard to get your site running properly without using 'unsafe-inline', but it can be done. Use the patch from the aforementioned issue, make sure all settings are loaded into and from the main Drupal JS settings, and place all scripts in files instead of inline. When using drupal_add_js, make sure to use the "file" parameter instead of "inline", and place your JavaScript in a separate file in your module. Or use hook_js_alter to change this for JavaScript that is added by a contrib module.
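A minimal Drupal 7 sketch of the file-based approach (drupal_add_js and hook_js_alter are the real APIs; the module name, paths and settings are made up for illustration):

```php
<?php
/**
 * Implements hook_init() for a hypothetical "mymodule".
 */
function mymodule_init() {
  // 'setting' entries end up in the aggregated Drupal.settings object
  // instead of a separate executable inline script.
  drupal_add_js(array('mymodule' => array('apiPath' => '/api')), 'setting');

  // 'file' (rather than 'inline') keeps the script CSP-friendly.
  drupal_add_js(drupal_get_path('module', 'mymodule') . '/js/mymodule.js', 'file');
}

/**
 * Implements hook_js_alter(): inspect JS added by contrib modules
 * and replace 'inline' entries with files where possible.
 */
function mymodule_js_alter(&$javascript) {
  // Walk $javascript here and demote/replace inline scripts.
}
```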

Drupal 8

As mentioned, the JavaScript settings in Drupal 8 are stored as a JSON object. Apart from that, all remarks for Drupal 7 are also relevant for 8. In Drupal 8 you use the asset library system for adding JavaScript (or JS settings, or stylesheets) to your module or theme, and inline JavaScript is highly discouraged anyway. Drupal 8 is already much better adapted to CSP than Drupal 7; not surprising, as CSP was developed more recently than Drupal 7.


Logging violations

As mentioned earlier, the browser is not the only one logging CSP violations; CSP itself reports them too. In Drupal the standard configuration adds those messages to the watchdog queue, but it is possible to write them to a custom file instead; see the seckit module configuration.

When setting up CSP for an existing site, there will probably already be content with external sources added to the site. How do you find all these sources and the directives they affect? Instead of adding sources to the directives, you switch on the "Report-only" mode. Now no enforcing CSP headers will be sent, but all violations will be logged. This is an excellent way to catch the external sources that need to be whitelisted. Once you put CSP in place, any non-whitelisted source will be blocked, and a wrongfully blocked source could damage your site's functioning or reputation.

A useful drush command to show only CSP violations as they come along is:

drush ws --type=seckit --tail --extended

More info

These sources were a welcome help when searching for information:

Aug 11 2017
Aug 11

Combining different tools to produce fantastic reactions is the true “chemistry” of web development. The special trend of the day is using Drupal with JavaScript tools. We’ve had a chance to look at the benefits of using Drupal with Angular and Drupal with Node. Now, it’s time to describe another chemical reaction — between Drupal and ReactJS, which results in the appearance of websites and apps with cosmic speed and interactivity. Indeed, these qualities are in the DNA of the whole JavaScript family. So what makes one of its youngest members, ReactJS, stand out? Let’s take a closer look and find out.

ReactJS and at least some of its benefits

“You can’t scare me — I handle Facebook’s interface”

Such a quote could easily belong to ReactJS. This JavaScript library for creating user interfaces was built by Facebook engineers. At first it was only used internally and then was released as an open-source project. Considering the scope of Facebook, there’s hardly any project ReactJS can’t cope with when it comes to building large-scale dynamic applications with real-time data change. Nor is it used by Facebook alone — check out lots of other projects using React.

Virtual DOM

One of the awesome features of React is its effective approach to DOM updates. When a page is loaded, the browser generates its DOM (Document Object Model). However, this is traditionally a bit of a weak, or, let’s say, slow point of JavaScript. To significantly speed things up, React JS uses a lightweight, virtual DOM. React discovers which virtual DOM objects have changed and updates the necessary parts of the real DOM, not the whole DOM tree. This provides a great performance boost, as well as makes a developer’s life easier.

Component structure

In ReactJS, it is easy to create self-contained, independent components and put them together in large-scale applications. Parameters are passed to each of the components.

One-way data flow

Data flows through your application in a single direction after some change. This makes the data flow more predictable, gives you better control over it, and lets you easily track changes.

Easy to work with

React presents a rather simple programming approach with no complicated concepts. It easily integrates with absolutely any JavaScript library. Born in 2013, React has developed a large ecosystem around itself. Its active community keeps creating new libraries, tutorials, and other helpful stuff.

Drupal and ReactJS

Drupal is great for absolutely any type of website — social, educational, ecommerce, healthcare and so on. It can handle any amount of users and pages, as well as any website scale and complexity. With the special front-end miracles of ReactJS added to this, this can produce absolutely fantastic results.

The combination of Drupal and ReactJS is most beneficial for websites with plenty of dynamic page elements and a giant amount of constantly-changing data that requires smart, real-time updates. Other examples are when you need to provide an automated data exchange or access to data from mobile apps. However any type of website or app will benefit from an ultra-quick and magnetically engaging interface.

The most popular way of combining Drupal and React is using React as a lightweight front-end for so-called decoupled (or headless) Drupal, which acts as a CMF and data source. The one-way data flow of ReactJS helps shape the web page in accordance with the data sent from Drupal's RESTful API.

Drupal 8 has special opportunities for React integration, thanks to its built-in RESTful services, but using JSON API will make your developer’s life even easier. Another hot trend is combining Drupal 8 with React by means of GraphQL. Drupal 7 has to rely on contributed modules to work with React, but the integration is possible.
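As a tiny sketch of that one-way flow, the helper below maps a JSON API-style response from Drupal into the plain objects a React component would render. The function name and the exact response shape are simplified assumptions, not the real endpoint output:

```javascript
// Hypothetical helper: flatten a JSON API-style response from Drupal
// into plain props for a React component.
function articlesFromResponse(response) {
  return response.data.map(function (item) {
    return { id: item.id, title: item.attributes.title };
  });
}

// A stand-in for data that would normally come from Drupal's endpoint.
const sample = {
  data: [
    { type: "node--article", id: "1", attributes: { title: "Hello Drupal" } }
  ]
};

console.log(articlesFromResponse(sample));
// → [ { id: '1', title: 'Hello Drupal' } ]
```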

If you are interested in using Drupal and ReactJS for your project, contact our developers who love modern technologies, especially in their best combinations!

Aug 11 2017
Aug 11

Drupal 8 ships with a great and easy to use 'vertical_tabs' form element, but unfortunately a horizontal counterpart is missing. I needed exactly this today. I'll show you how I finally managed to get this done and how you can accomplish the same much quicker.

In my search for an existing solution (tutorial, module, core patch, etc.), I've only found an issue in the drupal.org issue queue that isn't really frequented very much:

And of course there's the Field Group module, which already has horizontal tabs support. I tried hard to use this in my form but unfortunately failed. Also, a Gist I found didn't work at all for me.

Field Group is great to use as long as you want to use it on entity forms and define and configure the field groups via Field UI. As soon as you want to use e.g. its horizontal tabs in any form other than an entity form, things get complicated (it's maybe even impossible), as it seems to be closely coupled to entity forms (e.g. it relies on having entity type and bundle defined while making theme hook suggestions).

Horizontal Tabs module to the rescue

It gets complicated even if you want to use it programmatically in an entity form; there's a lot more you have to define in your form. When I tried to accomplish this, I found myself digging deeper and deeper into the form processing that happens inside field_group.module, trying to rebuild the necessary parts. When I then saw that the extra stuff I'd added to my form elements was removed in another processing step of Field Group, I decided to instead build a lightweight solution for my problem that orientates on core's vertical tabs. That took about half an hour, after I had spent a couple of hours trying to fix the field_group approach.

Here's the module I've created:

I've already stated in the module's README file why the module is currently only available on GitHub, not on drupal.org. It's important to me to first clarify whether there's a chance that this module's functionality gets merged into either Drupal core (which would be best, imho) or at least into the Field Group module, and whether there's a demand for having this module on drupal.org anyway.

Another reason is that I've decided to prefer a clean and logical form element name over compatibility with field_group: both modules define a form element with the same ID/name at the moment. This should also be part of the discussion. You can read more about that in the module's README (and on the GitHub page) as well.

One of the great things about the Drupal community is its strong focus on collaboration. The positive effect is an ecosystem providing a great number of flexible, high-quality modules without too much redundancy (multiple modules serving the same or similar features), whereas in other open source projects you often find a bunch of modules doing similar things, none of them really satisfying your needs. The ease of creating custom functionality in Drupal 8, combined with the partially low speed at which some modules are refactored and ported to Drupal 8, has already caused some duplication of modules. I don't want to push this further without having thought about alternatives.

That's why I consider GitHub hosting the best solution for now. You can load the module via Composer as agoradesign/horizontal_tabs.
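Since the package lives on GitHub rather than on Packagist or drupal.org, your project's composer.json presumably needs a VCS repository entry first. The repository URL below is an assumption based on the package name:

```json
{
  "repositories": [
    {
      "type": "vcs",
      "url": "https://github.com/agoradesign/horizontal_tabs"
    }
  ]
}
```

After that, a plain `composer require agoradesign/horizontal_tabs` should be able to resolve the package.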

Let's discuss

I've already commented on the Core issue here:

I'm planning to open an issue about the possibilities of merging this into Field Group as well (I'll update this post as soon as I've done that).

Anyway, I'd be happy about your feedback. Doing this in the mentioned issue(s) on drupal.org would make the most sense; opening issues on GitHub is also fine. Of course you can always comment directly here under the post as well, but my response rate there isn't that quick. Informal discussions on Twitter also work, so feel free to contact me there. I've already opened the discussion in this tweet:

Aug 11 2017
Aug 11

Drupal is PHP 7-ready, and sites that run many contrib modules, or particularly memory-intensive ones, will have higher memory requirements. Moving a Drupal website to PHP 7 will boost overall page speed.

PHP 7 was released in December 2015 and offers high performance, huge load capacity and asynchronous programming support  for online applications. According to the latest study, PHP 7 is twice as fast as PHP 5.6 and reduces memory usage.


Why this is the right time to move to PHP 7

Drupal 7 core officially supports PHP 5.2.5 or higher. However, PHP 7 introduces backward-incompatible changes which may need to be addressed in contributed or custom modules and themes. While some Drupal 7 sites may run on the minimum supported PHP version, you can move to Drupal 8 with PHP 7 support to enhance the performance of the site.

Drupal 8 ships with 200+ new features and improved functionality, and upgrading to PHP 7 brings a lot of improvements and delivers a high-performance Drupal site, whether you are a website owner or a Drupal developer.

Drupal core's automated test suite is now fully passing on a variety of environments where there were previously some failures (PHP 5.4, 5.5, 5.6, and 7). Several bugs affecting those versions were fixed as well. These PHP versions are officially supported by Drupal 7 and recommended for use where possible.

Anecdotal evidence from a variety of users suggests that Drupal 7 can be successfully used on PHP 7, both before and after the 7.50 release. 

A slow website load time impacts visitors, the overall user experience, and the bottom line. With the speed enhancements of Drupal 7 and Drupal 8 on PHP 7, user engagement will increase and visitors will be less likely to leave your website.

Aug 10 2017
Aug 10

This week whilst trying to update one of our projects to the latest version of Drupal 8 core, we had some issues.

We use Composer to manage our dependencies, modules etc, and on this particular occasion, things weren't straightforward.

In order to solve it, we had to use some of the lesser-known features of Composer, so we decided to share.

The problem

So updating Drupal core with composer is normally pretty simple. And on this occasion, we had no reason to suspect it would be anything different.

Normally we'd just run

composer update "drupal/core" --with-dependencies

But this time, nothing happened.

So we checked that there was a newer version available

composer show -a "drupal/core"

And sure enough, we can see 8.3.6 in the available versions.

Time to dig deeper.

The why

Luckily, composer will tell you why it won't install something.

composer why-not "drupal/core:8.3.6"

Which yielded

drupal/core  8.3.6  conflicts  drush/drush (<8.1.10)

Aha, so drush is the issue.

So maybe we just update both

composer update "drupal/core" "drush/drush"


Digging deeper

So after trying a few different combinations of version constraints etc, we decided to remove drush, update and then add it back.

composer remove --dev "drush/drush"

Which worked.

composer update "drupal/core" --with-dependencies

Ok, nice, we now have Drupal 8.3.6

composer require --dev "drush/drush"


Your requirements could not be resolved to an installable set of packages.

  Problem 1
    - Installation request for drush/drush 8.1.12 -> satisfiable by drush/drush[8.1.12].
    - Conclusion: remove phpdocumentor/reflection-docblock 3.2.2
    - Conclusion: don't install phpdocumentor/reflection-docblock 3.2.2
    - drush/drush 8.1.12 requires phpdocumentor/reflection-docblock ^2.0 -> satisfiable by phpdocumentor/reflection-docblock[2.0.0, 2.0.0a1, 2.0.0a2, 2.0.0a3, 2.0.1, 2.0.2, 2.0.3, 2.0.4, 2.0.5].
    - Can only install one of: phpdocumentor/reflection-docblock[2.0.0, 3.2.2].
    - Can only install one of: phpdocumentor/reflection-docblock[2.0.0a1, 3.2.2].
    - Can only install one of: phpdocumentor/reflection-docblock[2.0.0a2, 3.2.2].
    - Can only install one of: phpdocumentor/reflection-docblock[2.0.0a3, 3.2.2].
    - Can only install one of: phpdocumentor/reflection-docblock[2.0.1, 3.2.2].
    - Can only install one of: phpdocumentor/reflection-docblock[2.0.2, 3.2.2].
    - Can only install one of: phpdocumentor/reflection-docblock[2.0.3, 3.2.2].
    - Can only install one of: phpdocumentor/reflection-docblock[2.0.4, 3.2.2].
    - Can only install one of: phpdocumentor/reflection-docblock[2.0.5, 3.2.2].
    - Installation request for phpdocumentor/reflection-docblock (locked at 3.2.2) -> satisfiable by phpdocumentor/reflection-docblock[3.2.2].

Installation failed, reverting ./composer.json to its original content.

Hm, so we have a version of phpdocumentor/reflection-docblock in our lock file that is too high for drush.

composer why "phpdocumentor/reflection-docblock"


phpspec/prophecy v1.6.1 requires phpdocumentor/reflection-docblock (^2.0|^3.0.2)

Aha, so prophecy - it allows either version, but our lock file has pinned it to the 3.x branch.

So let's force composer to downgrade that

composer require --dev "phpdocumentor/reflection-docblock:^2.0"

Now let's see if we can add drush back

composer require --dev "drush/drush"


Now all that remains is to clean up, because we don't really want to depend on phpdocumentor/reflection-docblock

composer remove --dev "phpdocumentor/reflection-docblock"

Done - quick - commit that lock file while you're winning!


So while it might be easy to curse Composer for not letting you upgrade, it's actually doing exactly what you told it to do.

Your lock file has a pinned version, and Composer is honoring that.

And in order to resolve it, Composer provides all the tools you need in the form of the why and the why-not commands.

Composer Drupal 8
Posted by lee.rowlands
Senior Drupal Developer

Dated 11 August 2017

Aug 10 2017
Aug 10

Pluralizing and singularizing words got very easy with the inclusion of the Doctrine Inflector class.

use Doctrine\Common\Inflector\Inflector;

// e.g. Inflector::pluralize('category') returns 'categories'.
$pluralized = Inflector::pluralize($bundle_singular);
// e.g. Inflector::singularize('categories') returns 'category'.
$singularized = Inflector::singularize($bundle_singular);


About the Author

Hi. My name is Jeremiah John. I'm a sf/f writer and activist.

I just completed a dystopian science fiction novel. I run a website which I created that connects farms with churches, mosques, and synagogues to buy fresh vegetables directly and distribute them on a sliding scale to those in need.

In 2003, I spent six months in prison for civil disobedience while working to close the School of the Americas, converting to Christianity, as one does, while I was in the clink.

Aug 10 2017
Aug 10

I am working on adding support for The League OAuth and new implementers for Social Auth and Social Post, under the mentorship of Getulio Sánchez "gvso" (Paraguay) and Daniel Harris "dahacouk" (UK).

Last week, I started working on creating the first Social Post implementer using The League library. This week, we finished social_post_facebook [Link to Code] and I have started working on social_post_google.

Here are some of the things that I worked on during the 10th week of GSoC coding period.

User Collection

The functionality of adding a Facebook account was added last week. This week I worked on showing the linked account on the user page, plus an overview of all social post accounts per implementer for the administrator. This was one of the blockers I faced during the week, but thanks to Getulio Sánchez for helping me with this task.

Deleting Functionality

The functionality to delete the linked account was added this week. The SocialPostEntityDeleteForm was defined to implement the account delete functionality.

Change In Entity

Handlers were added to the entity to implement the two tasks above, and the entity keys were changed to id and uuid.


Rules action

To implement auto posting, a Rules action has been defined which provides the post-to-Facebook functionality. The Rules action needs to be configured by the administrator to perform an action on an event.

In our Rules action file, we retrieve the Facebook accounts for the current user and then make the API calls using the access token saved at the time of linking the account. (The token is stored encrypted and is decrypted using the salt defined in settings.php.)

Talk at LNMIIT


This week I was invited to give a talk about my GSoC project and my open source experience at LNMIIT, a university in India that is among the top 10 universities by number of GSoC selections. It was an overwhelming experience to interact with students interested in contributing to open source, and I hope some of the students from the meetup will get involved with Drupal as mentors or GSoC students.

My talk covered my open source experience, information about Drupal's involvement in GSoC and GCI, and, most importantly, details about my GSoC project [discussing The League OAuth/OAuth2 and the Social Initiative project].

It was an amazing experience for me, as I got to interact with other GSoC students and share ideas with them. [Photos of the event]

These were some of the tasks I worked on as part of my project; I was thrilled by the tenth week of the Google Summer of Code coding phase. My goal for next week is to complete the whole project by writing documentation and pushing the implementers for the community to test.

Aug 10 2017
Aug 10
Mike and Matt talk about the intricacies of front-end development with two of Lullabot's front-end developers, Marc Drummond and Wes Ruvalcaba.


About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
