Jan 18 2018

Under certain circumstances, it might be necessary to build a specific version of Lightning with dependencies exactly as they were when it was released. But sometimes building older versions of Lightning can be problematic. For example, maybe a new dependency conflicts with an old one, or a patch no longer applies with an updated dependency.

In that case, you can use the new "Lightning Strict" package to pin all of Lightning's dependencies (and their dependencies recursively) to the specific versions that were included in Lightning's composer.lock file when it was released. (If this sounds familiar, a "Drupal Core Strict" package also exists that does the same thing for core. But note that package is incompatible with Lightning Strict since Lightning uses PHP 7.0 when building its lock file.)
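Conceptually, the strict package is little more than a composer.json whose require section pins every package from Lightning's lock file to an exact version. A hypothetical excerpt (the real package pins hundreds of packages; these entries and version numbers are illustrative only):

{
    "require": {
        "drupal/core": "8.4.3",
        "symfony/console": "v3.2.14",
        "guzzlehttp/guzzle": "6.3.0"
    }
}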

In this example, we want to build Lightning 2.2.4 - which contains the migration to Content Moderation, Workflows, and Lightning Scheduler:

$ composer require acquia/lightning:2.2.4 balsama/lightning_strict:2.2.4 --no-update
$ composer update

Assuming you were updating from Lightning 2.2.3, you could then follow the update instructions for 2.2.4 found in our release notes. In this case, they are:

$ drush updatedb && drush cache-rebuild
$ drupal update:lightning --since=2.2.3

Once you've updated to the most recent version, you can remove the dependency on balsama/lightning_strict.
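With Composer, that removal mirrors the install commands above:

$ composer remove balsama/lightning_strict --no-update
$ composer update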

The package will automatically be updated when new versions of Lightning are released. Hopefully this will solve some of the problems people have experienced when trying to build older versions of Lightning.

Jan 18 2018

What are Spectre and Meltdown?

Have you noticed your servers or desktops running slower than usual? Spectre and Meltdown can affect most of the devices we use daily: cloud servers, desktops, laptops, and mobile devices. For more details, go to: https://meltdownattack.com/

How does this affect performance?

We finally have some answers to how this is going to affect us. After Pantheon patched their servers, they released an article showing the 10-30% negative performance impact servers can expect. For the whole article, visit: https://status.pantheon.io/incidents/x9dmhz368xfz

I can say that I personally have noticed my laptop's CPU running at much higher utilization than before the update for similar tasks.
Security patches are still being released for many operating systems, but traditional desktop OSs appear to be covered now. If you haven't already, make sure your OS is up to date. Don't forget to update the OS on your phone.

Next Steps?

So what can we do in the Drupal world? First, you should follow up with your hosting provider and verify they have patched your servers. Then you need to find ways to counteract the performance loss. If you are interested in performance recommendations, Four Kitchens offers both frontend and backend performance audits.

As a quick win, if you haven't already, upgrade to PHP 7, which should give you a performance boost of around 30-50% on PHP processes. Now that you are more informed about what Spectre and Meltdown are, help with the performance effort by volunteering or sponsoring a developer on January 27-28, 2018 for the Drupal Global Sprint Weekend 2018, specifically on performance-related issues: https://groups.drupal.org/node/517797

Chris Martin

Chris Martin is a support engineer at Four Kitchens. When not maintaining websites he can be found building drones, computers, robots, and occasionally traveling to China.


Jan 18 2018

Encrypt and decrypt text in URLs

In general, when anonymous users are allowed to trigger database operations via the UI, the important thing is to make sure the operation is not easily tampered with.

At a minimum, that means encrypting the piece of text (here, a node id) before handing it to the end user, and decrypting it again when performing the backend operation.

This can be done as shown below.

Rendered URL, where the node id is encrypted (note the urlencode() added here: the base64 output can contain characters such as + and / that are not URL-safe):

$link = '<a href="http://dev-karthikkumardk.pantheonsite.io/abt/cancel/' . urlencode(fmg_encrypt($entity->nid)) . '">here</a>';

Encrypt function

function fmg_encrypt($str) {
  // Generate a random initialization vector of the right length for AES-128-CBC.
  $ivlen = openssl_cipher_iv_length($cipher = "AES-128-CBC");
  $iv = openssl_random_pseudo_bytes($ivlen);
  // Static key shared with fmg_decrypt(). Consider storing it in settings
  // rather than hard-coding it.
  $key = "fmg_sec";
  $ciphertext_raw = openssl_encrypt($str, $cipher, $key, $options = OPENSSL_RAW_DATA, $iv);
  // The HMAC lets the decrypt side detect tampering.
  $hmac = hash_hmac('sha256', $ciphertext_raw, $key, $as_binary = TRUE);
  $ciphertext = base64_encode($iv . $hmac . $ciphertext_raw);
  // The result is base64-encoded a second time; fmg_decrypt() mirrors this.
  return base64_encode($ciphertext);
}

Decrypt function

function fmg_decrypt($str) {
  // Reverse the double base64 encoding applied in fmg_encrypt().
  $tmp = base64_decode($str);
  $c = base64_decode($tmp);
  // Split the payload back into IV, HMAC and raw ciphertext.
  $ivlen = openssl_cipher_iv_length($cipher = "AES-128-CBC");
  $iv = substr($c, 0, $ivlen);
  $hmac = substr($c, $ivlen, $sha2len = 32);
  $ciphertext_raw = substr($c, $ivlen + $sha2len);
  $key = "fmg_sec";
  $original_plaintext = openssl_decrypt($ciphertext_raw, $cipher, $key, $options = OPENSSL_RAW_DATA, $iv);
  $calcmac = hash_hmac('sha256', $ciphertext_raw, $key, $as_binary = TRUE);
  // PHP 5.6+ timing-attack-safe comparison; returns NULL if the HMAC does not match.
  if (hash_equals($hmac, $calcmac)) {
    return $original_plaintext . "\n";
  }
}

Decrypting the node id on the backend (trim() strips the trailing newline added by fmg_decrypt()):

$node = node_load(trim(fmg_decrypt($str)));

Cheers :)

Jan 18 2018

The Bibliography and Citation - Altmetric module is an add-on to the existing Bibliography and Citation module. Bibliography and Citation allows keeping, outputting, exporting and importing bibliographic data. The style library we use comes from the official CSL style repository, with over 8,000 styles available free of charge under a Creative Commons Attribution-ShareAlike (BY-SA) license. Learn more about Bibliography and Citation here.

The Bibliography and Citation - Altmetric module adds Altmetric badges to the reference entities provided by the "Bibliography & Citation - Entity" submodule. Altmetric badges (donuts by default) visualize the influence of your published content and show the number of times your content was mentioned.

How does it work? Each donut gives your website visitors one-click access to the collated record of online attention for each piece of research, where they can browse all of the original mentions and shares associated with it.

Requirements

  • The module Bibliography & Citation

The module is under active development. If you have any questions or need more information, please contact us.

Jan 17 2018

by David Snopek on January 17, 2018 - 10:21am

One of the great things about Drupal, is that it's possible to build a pretty advanced site just by pointing and clicking and configuring things - what we call "site building" in the Drupal universe.

But with all that power, you can also make your Drupal site less secure - and possible to hack! - just by changing configuration settings! We covered other examples of this in a previous article.

Today we're going to talk about one of the most common... and most DANGEROUS: exposing your Drupal private files on the internet.

In this article we're going to discuss how to determine if your private files have been exposed, and also how to fix it.

Read more to find out!

What are Drupal "private files"?

On all Drupal sites, there are at least two different types of files: public and private.

Public files are served directly by the web server, which is nice because it's fast. Private files have to pass through Drupal, which is slower, but allows Drupal to define the rules to access them.

Private files are usually used for either:

  1. User uploaded content you want to control access to (ex. a newsletter that only site members should be able to see), or
  2. Files created by Drupal modules which they intend to keep private, usually for security reasons

You probably know if you have #1, but you might not know if you have #2 if you don't know how all the modules on your site work.

So, even if you're not using private files to power a feature of your site, a module that you use may be using them unbeknownst to you! Because of that, it's always important to make sure your private files aren't being exposed to the internet.

How does Drupal keep private files private?

Public files have to be placed in your public "web root" along with the other files that make up your Drupal site, so that the web server can serve them.

Private files should either:

  1. Be placed outside the web root, where the web server can't get to them, or
  2. If they are in the web root, you need to configure your webserver not to serve them

#1 will always be safe!

It's #2 where things can go wrong. Frequently, private files are placed within the web root (ex. under "sites/default/files/private") but without changing the web server configuration to prevent directly accessing them, and skipping Drupal altogether.

How to find out if they are exposed?

This is pretty easy to check manually, but it does require a little bit of Drupal and technical know-how:

  1. Login to your Drupal site as an admin user
  2. Go to Configuration -> Media -> File system
  3. Check the "Private file system path"
  4. If it might be in the web root, then place a file at the given path (this will probably require using FTP or SSH to transfer a file to the live server) and then try to access it with your web browser. For example, if your "Private file system path" is "sites/default/files/private", then upload a "test.txt" file and try to access it at http://example.com/sites/default/files/private/test.txt
  5. If you can access the file directly - then your private files are exposed to the internet!

Since that involves messing around with FTP or SSH, an easier way may be installing the Security Review module and running its report. It'll find exposed private files as well as a number of other potential security issues with your site!

How to fix it?

The safest thing you can do is to move your private files directory outside the web root. If your site is at the top-level of your domain, an easy option can be putting the private files at a directory above the web root.

For example, if you login to Drupal by going to http://example.com/user/login as opposed to some prefix directory like http://example.com/drupal/user/login, then you know that your site is at the top-level of your domain, so there shouldn't be any way to access files in the directory above it.

Here's the process:

  1. Use FTP or SSH to move the private files directory to a "sibling" directory to your Drupal root, for example, called "private"
  2. Login to your Drupal site as an admin user
  3. Go to Configuration -> Media -> File system
  4. Change the "Private file system path" to a path relative to the Drupal root, starting with "..", for example, "../private"
  5. Click "Save configuration"
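Note: on Drupal 8, the private file system path is set in settings.php rather than through the UI. The equivalent of steps 3-5 is a one-line change (a sketch, using the same "../private" sibling directory):

// In settings.php: store private files one level above the web root.
$settings['file_private_path'] = '../private';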

An alternative to this is changing the configuration of your web server to block access to files in the private files directory, but that can be tricky - it depends on your specific web server (Apache, nginx, IIS) and configuration. So, we're not going to go into that today - that could be the topic for a whole new post. :-)

Conclusion

If you have a Drupal site, you need to make sure that your private files are secure - even if you think you don't have any private data on your site, you could be surprised!

Hopefully, the steps above will be helpful, even for users who aren't very technical. Unfortunately, though, maintaining a website will always be at least a little technical, so you may need to call on an expert!

If you have any questions or feedback or tips, please leave them in the comments below!

Or, if you're interested in paid support: Our whole business is support and maintenance for Drupal sites, so if you need help, please feel free to contact us. Good luck!

Jan 17 2018

by Elliot Christenson on January 17, 2018 - 10:13am

We're Drupalers who only recently started digging deep into CiviCRM and we're finding some really cool things! This series of videos is meant to share those secrets with other Drupalers, in case they come across a project that could use them. :-)

You may recall the blog post that David put out way back in August 2017. He gave some very detailed instructions on how you can install CiviCRM on Drupal 8!

Some new Drupal versions have been released since August, and we've had requests to demonstrate how to go through some of the steps. So, I'm going to do just that!

Every step will be followed quite literally. Note that David assumed this was being installed on a development system running Linux. Since I'm running a Mac, this should be a great cross-platform test.

Watch the screencast to see if I run into any issues with the instructions:

Video of CiviCRM secrets for Drupalers: Screencast of Drupal 8 + CiviCRM Installation

Some highlights from the video:

  • Very quick install of Drupal 8 on a Mac running MAMP
  • Download and installation of CiviCRM
  • Brief comments along the way as I follow the steps
  • Finish with a working Drupal 8 + CiviCRM site!

Please leave a comment below!

Jan 17 2018

It seems like RSS is not quite the buzz it once was, years ago. There are reasons for that, but I partly believe it is because more services mask direct RSS feed subscriptions behind larger aggregate tools. This change also makes it more interesting to get analytics about where that traffic is coming from, and from which feed. When I migrated my site to Drupal 8, I decided to take an adventure in adding UTM parameters to my RSS feeds.

This was not nearly as easy as I had thought it would be. My RSS feeds are generated using Views. My Views configuration is straightforward, pretty much the out-of-the-box setup. It uses the Feed display plugin and the Content row display. The setup I assume just about everyone has.

Blog RSS feed configuration


In order to start attributing my RSS links and understanding my referral traffic, I needed to adjust the links in my RSS feed for the content. So my first step was to review the node_rss plugin. The link is generated from the entity for its canonical version and with an absolute path, which is to be expected, but there is no way to alter this output (unless you want to alter every URL as it is generated.)

// \Drupal\node\Plugin\views\row\Rss::render()

    $node->link = $node->url('canonical', ['absolute' => TRUE]);

// ...

    $item->title = $node->label();
    $item->link = $node->link;
    // Provide a reference so that the render call in
    // template_preprocess_views_view_row_rss() can still access it.
    $item->elements = &$node->rss_elements;
    $item->nid = $node->id();

// If only I could alter $item!

    $build = [
      '#theme' => $this->themeFunctions(),
      '#view' => $this->view,
      '#options' => $this->options,
      '#row' => $item,
    ];

    return $build;


My next thought was to just extend the node_rss plugin and add my link generation logic. But that's messy, too. I would have to override the entire method, not some "get the link" helper, meaning I'd have to maintain more code on my personal site.

So I kept digging. My next step was to visit template_preprocess_views_view_row_rss and see if I can do a good ole preprocess, hack it in, and call it a dirty day. And, guess what? I could. I did. And I feel a little dirty about it, but it gets the job done.

function bootstrap_glamanate_preprocess_views_view_row_rss(&$variables) {
  /** @var \Drupal\views\ViewExecutable $view */
  $view = $variables['view'];
  if ($view->id() == 'taxonomy_term') {
    $term = \Drupal\taxonomy\Entity\Term::load($view->args[0]);
    $label = $term->label();
    if ($label == 'drupal') {
      $source = 'Drupal Planet';
    }
    else {
      $source = 'Term Feed';
    }
    // Build the query string with http_build_query() so values like
    // "Drupal Planet" are properly URL-encoded.
    $variables['link'] .= '?' . http_build_query([
      'utm_source' => $source,
      'utm_medium' => 'feed',
      'utm_campaign' => $term->label(),
    ]);
  }
}

My taxonomy term feed is what funnels into Drupal Planet, so only posts tagged "Drupal" show up. So in my preprocess I check the view and decide whether to run or not. I then attribute the source, medium, and campaign.

Ideally, I should make a UTM-friendly row display which extends node_rss and allows Views substitutions to populate the UTM parameters, if available, when generating the URL object. I might do that later. But hopefully, this helps anyone looking to do something similar.

Jan 17 2018

This blog is a follow-up to our previous post "My First Impression of Learning AngularJS", where I shared my experience of working with AngularJS. This post is intended to take you one step further and give you a better understanding of the basic operation workflow. In Angular, we have the concept of 'data binding', which means the synchronization of data between the view and the model (in either direction).
 

Data Binding Flow

From the business perspective: a change in the logic (backend) impacts the front-end (view) and vice versa.

Scenario: Suppose you have a variable, called BookPrice, stored in a database or in local storage, and you are trying to render that value on your page. A change to BookPrice in the DB will change the front-end value, and performing an operation in the front-end and submitting the data back will update the DB. Data binding keeps the two in sync.

Angular supports three types of data binding:

  1.  One-way Data-binding
  2.  Two-way Data-binding
  3.  One-Time Data-binding

One-way data binding: In this approach, the value of the variable is always taken from the model and displayed using the view. This is a unidirectional approach: any update from the view won't update the model. A few points to note here:

  • Data always flows from model to view.
  • In case of any modification in the model, the same will sync up in view.
  • Data won’t flow from view to model.
  • To render data on the page, use data-ng-bind (aka ng-bind) or an evaluation expression {{ }} (curly braces).
One-Way Binding

In the codebase below, we include the AngularJS minified version in a script tag and use ng-app & ng-init.

ng-app or data-ng-app: Before Angular starts compiling, it looks for ng-app to initialize and load the document, and treats that element as the root element of the application. Generally it is attached to the <html> or <body> tag.
You can only use one ng-app per HTML page, and Angular does not execute outside of its scope (only code inside ng-app is compiled).

ng-init: 'ng-init' stands for initialization. Its purpose is to evaluate an expression in the current scope; note that the official Angular documentation warns it can be abused to add unnecessary amounts of logic to your templates.
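The original post shows this code as a screenshot; here is a minimal reconstruction of the same idea (the name value matches the controller example later in this post):

<!DOCTYPE html>
<html>
<head>
  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.9/angular.min.js"></script>
</head>
<body>
  <!-- ng-app marks the root element; ng-init seeds the scope. -->
  <div ng-app="" ng-init="name = 'Jaywant Topno'">
    <!-- One-way rendering with an expression and with ng-bind. -->
    <p>{{ name }}</p>
    <p ng-bind="name"></p>
  </div>
  <!-- Outside the ng-app scope, this expression is not compiled: {{ name }} -->
</body>
</html>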

Model, View

In the source code above, an expression placed outside the ng-app <div> tag is not compiled; Angular prints the data as-is. There are various ways you can incorporate ng-app into the HTML. Here we are running the code inline; later, we will take a look at different ways to incorporate ng-app.

One way data-binding

Two-way data binding: This is a bi-directional approach, as data travels in both directions: from model to view and from view to model. Any update or change in the (model) endpoint response (JSON/XML) will be reflected in the page (view), and vice versa.

Few points to note here:

  1. Data always flows from model to view and from view to model
  2. If the model is modified, the change is synced to the view
  3. If the view is modified, the change is synced to the model
  4. The ng-model directive is used to bind an input box/textarea/select list to a variable created with AngularJS
     
Two-Way Data binding

Code simplification: In the following source code, we create a new application in a script tag, added inline. Before going any further, let me pull out the portion of code that you will find in every codebase.

<script>
  var app = angular.module('newApp', []);
  app.controller('newCtrl', function($scope) {
    $scope.name = "Jaywant Topno";
  });
</script>

The best way to keep HTML and JS code separate is to create a separate file and paste the same code into that JS file.

var app = angular.module('newApp', []);

Note that the code above creates an object for the app named 'newApp'; angular.module is the syntax for creating a new app (module). You will also notice a blank array: it is there to inject services or any other dependencies into your app.

Sample codebase:

<script>
  var app = angular.module('myApp', ['ngRoute']);
</script>

The controller holds the functions and properties that provide data to the module, and the module provides data to the application.

Sample codebase:

app.controller('newController', function($scope) {
  // Properties are defined here. Variables defined inside the controller
  // are always attached to $scope.
  $scope.properties = '100';
});
Two-way data binding method
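To see two-way binding in action, the newCtrl controller above can be paired with markup like this (a minimal sketch; typing in the input updates the greeting immediately):

<div ng-app="newApp" ng-controller="newCtrl">
  <!-- ng-model keeps the input and $scope.name in sync, in both directions. -->
  <input type="text" ng-model="name">
  <p>Hello, {{ name }}!</p>
</div>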

One-time data binding: In AngularJS, the data is transferred from model to view only once; after the initial value is set, changes in the model won't affect the view again. As the official Angular documentation puts it, one-time expressions are for values that only need to be shown in the view and are never going to update from the view or controller.

  • Data flows only from model to view (scope to view)
  • Data is populated only once
  • Data doesn't travel from view to model
  • Rendering one-time data in the view is accomplished with the :: (double colon) prefix
  • Use one-time binding when you need a static value; it helps reduce unwanted watchers

The source code below assigns the name variable only once; it won't change later. One-time binding follows the model-to-view data flow pattern, but only once. As you can see on the result screen, the name value is taken from the model once and doesn't change afterwards.
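Again reconstructing the screenshot, a minimal sketch of one-time binding with the same controller:

<div ng-app="newApp" ng-controller="newCtrl">
  <!-- The :: prefix renders the value once and then stops watching it. -->
  <p>{{ ::name }}</p>
</div>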

Result:

One-time data-binding

The source code shared above illustrates data binding in AngularJS with examples.

That's it! Data binding is a technique that keeps front-end data in sync with the model, and vice versa. We have gone through the one-way, two-way and one-time data binding models; pick the one that fits your requirements, keeping data flow, performance and complexity in mind.
 

Jan 17 2018

This article is the first in a series about different Continuous Integration implementations for Drupal 8 projects. Each installment will pick a CI technology and go over its pros and cons for implementing the following set of jobs when someone creates a pull request:

  • Run unit and kernel tests.
  • Generate a PHPUnit coverage report.
  • Check Drupal's coding standards.
  • Update the database and run Behat tests.

In this article, we will start with CircleCI, a SaaS platform. Thanks to work initiated by Andrew Berry at drupal_tests—if you maintain a Drupal 8 module, check it out—I can present you a single-command installer to get your Drupal 8 project started with Continuous Integration using CircleCI.

There is a repository that contains the installer script where we are working on the different CI implementations, plus a demo Drupal project to see them in action.

Setup

Here is a clip where I take a vanilla Drupal 8 project created with composer-project, I run the installer and commit the files, and when I allow CircleCI to watch the repository I see the jobs running:


For details on how to run the installation script and connect your repository with CircleCI, have a look at the repository's README.

Killer features

Once you have the setup in place, your project will benefit right away from the following features:

Less infrastructure to maintain

When code is pushed to a GitHub repository, CircleCI takes care of spinning up and tearing down containers for each of the jobs that you have defined in the CircleCI configuration file. CircleCI provides a set of pre-built images for you to use on your project but you can use a custom Docker image if you need it. For example, here is the Dockerfile that the installer script uses:

# This is the parent image, located at https://hub.docker.com/_/drupal
FROM drupal:8.4-apache

# Install libraries and extensions.
RUN apt-get update && apt-get install -y \
  imagemagick \
  libmagickwand-dev \
  mariadb-client \
  sudo \
  vim \
  wget \
  && docker-php-ext-install mysqli \
  && docker-php-ext-install pdo \
  && docker-php-ext-install pdo_mysql

# Remove the vanilla Drupal project that comes with the parent image.
RUN rm -rf /var/www/html/*

# Change docroot since we use Composer's drupal-project.
RUN sed -ri -e 's!/var/www/html!/var/www/html/web!g' /etc/apache2/sites-available/*.conf
RUN sed -ri -e 's!/var/www!/var/www/html/web!g' /etc/apache2/apache2.conf /etc/apache2/conf-available/*.conf

# Install composer.
RUN wget https://raw.githubusercontent.com/composer/getcomposer.org/f3333f3bc20ab8334f7f3dada808b8dfbfc46088/web/installer -O - -q | php -- --quiet
RUN mv composer.phar /usr/local/bin/composer

# Put a turbo on composer.
RUN composer global require hirak/prestissimo

# Install XDebug.
RUN pecl install xdebug-2.5.5 \
    && docker-php-ext-enable xdebug

# Install Robo CI.
# @TODO replace the following URL by http://robo.li/robo.phar when the Robo team fixes it.
RUN wget https://github.com/consolidation/Robo/releases/download/1.1.5/robo.phar
RUN chmod +x robo.phar && mv robo.phar /usr/local/bin/robo

# Install Dockerize.
ENV DOCKERIZE_VERSION v0.6.0
RUN wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
    && tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
    && rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz

# Install ImageMagick to take screenshots.
RUN pecl install imagick \
    && docker-php-ext-enable imagick

Status badges

By adding a CircleCI status badge to your project’s README file, you can check whether the main branch is stable or not:

Status badge

This is useful when creating a new release. If the badge is red, then you need to investigate what’s going on. Beware, there is an open bug in CircleCI that may display the green PASSED badge even when one of the jobs in a workflow has failed. Until this gets fixed, click on the badge to double check that everything passes.

Version control

CircleCI’s jobs live within the project repository under the .circleci directory, which makes it easy to track changes in the jobs and make them evolve along with the project.
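As an illustration, a trimmed-down job definition in .circleci/config.yml could look like this (a sketch: the image names and the Robo command are placeholders, not the installer's exact configuration):

version: 2
jobs:
  unit_kernel_tests:
    docker:
      # A custom image like the Dockerfile above, plus a database container.
      - image: myorg/drupal8ci:latest
      - image: mariadb:10.1
    steps:
      - checkout
      - run: composer install
      - run: robo job:run-unit-tests
workflows:
  version: 2
  test:
    jobs:
      - unit_kernel_tests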

Intelligent reporting

CircleCI is intelligent at presenting job artifacts. Here are some screenshots:

Failed tests

By storing test results as artifacts, CircleCI can parse them and present them in the browser:

Failed tests

Links to screenshots taken by Behat

By using the Behat Screenshot extension and storing the screenshots as job artifacts, we can see them as a list of links in the browser:

Behat screenshots

Here is what we see when we click on the highlighted link above:

Drupal screenshot

Coding standard violations

CircleCI can parse the Code Sniffer report and present a summary of Drupal coding standard violations:

Coding standards

Test coverage reports

By generating an HTML PHPUnit report and exposing it as an artifact, we can see a link to the report at the CircleCI web interface:

Coverage result

The highlighted link above shows the following report which describes how much of the code is covered by tests:

Coverage report

Running CircleCI jobs locally

CircleCI offers a command line interface for running jobs locally. This is a big time saver as it gives you a chance to test and debug a job locally before pushing your changes.
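For example, once the CLI is installed, something like the following runs a single job from your local configuration (the job name here is illustrative):

circleci build --job run_unit_kernel_tests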

CircleCI command line interface

Ready to take off!

Do you have a Drupal 8 project and want to try Continuous Integration with CircleCI? If so, follow the instructions at the Drupal8CI repository and start writing tests and getting the jobs to pass. If you find issues or add improvements, please either post a comment here or contribute them to the repository. Happy CI-ing!

Acknowledgements

  • Andrew Berry, for teaching me so much about Docker and CircleCI.
  • James Sansbury, for his editorial and technical feedback, plus his Bash-fu.
  • The Draco team at Turner, for allowing me to add continuous integration to their development workflow.
Jan 17 2018

If you are using Acquia Dev Desktop for spinning up Drupal 8 sites quickly (like I suggested in this article), you may run into trouble when trying to use Drush on Drupal version 8.4 and newer.

If you're using Acquia Dev Desktop with Drupal 8.4 and Drush (which is version 8 in the current version of Acquia Dev Desktop) you'll probably get this error:

PHP Fatal error:  Declaration of Drush\Command
\DrushInputAdapter::hasParameterOption() must be compatible with 
Symfony\Component\Console\Input\InputInterface::hasParameterOption($values, $onlyParams = false) 
in /Applications/DevDesktop/tools/vendor/drush/drush/lib/Drush
/Command/DrushInputAdapter.php on line 27

This is because Symfony in Drupal has been updated to ~3.2 (it used to be 2.8.x). Long story short: this breaks Drush 8, and you'll need Drush 9 to execute Drush commands on your Drupal site.

To patch/upgrade Acquia Dev Desktop, you need to update composer.json in the tools folder. To do this on a Mac (the OS environment I'm familiar with), open Terminal and go to:

cd /Applications/DevDesktop/tools

In this folder you'll find the composer.json file that governs which version of Drush Acquia Dev Desktop is using. If you wish, you can inspect this file and change it like so:

nano composer.json
---
{
    "require": {
        "drush/drush": "9.0.0-rc2"
    }
}
---

After this, you can update the installed packages by typing

composer update

And you should be able to use Drush again.
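To confirm that the upgrade took effect, check which version Dev Desktop now uses:

drush --version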

(Drush 9 is compatible with Drupal 7 and Drupal 8.)

Jan 17 2018
By Ganesh | 17-01-2018

Once you have a well-built Drupal site up and running, every website administrator wants to know more about the visitors to their site, their behaviour, and how the website is performing. This information is a solid indicator of the popularity of the website among people and search engines alike.

When we talk about this data, or analytics as the marketers call it, there is one tool which stands tall and wears the crown: Google Analytics. With detailed information on the number of visitors, including where they came from and how much time they spend on the site, Google Analytics reports provide real-time updates. In Drupal, the Google Analytics module, used by almost 400,000 sites, is among the top 20 most popular modules. With its integration, the module allows the use and customization of numerous features to change things up from the usual pattern.

The first thing you need to do is add your Drupal website to your Google Analytics account (assuming you already have one). Log in to your Google Analytics account and navigate to the "Admin" panel to add a new account. After filling in the required details, click the "Get Tracking ID" button to receive the ID and the script to be embedded into your Drupal website.

Google Analytics New Account
Google Analytics New Account

Once done adding the account, log in to your Drupal website and go to the Google Analytics module page to install and enable the module; check your site's Drupal version and pick the matching version of the module. Now take the "Tracking ID" from Google Analytics and paste it into your Drupal site: navigate to Configuration -> Google Analytics and paste the ID into the "Web Property ID" field. The setup is complete.

Google Analytics Tracking ID
Google Analytics Tracking ID

While the real power of Google Analytics lies in the data it provides, it can sometimes be quite overwhelming: so much data in one place, with the application developers adding more and more analytics, when sometimes all we need is a simple report or a more specific one. After months of scrutinizing data I'd had enough, and I finally worked up the nerve to switch over to custom reports. Oh boy, was I in for some great features and a lot more!

Why Go Custom?

Because I'm in love with it! Simply put: why bother scratching your head over the standard GA reports, digging for the information you require, when you can create reports that contain exactly that information?

While "what" custom reports should you create is a big question in mind, you should know that these reports are, well, "CUSTOM" and are specific to business needs and requirements.

Before we jump right into the reports, let us go over the two main concepts to keep in mind while creating a Google analytics custom report.

  • Dimensions: characteristics of the visitors to your site. For example: source, country, device, etc.
  • Metrics: quantitative measurements of visitor interaction with your site. For example: page views, bounce rate, unique purchases, etc.

Building Your Report

To begin, log into your Google Analytics account and navigate to Customization. Click "New Custom Report" to start building the report you require.

New Custom Report
New Custom Report

The first thing to do is name your custom report, which is pretty straightforward using the "Title" field.

Custom Report - Title
Custom Report - Title

Next up is the report tab section, which lets you create more than one tab of data in a single report so you can break your report into smaller chunks. Once you have decided on the number of tabs, you need to select your report type. There are three options to pick from: Explorer, Flat Table and Map Overlay. Explorer reports are similar in layout to the older version of Analytics, with a trend chart and some data tables. "Flat Table" is a basic table report that allows you to analyze two dimensions side by side. "Map Overlay" can be used for precise location-based data or stats.

The metric groups and dimension drill-downs let you select what data you want in your report and how you want it broken down. Google Analytics allows a maximum of 25 metrics in a flat table report, and the metrics selected here will appear as data columns in the final custom report. The metrics can be anything from visits, average time per visit and bounce rate to something more specific like conversions or goal completions.

Metrics & Dimensions
Metrics & Dimensions

The dimension drill-down, shown as individual rows in the final report, allows you to organize your metrics and further break down your report data.

The "Filters" section allows you to limit your final report and tell Google which data set you want to include. For example, if you create a report with the country dimension, you can add a filter to show data from certain countries only. All you have to do is use "Include" with an "Exact Match" on the country you want.

Filters
Filters

Now that you have your Google Analytics report for your Drupal site, it is time to review your "masterpiece" and start analyzing the data.

From basic tracking to complex behaviour, the Google Analytics module provides a one-stop solution using the best of Drupal, enabling website administrators to zero in on insights that can drive better and smarter business decisions. With the integration of the module and custom reports, you will still be pulling up important insights on revenue, conversions and so on, but you just won't be spending hours on report after report.

Jan 17 2018

Altogether, five pieces were written about ambitious Drupal experiences, each focused on a single aspect of what constitutes an ambitious digital experience. In the first part of the series, I did my best to explain what ambitious digital experiences mean. The main conclusion, at least in my opinion, is that experience is much more than mere content, and that ambitious digital experiences should come as naturally and intuitively as possible and be focused on the customers, the users.

Provide valuable and unique digital experiences

I have tried to define the term customer experience as an element of digital experience, the latter being a much broader term. Customer experiences are definitely the ones that drive businesses forward, but it seems that by focusing only on them we miss out on something, especially if we take into account the content, scale and volume that customer experience management can't cover. Digital experience management can cope with managing customer experiences and adds some extras to the equation: IoT, Industry 4.0, VR and AR being just a few of those that will have a lasting effect on the digital experiences of the not so distant future. As a matter of fact, some of them are already here.

Closely connected to the ever-expanding area of digital experiences is the question of what to integrate into your existing (or new) online presence, and why. We leave footprints and fingerprints behind every time we engage in the digital arena. We are all users in that arena, potential customers looking for items, products, solutions. And as in the B2C environment, the same can and should be applied to the B2B environment. Marketing automation tools, CRM systems and ERPs are the fundamentals.

 

What are the opportunities in Industry 4.0?

When considering Industry 4.0, the integrations should go even deeper. Autonomous robots that can communicate with each other and with people are already a reality, and the same goes for the collection and evaluation of data from many different sources: production systems on one side and enterprise and customer management systems on the other. Simulation and horizontal and vertical system integration are also building blocks of Industry 4.0. Today, companies, suppliers and customers are rarely closely linked. Companies, departments, functions and capabilities will become more cohesive, and evolving universal data-integration networks will enable truly automated value chains. More and more systems and applications will be deployed to the cloud, and production systems will benefit from data-driven services with reaction times of a few milliseconds. Due to increasing interconnectedness, the need to protect vital industrial systems and production lines from cybersecurity threats will increase dramatically. Augmented reality will also see broader use in the coming years: employees, users and buyers will be given real-time information through AR to improve decision-making and all work-related procedures.


 

Not to forget about new regulations on data protection and privacy

But as we all become more and more a part of interconnected digital worlds and agendas, the need to safeguard the personal data of every individual becomes more and more important. The amount of data travelling through optic fibres every second will be enormous, and all of it should be handled with the utmost care and the highest standards of security. When companies fully embrace the idea of Industry 4.0, the data will be stored in the cloud and, when making its way to users, will have to be well protected from possible cybersecurity threats, as it should be. And because personal data will be stored in the same systems, it will have to be treated to the same standards.

To sum it up, ambitious digital experiences are definitely the way to go. I covered some of the aspects that such experiences should entail; I'm under no illusion that I covered all of them. And I do believe that the Drupal community is more than capable of addressing any of them: those which are already here and we know about, and those hiding just around the corner.

We at AGILEDROP have dealt with many aspects of ambitious digital experiences in past years and collected experience which we are willing to pass along. Get in touch if you would like to put that knowledge to use in your projects.

Jan 16 2018

The first post was about setting up the Drupal and React environment; the second explained the concepts of components, routes, fetching data from JSON API, and localization.

This one focuses on translation issues and various pitfalls that you might encounter while building with React and Drupal. Knowing this, prefer the demos and documentation that have been built around Contenta and the Drupal JSON API if you are looking for a more general introduction.

Following the second post example, here is the content model for an audioguide used by a film museum:

  • Itinerary (Drupal vocabulary) and Stop (Drupal content type):
    an itinerary has several "stops", that are obviously places to stop and listen to a track of an audioguide. We define a term reference to the itinerary vocabulary on the stop content type.
    Example: "The Maltese Falcon" and "You Only Live Once" stops have a reference to the "Adult" itinerary.
  • All Drupal entities (nodes and terms) are fully translatable and exposed with JSON API.

We will discuss the following points:

  • Filter stops by itinerary (e.g. adult, child, ...), so we can demonstrate taxonomy term filtering.
  • Custom sort, using the Weight module.
  • Full multilingual support: localization and internationalization with and without node language fallback.
  • Get images, with image styles.
  • Fetch data on the route or the component.
  • How to deploy in production.

Documentation and code example

An update of the demo repositories (React and Drupal ones) is on its way.
It will contain a readme with a summary of the steps for getting started on a local dev environment.
The React repo will also describe the components used in the application and provide various examples that are a bit long to describe in a single post, like: searching, fetching and displaying nested data structures like Paragraphs, ...

Boilerplate update

The code from the previous article will also be updated to the latest release of the React Starter Kit boilerplate that now includes React 16.2.0.

Languages

Localization: translate the UI with React Intl

Language setup

Here is an update of the second part of the tutorial for the React 16.2.0 React Intl branch of the boilerplate.

Edit the src/config.js file

module.exports = {
  // default locale is the first one
  locales: [
    /* @intl-code-template '${lang}-${COUNTRY}', */
    'en-US',
    'fr-BE',
    /* @intl-code-template-end */
  ],
}

Edit the src/components/LanguageSwitcher/LanguageSwitcher.js

const localeDict = {
  /* @intl-code-template '${lang}-${COUNTRY}': '${Name}', */
  'en-US': 'English',
  'fr-BE': 'Français',
  /* @intl-code-template-end */
};

Add your locale in src/client.js

import en from 'react-intl/locale-data/en';
import fr from 'react-intl/locale-data/fr';
(...)
[en, fr].forEach(addLocaleData);

If you want to change the default language, edit also the src/actions/intl.js

defaultLocale: 'fr-BE',

Run yarn build after having made your changes to languages, so the messages defined in your components will be extracted in one file per language in /src/messages.
 

Messages

You can define translatable messages in your components.
The defaultMessage and (optionally) the description will be the ones that are translatable after the extraction.

// Above the class definition. 
const messages = defineMessages({
  search_description: {
    id: 'search.description',
    defaultMessage: 'Search using stop id or title.',
    description: 'Description for the search input',
  },
  search_placeholder: {
    id: 'search.placeholder',
    defaultMessage: 'Search...',
    description: 'Search input placeholder',
  },
});
(...)
// In the render() method.
<FormattedMessage {...messages.search_description} />


The tricky part for placeholders

But if you need to define a placeholder (e.g. for the search input field) the <FormattedMessage /> does not fit.
You will have to use injectIntl.

import { defineMessages, injectIntl, intlShape } from 'react-intl';
(...)
static propTypes = {
  intl: intlShape.isRequired,
}
(...)
export default injectIntl(withStyles(s)(SearchBar));

Then you can use React Intl for plain strings such as HTML attributes; formatMessage comes from the injected intl prop.

const { formatMessage } = this.props.intl;
(...)
<input
  placeholder={formatMessage(messages.search_placeholder)}
  value={this.props.filterText}
  onChange={this.handleFilterTextChange}
/>

Here is an issue that tells a bit more about that.

Internationalization: get translated content with JSON API from Drupal

Filtering content by language can be done with the Drupal language id on the path. It is still a workaround that is explained by @e0ipso in this video about content translation from the JSON API Drupal YouTube channel.

// JSON API request for translated stop nodes.
const languageId = 'fr';
const nodes = `${JSON_API_URL}/${languageId}/jsonapi/node/stop`;
But wait... I do not want language fallback

The request above will produce language fallback: if your Drupal node is not translated, the source language will be provided by default.
If your requirements do not allow this behaviour, add a filter with the language id: ?filter[langcode][value]=${languageId}

// JSON API request for translated stop nodes, with no language fallback.
const languageId = 'fr';
const nodes = `${JSON_API_URL}/${languageId}/jsonapi/node/stop?filter[langcode][value]=${languageId}`;

JSON API

To test JSON API requests, you can use your IDE, Chrome extensions (Postman, ...), or just use Firefox Developer Edition, it is fast and formats JSON + prints headers out of the box.

Firefox developer edition JSON

Before we start, here is a discussion about using fetch.

Fetch

Fetch returns a Promise, here are some ways to use it.

// Chaining with then
const data = await fetch(endpoint).then(response =>
  response.json(),
);
// Declare the method explicitly and use await, in two steps.
const response = await fetch(endpoint, { method: 'GET' });
const data = await response.json();

We have basically two options to fetch the data from JSON API: on the router action or on the component.

1. Fetch on the router

A first naive approach, described in the second part of this series, was to fetch on the router and pass the data to the component.

I was basically happy with that, before reading about this performance issue:

since the routes are universal, it will first run on the server and then run again on the client.
This is basically one of the major points of using an isomorphic architecture: to prefetch the data on the server.

See this issue comment on React Starter Kit

So, this should lead to using Redux by default: you have to define your actions and reducers to couple the stored data to React components.
There are loads of good tutorials for that (and here is one of them), but I didn't want the overhead of adding Redux now.

It also appears that if you want basic behaviours like error handling for your data while fetching on the route, you also need Redux, or you have to work with throw on the router and try/catch in the component.

Another point was the propTypes definition for each response.

So I reconsidered fetching on the component itself.
 

2. Fetch on the component

Fetching on the component turned out to be a simplification, except when changing the language.
When fetching from the route, the JSON API request was called automatically on language change. Now that we are using componentDidMount to fetch, the request is not executed again while using the language switcher.

componentDidMount() {
  const endpoint = ItineraryListPage.getItinerariesEndpoint(
    this.props.languageId,
  );
  this.fetchItineraries(endpoint);
}

So we have to add a componentWillReceiveProps definition.

componentWillReceiveProps(nextProps) {
  if (nextProps.languageId !== this.props.languageId) {
    const endpoint = ItineraryListPage.getItinerariesEndpoint(nextProps.languageId);
    this.fetchItineraries(endpoint);
  }
}

In the updated component, we have simplified the route and added a fetch loading state and error handlers. Also, note that the usage of propTypes and state now appears much more readable.
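A minimal sketch of the fetch method with loading and error states (state and method names follow the component used in this series; the complete component lives in the demo repo):

async fetchItineraries(endpoint) {
  // Flag the request so render() can show a spinner or an error message.
  this.setState({ loading: true, error: null });
  try {
    const response = await fetch(endpoint);
    if (!response.ok) {
      throw new Error(`HTTP error ${response.status}`);
    }
    const itineraries = await response.json();
    this.setState({ itineraries, loading: false });
  } catch (error) {
    this.setState({ error, loading: false });
  }
}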

Then, it will still be possible to add progressive enhancement with Redux later on, which looks like a better development approach.

 

Sorting

The default sort will often not be what you expect.
In some cases, a custom sort should be provided instead of an alphabetical one.
This can be easily achieved with the Drupal Weight module and an extra parameter passed to JSON API: ?sort=field_weight.

// Fetch sorted node stops for this itinerary.
const itineraryStopNodesEndpoint = `${JSON_API_URL}/${languageId}/jsonapi/node/stop?sort=field_weight&filter[field_itinerary.uuid][value]=${this.props.itineraryId}`;

Filtering examples

Filtering by term can easily be done with filter[field_itinerary.uuid][value]=${this.props.itineraryId}

Here is how to filter nodes with a term: get the stops for an itinerary.

// Fetch the translated node stops for this itinerary.
const itineraryStopNodesEndpoint = `${JSON_API_URL}/${languageId}/jsonapi/node/stop?sort=field_weight&filter[field_itinerary.uuid][value]=${this.props.itineraryId}`;

And here is the reverse one: stops that are not part of this itinerary.

// Fetch all the available translated node stops that are not part of the
// current itinerary.
const stopNodesEndpoint = `${JSON_API_URL}/${languageId}/jsonapi/node/stop?sort=field_weight&filter[not-current-itinerary][condition][path]=field_itinerary.uuid&filter[not-current-itinerary][condition][operator]=NOT%20IN&filter[not-current-itinerary][condition][value][]=${this.props.itineraryId}`;

You can continue reading about filtering, sorting and paginating. For more advanced filtering, have a look at JSON API Fancy Filters.

Include images

Don't do this

async getImageUrl() {
    // This issues one extra HTTP request per image (an N+1 pattern).
    const imageUUID = this.props.itinerary.relationships.field_image.data.id;
    const fileEndpoint = `${JSON_API_URL}/jsonapi/file/file/${imageUUID}`;
    const imageResponse = await fetch(fileEndpoint).then(response =>
      response.json(),
    );
    if (imageResponse) {
      const url = `${JSON_API_URL}/${imageResponse.data.attributes.url}`;
      this.setState({ imageUrl: url });
    }
  }

But prefer includes

const itineraryStopNodesWithImages = `${JSON_API_URL}/${languageId}/jsonapi/node/stop?filter[field_itinerary.uuid][value]=${this.props.itineraryId}&include=field_image`;

 

Image styles

JSON API comes with zero configuration. It means that when you include images, the original image will be provided: you know, that 5MB file that could be living on your server.

Image styles are an exception to the zero-configuration rule, and it totally makes sense.

Download and enable the Consumer Image Styles contrib module on your Drupal site then go to Configuration > Consumers and add a Consumer.

Drupal add Consumer for the React app

Then you can call your image style with your consumer id.

const itineraryStopNodesWithImageStyles = `${JSON_API_URL}/${languageId}/jsonapi/node/stop?_consumer_id=${CONSUMER_ID}&sort=field_weight&filter[field_itinerary.uuid][value]=${this.props.itineraryId}&include=field_image`;

The image styles will then be available in the meta.derivatives:

Consumer Image Style

For more information about image styles, read this article from Lullabot.


Getting parent terms only

Let's say that we have the following structure on our content model.

  • Itineraries may or may not have child itineraries; we use the term relation (hierarchy) for that.
    Example: The "Adult" itinerary has the "Film Noir" and "Zombie films" sub-itineraries. The "Child" itinerary has no child itineraries.
  • A stop can belong to several itineraries, so the term reference has a Drupal multiplicity of 'unlimited'.
    Example: "The Maltese Falcon" belongs to the "Adult" and "Film Noir" itineraries.

Based on that, we want to display the parent itinerary terms on a first view.

Depending on your Drupal version, the parents may be empty in your JSON API response; see Parent is always empty for taxonomy terms and Make $term->parent behave like any other entity reference field, to fix REST support and de-customize its Views integration.

This is far from ideal (it introduces redundancy), but as a temporary workaround, an "is parent" boolean field can be added to the Itinerary vocabulary.
The request for parent itineraries then uses filter[field_is_parent][value]=1 (note the 1 as true).

${JSON_API_URL}/${languageId}/jsonapi/taxonomy_term/itinerary?filter[field_is_parent][value]=1&sort=weight&include=field_image

Displaying optional child itineraries with propTypes and defaultProps

On a second view, e.g. while getting the stops for an itinerary, we may want to group the stops by child itinerary.
In our content model, an itinerary can have 0..* child itineraries, so we need to tell React that they are optional and that JSON API will not always return values for the childItineraries prop.

defaultProps are there for that purpose.

  static propTypes = {
    childItineraries: PropTypes.shape({
      data: PropTypes.arrayOf(
        PropTypes.shape({
          id: PropTypes.string.isRequired,
        }).isRequired,
      ).isRequired,
      included: PropTypes.arrayOf(
        PropTypes.shape({
          id: PropTypes.string.isRequired,
        }).isRequired,
      ).isRequired,
    }),
  };

  static defaultProps = {
    childItineraries: null,
  };

Read more on Typechecking With PropTypes from the official React documentation.

Deploy in production

If you are using the same production server that hosts your Drupal site to serve your web app, the app must be reachable through the same ports (80 or 443) that Apache or Nginx already occupy.

We will use Apache here, as a reverse proxy.

1. Install and test Node.js

Install the NodeSource PPA

cd ~
curl -sL https://deb.nodesource.com/setup_6.x -o nodesource_setup.sh
sudo bash nodesource_setup.sh

Install node, npm and build-essential

sudo apt-get install nodejs
sudo apt-get install build-essential

Test node from the same server

nano hello.js

#!/usr/bin/env nodejs
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(8080, 'localhost');
console.log('Server running at http://localhost:8080/');

chmod +x ./hello.js
./hello.js
curl http://localhost:8080

The result of the curl command should be the 'Hello World' response from your server.

Test from another server

If you want to test your app from the outside, just remember to open your port (here 8080).

iptables -I INPUT 1 -i eth0 -p tcp --dport 8080 -j ACCEPT

Read more about Iptables.

Also, in this case, let your node server listen on 0.0.0.0 instead of localhost, as described in this issue.

var http = require('http'); 
http.createServer(function (req, res) { 
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
 }
).listen(8080, "0.0.0.0"); 
console.log('Server running at http://0.0.0.0:8080/');

2. Install Yarn

curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
sudo apt-get update && sudo apt-get install yarn

3. Prepare your virtualhost

  • Clone your repository; to keep things organized, I put it in the user directory that is dedicated to the virtualhost.
  • cd into the cloned repo directory, then install dependencies with yarn install
  • Build for production with yarn run build --release

Another option is to use a deployment script, so we do not need yarn in production.
 

4. Manage your application with PM2

PM2 is a production process manager for Node.js. We will use it to run our React application.
Install it globally and make it available on startup.

sudo npm install -g pm2
pm2 startup systemd

Then start your application and check the status.

pm2 start build/server.js
pm2 status

If you need to restart it, after a rebuild.

pm2 restart build/server.js


5. Configure Apache

Modify your Apache vhost configuration as a reverse proxy to the port used by your application (here 3000).
Do the same for the port 443 (https).

<VirtualHost *:80>
  ServerName mysite.org
  ServerAlias www.mysite.org
  DocumentRoot /home/mysite/docroot

  <Proxy *>
    Order deny,allow
    Allow from all
  </Proxy>

  ProxyPreserveHost On
  ProxyRequests Off
  ProxyPass / http://mysite.org:3000/
  ProxyPassReverse / http://mysite.org:3000/
</VirtualHost>
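
For the HTTPS side mentioned above, here is a sketch of the matching vhost, assuming mod_ssl is enabled (sudo a2enmod ssl) and certificates are already in place; the certificate paths are placeholders:

<VirtualHost *:443>
  ServerName mysite.org
  ServerAlias www.mysite.org
  DocumentRoot /home/mysite/docroot

  SSLEngine on
  # Placeholder paths: point these at your real certificate files.
  SSLCertificateFile /etc/ssl/certs/mysite.org.crt
  SSLCertificateKeyFile /etc/ssl/private/mysite.org.key

  ProxyPreserveHost On
  ProxyRequests Off
  ProxyPass / http://mysite.org:3000/
  ProxyPassReverse / http://mysite.org:3000/
</VirtualHost>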

Enable the Apache proxy modules and reload Apache.

sudo a2enmod proxy
sudo a2enmod proxy_http
service apache2 reload

Your app should now be available, yay!

Resources

Related articles

Jan 16 2018
Jan 16

With so many shiny new Drupal 8 modules emerging this year, we were hard pressed to pick our recommendations for 2018. It came down to asking ourselves: which modules are we excited about implementing in 2018… the ones that will make our projects better, faster, smarter, brighter? Read on for our list of Drupal 8 modules we're excited about.

Configuration Split

The Drupal Configuration Split module makes Drupal 8 configuration management more customizable. This means you can set up some configurations that can be edited on the live site, without interfering with your configuration management workflow. Instead of importing and exporting the whole set of a site’s configuration, the module enables you to define sets of configuration to export to different directories that automatically merge again when they are imported.

Content Workflow

If you’ve shied away from implementing complicated workflows in the past, you’ll enjoy how the Content Workflow module makes it easy to set up a simple workflow. This core module enables you to streamline the content publication process by defining states for content (such as draft, unpublished and published) and then manage permissions around these states.

Deploy

The Deploy content staging module makes it easier to stage and preview content for a Drupal site. It’s often used to deploy content from one Drupal site to another. Redesigned for Drupal 8, the new version is based on the Multiversion and Replication modules, making it more efficient and flexible.

Drupal Commerce

The new full release of Drupal Commerce has us very excited to start building ecommerce sites in Drupal 8. Fully rebuilt for Drupal 8, the new Drupal Commerce module doesn’t presume a specific ecommerce business model, enabling developers to customize the module to suit a merchant’s needs.

JSON API

The JSON API module formats your JSON requests and responses to make them compliant with the JSON API Specification. This module is the key to setting up Drupal as a backend so you can implement the front-end with React or your front-end platform of choice.

Schema.org Metatag

Ramp up your SEO with structured data that helps Google categorize and display your pages. The Schema.org Metatag module allows you to add and validate Schema.org structured data as JSON-LD, one of Google’s preferred data formats.

UI Patterns

If you’re looking for a way to implement an ‘atomic design’ in Drupal the Drupal UI Patterns project is a nice option. It consists of six modules that allow you to define and expose UI patterns as Drupal plugins. You can use them as drop-in templates for all entity types — paragraphs, views, field groups and more.

Webform

The Drupal Webform module has a new release candidate for Drupal 8. A ton of work has been put into the module; it’s like a whole form-building application inside your Drupal site. It enables you to quickly build, publish and duplicate webforms on any Drupal 8 website. You can also manage and download submissions, and send confirmations to users.

Which Drupal 8 modules are doing it for you?

We’d love to hear about which Drupal 8 modules your team is excited about. Leave us a comment.

Jan 16 2018
Jan 16

Spoiler alert! If you haven't seen “The Last Jedi” yet, this blog post includes what can be considered a minor spoiler. I've seen the movie a few times now (I saw the original Star Wars movie when I was 7 years old, and I've been hooked ever since), and I've been able to fully indoctrinate at least one of my kids in my love for the series. When we first saw the movie on opening night, there was a line of dialog that resonated with me more than usual - I've been thinking about that line for over a month now and have figured out how to relate my love of Star Wars with my obsession for teaching Drupal. 

"The Greatest Teacher, Failure Is"

There's a point in the movie when Yoda is speaking to another character and utters this line. As a former mechanical/aerospace engineering college adjunct professor and a current Drupal trainer, I've always believed that for a lesson to truly take hold, there has to be a little bit of pain - not physical pain, but rather the kind of pain that comes from doing something incorrectly (often numerous times) before realizing the proper way of doing something that leads to a more satisfying, correct (and often efficient) result. As usual, I didn't have the proper words to describe it - thanks to Yoda, I do now.

As I look back at my eleven years in the Drupal community, I can point to more things than I care to admit that I didn't do correctly the first time. If I narrow that list to technical mistakes, it becomes very clear that many of the mistakes I've made have had a direct impact on the curriculum I've written for our various training classes.

As we gear up to teach Mastering Professional Development Workflows with Pantheon for the second time, allow me to share some of the failures I've had in the past and how they've had a direct result on the curriculum for this 6-week class.

  1. "Everything is a content type" - this is something I learned only by repeatedly designing the information architecture for various sites that ended up not being able to completely fulfill all the project's requirements. Understanding the differences between various kinds of entities is key to building a sustainable site that meets 100% of a project's requirements.
  2. "Core search is fine" - I'm embarrassed to say how late I was to get on board the Search API train. Being able to provide faceted search to clients of all sizes is a huge win.
  3. "I don't need the command line" - looking back at the first half-ish of my Drupal career, I used Drush only when absolutely necessary. Not learning basic command line tools until well into Drupal 7 definitely held me back. With Drupal 8, if you want to be a professional Drupal developer, there is no way to avoid it. Luckily, using command line tools like Composer, Drush, and Drupal Console are not only "the right thing to do", but also save time. 
  4. "MAMP is fine" - I was late to the party in moving my local development environment from MAMP and Acquia Dev Desktop to a Docker-based solution. I had played around a bit with virtualized solutions, but once you get accustomed to a professional-grade, modern, Docker-based solution, you'll never go back.

While I could list additional examples (multi-branch development, configuration management, display modes) of previous failures - or even one or two that I feel like I'm currently failing (test-driven development), the point is that sometimes it is necessary to fail in order to really understand the value of a success. 

DrupalEasy's 6-week live, online Mastering Professional Development Workflows with Pantheon, not coincidentally, addresses the failures listed above. The next session begins on February 27, 2018.  

The next session (our 11th!) of our 12-week, live, online more-introductory-focused Drupal Career Online begins March 26, 2018.
 

Jan 16 2018
Jan 16

There's no such thing as "just a typo."

In Drupal, clients and prospective users see the user interface and documentation first. When we give them demos or when they evaluate the Drupal project, they aren’t just evaluating the code. Grammar, punctuation, readability, and spelling all matter. They give the project credibility. It is just as important to maintain the same high standards with the front-facing side of Drupal as we do with the code.

I have been working with Drupal for about three years, and contributing back to the project for a little less than two.

I have learned quite a bit, but, most importantly, I have come to the conclusion that there is no such thing as “just” a typo, “just” a grammar issue, or “just” a documentation patch; not all patches have to fix code to be important.

Ways to get involved:

Getting involved is easy. You don’t have to be a senior developer to give back. There are multiple ways to contribute to the issue queue. Whether you are a Project Manager, Customer Success Engineer, a novice, or hold a non-technical role - there is something you can do to contribute back to the community. 

"It’s really the Drupal community and not so much the software that makes the Drupal project what it is. So fostering the Drupal community is actually more important than just managing the code base." - Dries Buytaert

Report an issue:

If you come across an issue when using the project or evaluating a module make note of it (or them). In your notes include how to replicate the problem(s). If you have a dedicated community lead on your team, pass on the info; if not, you can report the issue in the project's issue queue. The problem can only be remedied if the community knows about it.

Create a patch:

If you’ve found an issue to work on, the next step is to create a patch. This can be the most difficult step, and there are a few ways to make a patch, but there is plenty of documentation on Drupal.org. Remember there is no such thing as “just” a small patch. Every patch moves the Drupal project closer to perfection!

Help test patches and provide useful feedback:

Testing patches and providing feedback is also a great way to contribute, and it is an important part of the process. You can use the Drupal issue queue, Dreditor, and Simplytest.me to review and test patches. Carefully read the issue summary, spin up a site applying the patch, and test the functionality. Be sure to follow the steps for replication and make notes of any ambiguity.

When testing patches, first and foremost, be polite and give USEFUL constructive feedback. It is really helpful to supply screenshots from before and after the patch was applied.

Ask Questions:

Ask questions. All the time. Every Drupaler was once a beginner, but sometimes they forget the steps it takes to learn the process. Asking for clarification of the issue and steps for replication can help make the testing a little easier.

Attend code sprints and allow yourself to be mentored:

Does any (or all) of this sound like something you’d love to do but need a little help to get started? If so, DrupalCon and many Drupal camps have full day code sprints which provide mentoring for people who want to get involved. Mentors can help set up local testing environments, walk through the patching process, and give insights to basic issue triage. If you are interested in helping others get started, you can also sign up to be a mentor.

It takes a village:

Not everyone who works on open source projects is a coder. Smaller tasks help the less experienced gain confidence and experience, which, in turn, leads to more contributions. A more polished Drupal leads to more job security.

The User Interface and functionality of the project are often the first things that potential users of the project see. If they don’t code themselves, the forward facing project is all that they may ever see. Code is very important, but so are all the other parts. With that in mind, typos, grammar issues, or lack of documentation might “just” turn out to be a problem. Remember, the small patches are just as important as those that address code.

It takes a village!

Additional resources:

At the upcoming Florida Drupal Camp, February 16th - 18th, I will be presenting my session: Dred(itor) the Issue Queue? Don't - It's Simple(lytest) to Git in! I have also been helping to organize and lead the Sunday Code Sprints. Come to the Sunshine State to get a complete rundown of the issue queue process and start collaborating with the Drupal project!

Jan 15 2018
Jan 15

After spending the past year experimenting with promoting paid services, talking about sponsored features, and adding an about section to the Webform module, I learned a lot from my experiments, including the value of asking for forgiveness rather than permission.

"If it's a good idea, go ahead and do it. It is much easier to apologize than it is to get permission."

-- Grace Hopper

Importance of contributing to the Drupal community

Not enough people understand and/or realize the importance of contributing to the Drupal community. My last blog post discussed my hope of finding ways to help sustain my commitment and contribution to the Drupal community and ended by stating…

The question is not should you contribute to open source but how can you contribute.

Convincing people that they need to contribute

The challenge is convincing people and organizations that they need to contribute to Open Source. Funding is an ongoing challenge for the Drupal community. The problem could be that people don't understand the importance and value of contributing back to Open Source.

Nowhere in Drupal's user interface/experience are our community and the Drupal Association promoted and/or acknowledged. Core maintainers are only included in the MAINTAINERS.txt file, which only fellow developers can access. Drupal is not a product that can be sold; we are a community with an association that needs recognition, support, and contributions.

The Drupal Association is the backbone of the community and everyone needs to be a member.

Everyone needs to be a member of the Drupal Association

It’s surprising how many people and organizations asking for support in the Webform module's issue queue are not members of the Drupal Association. As another experiment, I thought of my top 10 list of showcase Drupal websites, and most of them are not members of the Drupal Association. These are large enterprise websites that are not paying a few hundred dollars in annual membership fees to help maintain the software and community that they depend on. And if we can't get a developer or company to pay a small membership fee, how are we going to show them the value of contributing to Drupal and/or funding core or contrib development?

Encouraging individuals and organizations to join the Drupal community

The Webform module now depends on the Contribute module.


Introducing the Contribute module

The Contribute module’s project page provides a complete explanation of the module’s goals and purpose. I wanted to isolate the concept of everyone contributing back to Drupal into a dedicated module because this is not just a Webform specific challenge. I think it’s important to emphasize that other modules can also depend on the Contribute module. Frankly, if some of Drupal's top 40 modules were to add the Contribute module as a dependency, it would clearly acknowledge the overall desire to change the Drupal user's mindset when it comes to contributing back to Drupal. It’s very important for me to point out that the Drupal community is not the few thousand developers and companies who contribute and value Drupal; our community is the one million sites using Drupal.

Preaching to the choir

This blog post is "preaching to the choir" because we are all active and caring members of the Drupal and Open Source community. However, we need to take it a step further. We must look beyond our code and talk to the people who are actually using our software. For Drupal and Open Source to succeed, everyone needs to contribute back to the community - we need to reach out to these users and organizations and communicate that their participation is not only welcomed, but vital and necessary.

Creating code before the discussion

Instead of discussing this idea extensively, I decided to just go for it. At the same time, I have been asking every single person I know to provide feedback. The general consensus is that the Contribute module is not going to offend anyone, but nobody can say whether it will succeed in changing our community’s mindset. I don’t know the answer either, but I do know that I want to ask the question and hear what others have to say, so I started this discussion using my best tool: a thought-out, clean, and simple open source Drupal project called the "Contribute module."

Learn more about the Contribute module

I look forward to hearing your thoughts.


Jan 15 2018
Jan 15

This blog has been re-posted and edited with permission from Dries Buytaert's blog. Please leave your comments on the original post.

Seventeen years ago today, I open-sourced the software behind Drop.org and released Drupal 1.0.0. When Drupal was first founded, Google was in its infancy, the mobile web didn't exist, and JavaScript was a very unpopular word among developers.

Over the course of the past seventeen years, I've witnessed the nature of the web change and countless internet trends come and go. As we celebrate Drupal's birthday, I'm proud to say it's one of the few content management systems that has stayed relevant for this long.

While the course of my career has evolved, Drupal has always remained a constant. It's what inspires me every day, and the impact that Drupal continues to make energizes me. Millions of people around the globe depend on Drupal to deliver their business, mission and purpose. Looking at the Drupal users in the video below gives me goosebumps.

[embedded content]

Drupal's success is not only marked by the organizations it supports, but also by our community that makes the project more than just the software. While there were hurdles in 2017, there were plenty of milestones, too:

  • At least 190,000 sites running Drupal 8, up from 105,000 sites in January 2016 (80% year over year growth)
  • 1,597 stable modules for Drupal 8, up from 810 in January 2016 (95% year over year growth)
  • 4,941 DrupalCon attendees in 2017
  • 41 DrupalCamps held in 16 different countries in the world
  • 7,240 individual code contributors, a 28% increase compared to 2016
  • 889 organizations that contributed code, a 26% increase compared to 2016
  • 13+ million visitors to Drupal.org in 2017
  • 76,374 instance hours for running automated tests (the equivalent of almost 9 years of continuous testing in one year)

Since Drupal 1.0.0 was released, our community's ability to challenge the status quo, embrace evolution and remain resilient has never faltered. 2018 will be a big year for Drupal as we will continue to tackle important initiatives that not only improve Drupal's ease of use and maintenance, but also to propel Drupal into new markets. No matter the challenge, I'm confident that the spirit and passion of our community will continue to grow Drupal for many birthdays to come.

Tonight, we're going to celebrate Drupal's birthday with a warm skillet chocolate chip cookie topped with vanilla ice cream. Drupal loves chocolate! ;-)

Note: The video was created by Acquia, but it is freely available for anyone to use when selling or promoting Drupal.

Jan 15 2018
Jan 15

Managing technical debt is important for the health of all software projects. One way to manage certain types of technical debt is to revisit code, decide whether it’s still relevant to the project, and potentially remove it. Doing so can reduce complexity and the amount of code developers are required to maintain.

To address this we’ve been experimenting with adding simple annotations to code which indicate an “expiry”: a nudge to developers to go and reevaluate whether some bit of code will still be needed at some point in the future. This can be integrated into CI pipelines to fail builds which have outstanding expiry annotations.

Some scenarios where this has proved to be helpful have been:

  • Removing workarounds in CSS to address bugs in web browsers which have since been fixed.
  • Removing uninstalled modules, which were required only for hook_uninstall.
  • Removing code that exists for features which are gradually being superseded, like an organisation gradually migrating content from nodes into a new custom entity.

Here is a real snippet of code we were recently able to delete from a project, based on a bug which was fixed upstream in Firefox. Without an explicit prompt to revisit the code, which had been introduced many months earlier, I don’t believe we would have been able to confidently clean this up.


// @expire Jan 2018
// Fix a bug in firefox which causes all form elements to match the exact size
// specified in the "size" or "cols" attribute. Firefox probably will have
// fixed this bug by now. Test it by removing the following code and visiting
// the contact form at a small screen size. If the elements don't overflow the
// viewport, the bug is fixed.
.form-text__manual-size {
  width: 529px;
  @media (max-width: 598px) {
    width: 100%;
  }
}

The code we've integrated into our CI pipeline to check these expiry annotations simply greps the code base for strings matching the expiry pattern for the last n months' worth of dates:


#!/bin/bash

SEARCH_FORMAT="@expire %s"
DATE_FORMAT="+%b %Y"
DIRS="./app/modules/custom/ ./app/themes/"
SEARCH_LAST_N_MONTHS=4

# Cross-platform date formatting with a month offset.
case `uname` in
  Darwin)
    function date_offset_month() {
      date -v $1m "$DATE_FORMAT";
    }
    ;;
  Linux)
    function date_offset_month() {
      # --date avoids ambiguity with --debug on newer coreutils.
      date --date="$1 month" "$DATE_FORMAT"
    }
    ;;
  *)
esac

for i in $(seq 0 $SEARCH_LAST_N_MONTHS); do
    FORMATTED_DATE=$(date_offset_month -$i)
    SEARCH_STRING=$(printf "$SEARCH_FORMAT" "$FORMATTED_DATE")
    echo "Searching codebase for \"$SEARCH_STRING\"."
    grep -rni "$SEARCH_STRING" $DIRS && exit 1
done

exit 0
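
In our CI pipeline this script simply runs as a build step. For example, a hypothetical GitLab CI job (the script path is an assumption):

# .gitlab-ci.yml (hypothetical)
check-expiry:
  stage: test
  script:
    - ./scripts/check-expiry.sh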

Posted by Sam Becker
Senior Developer

Dated 16 January 2018


Jan 15 2018
Jan 15

It's only taken two years since the release of Drupal 8 for us to get our own site updated... Cobbler's children and all. But finally, we are proud to unveil our shiny new site!

But wait, don't you tell your clients you don't need a new site?

Take a look around... all our old content is here, most of it in the same place it has always been. In fact, we fixed some things that were missing from our last site -- several pages that had broken videos are now fixed. All in all, our site has right around a thousand pages -- and somewhere between 1/4 and 1/2 of our clients use it to pay their invoices, so Commerce is a critical piece. It turns out our site has a lot more going on than many of our clients', with some content going back over 18 years.

We see our content and the website as our biggest asset. Much of our new business comes through our website -- less now than in the past, but it still plays a vital role in helping new visitors get to know us, learn our strengths, and ultimately develop enough trust to become a client.

In the past couple of years, we have added WordPress to our maintenance arsenal, providing regular maintenance, updates, enhancements and improvements. Working with WordPress is easy -- it's like working with a toy; it really does not do much for you. This frees up a web designer to do whatever they want with the look and feel, and the system does not get in the way.

But we're not web designers -- we are developers, system administrators, business consultants, information architects, and we think Drupal 8 is the best system out there. We love working with it -- it's great for marketing people, it's great for developers, it's great for managing data and information, it's great for integrating with other systems, and it's great for the future. So that's what we chose for our site.

A simple example: most WordPress sites we've been seeing have somewhere between a dozen and 50 database tables. This site has over 800 tables (ok yeah, maybe we experiment a bit much) and most of our Drupal sites have somewhere between 100 and 500 database tables. That's just one indication of how much more Drupal is doing for you. Overkill? Maybe, if you just want a blog. But if you're doing e-commerce, customer management, membership management, complex layouts, scheduling, event registration, publishing workflows, you end up with a lot more sophistication under the hood.

Migration

The initial migration was easy. We sucked over all our content very quickly, right from the start. But... that's just getting content into the site. There ends up being tons of issues to resolve going forward. Things like:

  • Converting embedded media to the new Drupal media format
  • Finding the current location of videos that were no longer where they used to be
  • Consolidating tags into our current set we want to make a bit cleaner
  • Customer payment profiles, to continue charging our clients who auto-pay their bills as seamlessly as possible
  • Supporting images/media that were previously migrated into Drupal 7

Part of the complexity of this for us was that our site has gone through many versions. First it was entirely custom PHP. Then it was Joomla. Then it was Drupal 6 -- and we folded in a separate MediaWiki site. Then it was Drupal 7, and we folded in a WordPress site. And without a person dedicated to going through the old content and bringing it up to date, we've just accumulated that content and brought it forward, fixing the issues so it continues to look ok (actually better than it ever has before!)

The more we looked around nearing launch, the more stuff we found that needed fixing, so it was a huge push on the week before we pulled the trigger to get that all squared away.

Commerce

We're really impressed with Drupal Commerce 2, in Drupal 8. It seems very robust, and so much of it "just works" out of the box with very little configuration. We had to create two plugins -- one to support our payment gateway, and one for Washington State tax collection -- and we had to do some tweaks to get migrations from our old Ubercart 7 store for customers, products, and orders -- but otherwise we spent very little time on the Commerce. And we had a new customer successfully make a payment the very next day after we turned it all on!

We did write another custom module to support our payment links. Way back in Ubercart 6, we became early adopters of "cart links", which allows us to send a link to a customer that populates their cart with what they are buying. This sets up our automatic payments for hosting with tax properly calculated, and our monthly maintenance plans. Our bookkeeping system also sends out a payment link that allows people to pay invoices through our site.

We created a custom Product Variation for our invoice payments that makes this process easier, and while we were at it, we simplified our cart links to make them easier to figure out on the fly (just sku and qty, and for invoices, invoice # and amount) and also made them "idempotent" (a computer science term meaning you can click the link over and over again and you get the same result -- it won't keep adding more items to the cart).

Front End

Yes, it's Bootstrap. (Caution: that link is not exactly... kind... or appropriate for work.) Bootstrap seems to be what everybody wants these days; it's a decent-looking theme that we use on almost everything (contributing to that problem!)

The thing is, it looks nice, it works great in mobile, and it lets us focus more on what we want to get across -- our content -- and not spend much time with design. And frankly, that's what we advise for our clients, too -- start with your content, what you're trying to get across, what makes you special. If design is your thing, great! Go out and get a really top notch, custom design. But if it's not... well, just use Bootstrap. And try to use some unique photography, the difference between a great bootstrap site and one that's Meh is just photography.

It's not that we don't think design is important. The key point here is that design should be directed to support some goal you have for the website -- and if your goal is a company brochure or any of a number of different purposes, well, Bootstrap has solved a lot of those basic design problems. Spend your time on your content.

Once you have a really clear idea of what you want your users to do on your site, then bring in a designer to optimize for those goals.

With all that said, we're really excited to have our site current so we can start experimenting with entirely new web UIs. We've particularly been delving into Vue.js, React, and GraphQL, and have some demos we've built and integrated into a couple sites we can't wait to roll out here!

Here's to 2018!

We did launch the site early. There are still layout glitches here that we're quickly fixing in between client work (if you're on Safari, sorry!). But we feel a huge sense of relief to be fully up-to-date on a new site, which gives us so many opportunities to try out new things for ourselves, and then share what works with our clients.

Need a web partner to bring you up to date? Let us know how we can help!


Jan 15 2018
Roy
Jan 15
||||| ||||| ||||| ||

Drupal is 17 years old today. Quite an achievement for a piece of web software to stay around, let alone stay relevant, for such a long time.

I’ve been around for 12 years. Quite a stretch as well. Getting involved in this open source project as a designer has taught and brought me a lot. I put quite a bit into it as well.

I get a lot of benefits from things I learned in Drupal that I can apply in other contexts.

  • Provide rationale for design decisions. So much typing in issue queue comments!
  • Help people see the other’s point of view and then come to a shared decision.
  • Or agree to disagree, then still make a choice.
  • An appreciation and at least a “gist of things” knowledge of the complexity of software development. It helps with clarifying scope, finding a good place to start, and understanding what is difficult and what can be relatively straightforward.
  • Pragmatism over purism
  • Edge cases are important
  • There’s a difference between patience and stubbornness
  • Accessibility, multilingual, extensibility, modularity are hard but worth it
  • If you can’t imagine why somebody would want to do X, it’s always from a lack of imagination on your part
  • There’s always so much more to do
  • There’s only so much you can do
  • When you start taking things personally it’s probably time to take a break
  • It’s amazing what people can get done when driven by a passion for doing a good thing and doing it well.

… and many returns!

15 Jan 2018

Jan 15 2018
Jan 15

Q: "Can you add simple text to a Date Format value in Drupal?" A: YES! [This was news to me]
Preamble:
I was teaching a Drupal 8 class last week and a student asked if we could enter regular text [like the word "date"] into the Date Format field. I tried it and, of course, some of the letters were translated into PHP date elements rather than showing all the letters for the word "date."
Ex: "date : M-d-y" became "15am31America/Indiana/Indianapolis : Jan-15-2018" but what we wanted was "date : Jan-15-2018"

It was at this point that I got the bright idea to ESCAPE the letters by adding a backslash "\" in front of each letter. SURE ENOUGH!! Now I could see each letter instead of the date translation that each letter stood for.
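
To see the same behavior outside the UI, here is a minimal PHP sketch of the escaping (note that a lowercase y yields a two-digit year; use Y for four digits):

<?php
// Unescaped letters are parsed as format characters:
// d = day of month, a = am/pm, t = days in the month, e = timezone identifier.
echo date('date : M-d-y');     // e.g. "15am31America/Indiana/Indianapolis : Jan-15-18"
// A backslash before each letter makes PHP output it literally.
echo date('\d\a\t\e : M-d-y'); // e.g. "date : Jan-15-18"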
So... I made this quick video to share with the world, just in case someone else can benefit from this discovery!

p.s. I'm quite sure MANY have been using this "trick" for years. But I was excited to discover it on my own after a student brought the idea up! :-)

Link to video http://www.youtube.com/watch?v=gJO0t-KkjX0

Jan 15 2018
Jan 15
Create Charts in Drupal 8 with Views

There are many ways to present data to your readers. One example would be a table or a list. Sometimes you may prefer to enhance such data with a graphical chart.

It can ease understanding of large quantities of data. There is a way to make charts in Drupal with the help of the Charts module and Views.

In this tutorial, you will learn the basic usage of the module in combination with the Google Charts library. Let’s start!

Step #1. Install the Charts Module and Enable the Library

install the charts module

  • Click Extend.
  • On the Modules page, enable the Charts module and its Google Charts submodule.
  • Click Install:

click install

  • In your file manager, go to modules/contrib/charts/modules/charts_google. You will see the following folder structure:

folder structure 

  • Use the Terminal of your computer and go to the charts_google folder. There you have to run the following command:

composer install

The use of Composer is mandatory for this step. If you are new to Composer, take a look at the Drupal 8 Composer and Configuration Management class from OSTraining.

You definitely will need Composer in the future, not just for this example. After that, your folder structure should look like this:

folder structure after composer installation

  • Click Configuration > Content authoring > Charts default configuration. 
  • Select Google Charts as the default charting library.
  • Click Save defaults:

select google charts

Step #2. Create a Content Type

We need some kind of structured data to present in our charts. I’m going to compare the population of all the countries in South America. You can, of course, make your own example.

  • Go to Structure > Content types > Add content type and create your content type:

create your content type

  • Add the required fields according to your data:

add required fields

  • At the end, you should have something like this:

the final result

  • Now that you have your content type in place, proceed to create the nodes, i.e. each individual country in this case:

create countries

Step #3. Create the View

  • Click Structure > Views > Add view. 
  • Give your view a proper name. 
  • Choose the content type you want to present to your readers.
  • Choose to create a block with a display format Unformatted list of fields. You won’t be able to proceed in this step if you choose Chart due to a small bug in the logic of the module.
  • I’ve chosen 12 items per block because there are 12 countries I want to show in my chart.
  • Click Save and edit:

click save and edit

  • In the FIELDS section of Views UI click Add.
  • Look for the relevant field for your chart and click Add and configure fields.
  • Leave the defaults and click Apply:

add and configure fields

click apply

  • In the FORMAT section click Unformatted list.
  • Choose Chart.
  • Click Apply:

in the format section click apply

  • Select the Charting library in the drop-down. 
  • Select the title as the label field, if it hasn’t been selected already.
  • Check your relevant data field as provided data.
  • Scroll down and change the Legend position to None.
  • Click Apply. 

Feel free to play with all the configuration options available here to match the chart you want or need.

play with configuration options

  • Save the View.

Step #4. Place Your Block

  • Click Structure > Block layout.
  • Search for the region you want to place the block in.
  • Click Place block.
  • Search your block and click Place block once again.
  • Click Save blocks at the bottom of the screen and take a look at your site.

look at your site

There you have it. Of course, if you change the data in one of your nodes, the chart will adjust itself accordingly. If you want to change the chart display, just change it in the Chart settings of your view. 

You can also give the other charting libraries (C3, Highcharts) a try and see what fits your needs best.
As always, thank you for reading!


About the author

Jorge has lived in Ecuador and Germany. Now he is back in his homeland, Colombia. He spends his time translating from English and German to Spanish. He enjoys playing with Drupal and other Open Source Content Management Systems and technologies.
Jan 15 2018
Jan 15

Using GraphQL in Drupal with Twig instead of React, Vue or the next month's javascript framework might sound like a crazy idea, but I assure you it’s worth thinking about.

Decoupling is all the rage. Javascript frontends with entirely independent backends are state of the art for any self-respecting website right now. But sometimes it’s not worth the cost and the project simply does not justify the full Drupal + React technology stack.

Besides the technological benefits of a JavaScript-based frontend, like load times and responsiveness, there’s another reason why this approach is so popular: it moves control to the frontend, concept and design unit, which matches project workflows a lot better.

Status quo

Traditionally Drupal defines data structures that provide a “standard” rendered output, which then can be adapted by the so-called “theme developer” to meet the clients' requirements. Template overrides, preprocess functions, Display Suite, Panels, Layouts: there are myriad ways to do this, and twice as many opinions determining the right one. When taking over a project, the first thing is to figure out how it was approached and where the rendered information actually comes from. Templates only have variables that are populated during processing or preprocessing and altered in modules or themes, which makes it very hard to reason about the data flow if you were not the person who conceived it in the first place.

There are ideas to improve the situation, but regarding the success of decoupling, perhaps it’s time to approach the problem from a different angle.

Push versus Pull

The current push model used by Drupal scatters responsibilities across modules, preprocess functions and templates. The controller calls the view builder to prepare a “renderable” that is altered 101 times and results in a set of variables that might or might not be required by the current theme’s template.

If we turned this around and let the template define its data requirements (as happens naturally in decoupled projects), we could achieve a much clearer data flow and increase readability and maintainability significantly.

And that’s what the GraphQL Twig module is supposed to do. It allows us to add GraphQL queries to any Twig template, which will be picked up during rendering and used to populate the template with data.

A simple example node.html.twig:
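
The embedded code sample did not survive the repost; here is a minimal sketch of what such a template can look like, based on the description below (the argument and field names are illustrative and depend on your GraphQL schema):

{#graphql
query ($node: String!) {
  node: nodeById(id: $node) {
    title: entityLabel
  }
}
#}
<article>
  <h2>{{ graphql.node.title }}</h2>
</article>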

This is already enough to pull the information we need and render it. Let’s have a look at what this does:

The graphql comment on top will be picked up by the module. When the template is rendered, it tries to match the query's input arguments to the current template variables, runs the GraphQL query and passes the result as a new graphql variable to the template. Simple as that, no preprocessing required. And it works for every theme hook, be it just one complex node type, an exceptional block or page.html.twig.

Imagine we use GraphQL Views to add a contextual GraphQL field similarArticles that uses SOLR to find similar articles for a given node. It could be used immediately like this:
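
Again a hedged sketch, this time assuming the similarArticles field from the GraphQL Views example above (field names are illustrative):

{#graphql
query ($node: String!) {
  node: nodeById(id: $node) {
    similarArticles {
      title: entityLabel
      url: entityUrl {
        path
      }
    }
  }
}
#}
<ul>
  {% for article in graphql.node.similarArticles %}
    <li><a href="{{ article.url.path }}">{{ article.title }}</a></li>
  {% endfor %}
</ul>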

The module even scans included templates for query fragments, so the rendering of the “similar article” teaser could be moved to a separate component:

node.html.twig
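
A sketch of how the parent template could reference a fragment defined by the included teaser template (the fragment name and include path are illustrative):

{#graphql
query ($node: String!) {
  node: nodeById(id: $node) {
    similarArticles {
      ...NodeTeaser
    }
  }
}
#}
{% for article in graphql.node.similarArticles %}
  {% include '@mytheme/node-teaser.twig' with { article: article } %}
{% endfor %}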

node-teaser.twig
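
And the teaser component itself, defining the fragment it needs (again illustrative):

{#graphql
fragment NodeTeaser on Node {
  title: entityLabel
  url: entityUrl {
    path
  }
}
#}
<h3><a href="{{ article.url.path }}">{{ article.title }}</a></h3>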

No preprocessing, a clear path where data flows and true separation of concerns. The backend provides generic data sources (e.g. the similarArticles field) that can be used by the product development team at will. All without the cost of a fully decoupled setup. And the possibility to replace single theme hooks allows us to use existing Drupal rendering when it fits and switch to the pull-model wherever we would have to use complex layout modules or preprocessing functions to meet the requirements of the project.

Future development

There are some ideas for additional features, like mutation based forms and smarter scanning for query fragments, but first and foremost we would love to get feedback and opinions on this whole concept. So if you are interested, join on Slack or GitHub and let us know!

Jan 15 2018
Ana
Jan 15

You have already seen which Drupal blogs were trending in the previous month, and now it is time to look at all the blog posts we wrote. Here are the blog topics we covered in December.

The first blog post in December was Why Drupal is the most secure CMS. It explains and shows the reasons why Drupal is a secure CMS even though many people doubt that due to being open source.

The second was In the beginning, Drupal had an ambition. We talked about Dries keynote from DrupalCon Vienna and his statement that Drupal is for ambitious digital experiences. What are three critical ingredients for ambitious websites? In this blog post, we are defining all three of them.


The third blog post is an interview with our Managing director Marko Bahor, who tells us about his beginnings with AGILEDROP, what are his responsibilities now and who he is outside work.

Let's continue with the blog post Do not underestimate the difference between CEM & DEM from Ales, which focuses on some of the specific elements of digital experience we can call ambitious. It also covers channels of communication, since presence in the digital arena is not enough anymore.

Fifth in a row is What makes Drupal SEO friendly. It shows some examples and some of the reasons why Drupal is at the very top of SEO-friendly content management systems.


The sixth blog post is What do developers joining Drupal need to know before they start? It explains what skills a developer needs to enter the Drupal community, which tools they need to be familiar with, and what they should know before starting.

Seventh is Drive greater business value with deeper integrations. This blog post answers the questions of what possibilities for integration with Drupal we have, what there is to integrate, and why. It also provides us with a case study that serves as a great example of what a great digital experience means.

Next one is Why you should exceed your user expectations. User expectations can be very tricky sometimes, which is why you have to fully exploit the possibilities they offer.

And the last one from December is an interview with Boštjan, our Development director. We talked about his responsibilities in the company, what his daily and weekly tasks are, and who he is outside of the company.

Those were our blog posts from December. Looking forward to having you as readers in 2018!
 

Jan 13 2018
Jan 13
Matt and Mike discuss the ins and outs of the Out of the Box initiative, which endeavors to showcase Drupal's power "out of the box." They are joined by some of the developers from the Out of the Box initiative.
Jan 12 2018
Jan 12

As far as I know, there's nothing (yet) for triggering an arbitrary event. The complication is that every event uses a unique event class, whose constructor requires specific things passing, such as entities pertaining to the event.

Today I wanted to test the emails that Commerce sends when an order completes, and to avoid having to keep buying a product and sending it through checkout, I figured I'd mock the event object with Prophecy, mocking the methods that the OrderReceiptSubscriber calls (this is the class that does the sending of the order emails). Prophecy is a unit testing tool, but its objects can be created outside of PHPUnit quite easily.

Here's my quick code:

  // Load the order the mocked event should return. ORDER_ID is a placeholder.
  $order = entity_load('commerce_order', ORDER_ID);

  // Mock the event class, stubbing only the method the subscriber calls.
  $prophet = new \Prophecy\Prophet;
  $event = $prophet->prophesize('Drupal\state_machine\Event\WorkflowTransitionEvent');

  $event->getEntity()->willReturn($order);

  // Call the order receipt subscriber directly with the mocked event.
  $subscriber = \Drupal::service('commerce_order.order_receipt_subscriber');

  $subscriber->sendOrderReceipt($event->reveal());

Could some sort of generic tool be created for triggering any event in Drupal? Perhaps. We could use reflection to detect the methods on the event class, but at some point we need some real data for the event listeners to do something with. Here, I needed to load a specific order entity and to know which method on the event class returns it. For another event, I'd need some completely different entities and different methods.

We could maybe detect the type that each event method returns (by sniffing in the docblock... once we go full PHP 7, we could use reflection on the return type), and then present an admin UI that shows a form element for each method, allowing you to enter an entity ID or a scalar value.

Still, you'd need to look at the code you want to run, the event listener, to know which of those you'd actually want to fill in.

Would it save more time than cobbling together code like the above? Only if you multiply it by a sufficiently large number of developers, as is the case with many developer tools.

It's the sort of idea I might have tinkered with back in the days when I had time. As it is, I'm just going to throw this idea out in the open.

Jan 12 2018
xjm
Jan 12

Drupal 8.5.0, the next planned minor release of Drupal 8, is scheduled for Wednesday, March 7, 2018. Minor releases include new features, usability improvements, and backwards-compatible API improvements. Here's what this means now for core patches.

The goal of the alpha phase is to begin the preparation of the minor release and provide a testing target for theme or module developers and site owners. Alpha releases include most of the new features, API additions, and disruptive changes that will be in the upcoming minor version.

Drupal 8.5.0-alpha1 will be released the week of January 17

In preparation for the minor release, Drupal 8.5.x will enter the alpha phase the week of January 17, 2018. Core developers should plan to complete changes that are only allowed in minor releases prior to the alpha release. (More information on alpha and beta releases.)

  • Developers and site owners can begin testing the alpha next week.

  • The 8.6.x branch of core has been created, and future feature and API additions will be targeted against that branch instead of 8.5.x. All outstanding issues filed against 8.5.x will be automatically migrated to 8.6.

  • All issues filed against 8.4.x will then be migrated to 8.5.x, and subsequent bug reports should be targeted against the 8.5.x branch.

  • During the alpha phase, core issues will be committed according to the following policy:

    1. Most issues that are allowed for patch releases will be committed to 8.5.x and 8.6.x.

    2. Drupal 8.4.x will receive only critical bugfixes in preparation for its final patch release window. (Drupal 8.3.x and older versions are not supported anymore and changes are not made to those branches.)

    3. Most issues that are only allowed in minor releases will be committed to 8.6.x only. A few strategic issues may be backported to 8.5.x, but only at committer discretion after the issue is fixed in 8.6.x (so leave them set to 8.6.x unless you are a committer), and only up until the beta deadline.

Drupal 8.5.0-beta1 will be released the week of February 7

Roughly two weeks after the alpha release, the first beta release will be created. All the restrictions of the alpha release apply to beta releases as well. The release of the first beta is a firm deadline for all feature and API additions. Even if an issue is pending in the Reviewed & Tested by the Community (RTBC) queue when the commit freeze for the beta begins, it will be committed to the next minor release only.

The release candidate phase will begin the week of February 21, and we will post further details at that time. See the summarized key dates in the release cycle, allowed changes during the Drupal 8 release cycle, and Drupal 8 backwards compatibility and internal API policy for more information.

Jan 12 2018
Jan 12

This blog has been re-posted and edited with permission from Dries Buytaert's blog. Please leave your comments on the original post.

In this post, I'm providing some guidance on how and when to decouple Drupal.

Almost two years ago, I had written a blog post called "How should you decouple Drupal?". Many people have found the flowchart in that post to be useful in their decision-making on how to approach their Drupal architectures. Since that point, Drupal, its community, and the surrounding market have evolved, and the original flowchart needs a big update.

Drupal's API-first initiative has introduced new capabilities, and we've seen the advent of the Waterwheel ecosystem and API-first distributions like Reservoir, Headless Lightning, and Contenta. More developers both inside and outside the Drupal community are experimenting with Node.js and adopting fully decoupled architectures.

Let's start with the new flowchart in full:

How to Decouple Drupal in 2018 | Flowchart in Full

All the ways to decouple Drupal

The traditional approach to Drupal architecture, also referred to as coupled Drupal, is a monolithic implementation where Drupal maintains control over all front-end and back-end concerns. This is Drupal as we've known it — ideal for traditional websites. If you're a content creator, keeping Drupal in its coupled form is the optimal approach, especially if you want to achieve a fast time to market without as much reliance on front-end developers. But traditional Drupal 8 also remains a great approach for developers who love Drupal 8 and want it to own the entire stack.

A second approach, progressively decoupled Drupal, offers an approach that strikes a balance between editorial needs like layout management and developer desires to use more JavaScript, by interpolating a JavaScript framework into the Drupal front end. Progressive decoupling is in fact a spectrum, whether it is Drupal only rendering the page's shell and populating initial data, or JavaScript only controlling explicitly delineated sections of the page. Progressively decoupled Drupal hasn't taken the world by storm, likely because it's a mixture of both JavaScript and PHP and doesn't take advantage of server-side rendering via Node.js. Nonetheless, it's an attractive approach because it strikes compromises that offer features important to both editors and developers.

Last but not least, fully decoupled Drupal has gained more attention in recent years as the growth of JavaScript continues with no signs of slowing down. This involves a complete separation of concerns between the structure of your content and its presentation. In short, it's like treating your web experience as just another application that needs to be served content. Even though it results in a loss of some out-of-the-box CMS functionality such as in-place editing or content preview, it's been popular because of the freedom and control it offers front-end developers.

What do you intend to build?

How to Decouple Drupal in 2018 | What do you intend to build?

The most important question to ask is what you are trying to build.

  1. If your plan is to create a single standalone website or web application, decoupling Drupal may or may not be the right choice based on the must-have features your developers and editors are asking for.
  2. If your plan is to create multiple experiences (including web, native mobile, IoT, etc.), you can use Drupal to provide web service APIs that serve content to other experiences, either as (a) a content repository with no public-facing component or (b) a traditional website that is also a content repository at the same time.

Ultimately, your needs will determine the usefulness of decoupled Drupal for your use case. There is no technical reason to decouple if you're building a standalone website that needs editorial capabilities, but that doesn't mean people don't prefer to decouple because of their preference for JavaScript over PHP. Nonetheless, you need to pay close attention to the needs of your editors and ensure you aren't removing crucial features by using a decoupled approach. By the same token, you can't avoid decoupling Drupal if you're using it as a content repository for IoT or native applications. The next part of the flowchart will help you weigh those trade-offs.

Today, Drupal makes it much easier to build applications consuming decoupled Drupal. Even if you're using Drupal as a content repository to serve content to other applications, well-understood specifications like JSON API, GraphQL, OpenAPI, and CouchDB significantly lower its learning curve and open the door to tooling ecosystems provided by the communities who wrote those standards. In addition, there are now API-first distributions optimized to serve as content repositories and SDKs like Waterwheel.js that help developers "speak" Drupal.

Are there things you can't live without?

How to Decouple Drupal in 2018 | What can't you live without?

Perhaps most critical to any decision to decouple Drupal is the must-have feature set desired for both editors and developers. In order to determine whether you should use decoupled Drupal, it's important to isolate which features are most valuable for your editors and developers. Unfortunately, there are no black-and-white answers here; every project will have to weigh the different pros and cons.

For example, many marketing teams choose a CMS because they want to create landing pages, and a CMS gives them the ability to lay out content on a page, quickly reorganize a page and more. The ability to do all this without the aid of a developer can make or break a CMS in marketers' eyes. Similarly, many digital marketers value the option to edit content in the context of its preview and to do so across various workflow states. These kinds of features typically get lost in a fully decoupled setting where Drupal does not exert control over the front end.

On the other hand, the need for control over the visual presentation of content can hinder developers who want to craft nuanced interactions or build user experiences in a particular way. Moreover, developer teams often want to use the latest and greatest technologies, and JavaScript is no exception. Nowadays, more JavaScript developers are including modern techniques, like server-side rendering and ES6 transpilation, in their toolboxes, and this is something decision-makers should take into account as well.

How you reconcile this tension between developers' needs and editors' requirements will dictate which approach you choose. For teams that have an entirely editorial focus and lack developer resources — or whose needs are focused on the ability to edit, place, and preview content in context — decoupling Drupal will remove all of the critical linkages within Drupal that allow editors to make such visual changes. But for teams with developers itching to have more flexibility and who don't need to cater to editors or marketers, fully decoupled Drupal can be freeing and allow developers to explore new paradigms in the industry — with the caveat that many of those features that editors value are now unavailable.

What will the future hold?

In the future, and in light of the rapid evolution of decoupled Drupal, my hope is that Drupal keeps shrinking the gap between developers and editors. After all, this was the original goal of the CMS in the first place: to help content authors write and assemble their own websites. Drupal's history has always been a balancing act between editorial needs and developers' needs, even as the number of experiences driven by Drupal grows.

I believe the next big hurdle is how to begin enabling marketers to administer all of the other channels appearing now and in the future with as much ease as they manage websites in Drupal today. In an ideal future, a content creator can build a content model once, preview content on every channel, and use familiar tools to edit and place content, regardless of whether the channel in question is mobile, chatbots, digital signs, or even augmented reality.

Today, developers are beginning to use Drupal not just as a content repository for their various applications but also as a means to create custom editorial interfaces. It's my hope that we'll see more experimentation around conceiving new editorial interfaces that help give content creators the control they need over a growing number of channels. At that point, I'm sure we'll need another new flowchart.

Conclusion

Thankfully, Drupal is in the right place at the right time. We've anticipated the new world of decoupled CMS architectures with web services in Drupal 8 and older contributed modules. More recently, API-first distributions, SDKs, and even reference applications in Ember and React are giving developers who have never heard of Drupal the tools to interact with it in unprecedented ways.

Unlike many other content management systems, old and new, Drupal provides a spectrum of architectural possibilities tuned to the diverse needs of different organizations. This flexibility between fully decoupling Drupal, progressively decoupling it, and traditional Drupal — in addition to each solution's proven robustness in the wild — gives teams the ability to make an educated decision about the best approach for them. This optionality sets Drupal apart from new headless content management systems and most SaaS platforms, and it also shows Drupal's maturity as a decoupled CMS over WordPress. In other words, it doesn't matter what the team looks like or what the project's requirements are; Drupal has the answer.

Special thanks to Preston So for contributions to this blog post and to Alex Bronstein, Angie Byron, Gabe Sullice, Samuel Mortenson, Ted Bowman and Wim Leers for their feedback during the writing process.

Jan 12 2018
Jan 12

Caching is a popular technique for optimizing the performance of a website. It is a process that stores web data (HTML, CSS, images) in some easily accessible space. A cache entry also carries detailed information about the validity of a resource: how long the resource is valid and when it should be considered stale. This information can be stored at every level, from the origin server through intermediate proxies to the browser.

Here you will learn what the key/value pairs in the header mean and how cacheability metadata works in Drupal 8.

Every request and response carries an HTTP header, which provides additional information about the request/response, including information related to caching.

Typical HTTP header example (the values shown here are illustrative):
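Cache-Control: max-age=3600, public
Expires: Thu, 18 Jan 2018 12:00:00 GMT
Etag: "5a6098f3-264a"
Last-Modified: Thu, 11 Jan 2018 09:30:00 GMT
Vary: Accept-Encoding, Cookie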

Let’s see what these key/value pairs in the header mean:

  • Expires: The Expires header sets a time in the future when the content will expire. This header is probably best used only as a fallback.
     
  • Etag: A unique hash identifier of the cacheable object that the browser caches. Whenever a cached response is requested again, the browser checks with the server whether the Etag of the response has changed. If the response is not modified, the server returns 304 Not Modified; otherwise it returns 200 along with the new response and a new Etag.
     
  • Last-modified: This parameter specifies the date and time the resource was last modified. Last-modified is used as a fallback when Etag is not specified.

          Syntax: <day-name>, <day> <month> <year> <hour>:<minute>:<second> GMT

  •  Vary: Vary specifies the list of request headers on which the response should vary. For example, Accept-Encoding specifies the format in which the browser expects the response, such as gzip, sdch or plain text.

Similarly, User-Agent specifies the user-agent such as a mobile user or desktop user based on which the content should vary.

Another interesting Vary parameter is ‘cookie’ which specifies to vary content based on the user. For instance, the different version of same content for the anonymous user and logged-in user.

  • Cache-Control: Cache-Control specifies directives that instruct the browser and intermediate proxies about who can cache the response, the conditions for caching it, and the maximum time for which it can be cached. Some Cache-Control directives are:
     
    • No-cache: Indicates that the response must be validated with the origin server before being served from cache.
       
    • Public/private: Responses marked "public" can be cached by any cache, even if they have HTTP authentication associated with them. Responses marked "private" are intended for a single user: intermediate caches should not store them, but a private cache such as the browser may.
       
    • No-store: Shows that the response should not be cached by the browser or intermediate proxies.
       
    • Max-age: Indicates the maximum time in seconds for which the content can be cached before it must be revalidated. This replaces the Expires header for modern browsers. The value is given in seconds, with a maximum valid freshness time of one year (31536000 seconds).
       
    • s-maxage: Similar to the max-age setting. The difference is that this option is used only for intermediary caches.

Now we have a basic understanding of caching, so let’s see how this works in Drupal 8.

Cacheability metadata in Drupal 8 :

Everything that is either directly renderable or used to determine what to render provides cacheability metadata.

Cacheability metadata consists of three properties:

  • Cache tags: If our renderable arrays depend on some data, such as entity data or configuration values, we use cache tags to invalidate the cached output when that data changes.

    Syntax: "thing:identifier"

    Example:

    node:5    // cache tag for node entity 5
    user:4    // cache tag for user entity 4
    node_list // list cache tag for nodes
    
    use Drupal\Core\Cache\CacheBackendInterface;
    
    // Store an item in the cache, tagged so it can be invalidated later.
    $tags = array('node:1', 'user:7');
    \Drupal::cache()->set($cid, $data, CacheBackendInterface::CACHE_PERMANENT, $tags);
    
    // Invalidate all cache items with certain tags.
    \Drupal\Core\Cache\Cache::invalidateTags($tags);
    
  • Cache contexts: Cache contexts are used when our renderable arrays depend on some context, such as the user's role, the theme, or the URL. This is similar to Drupal 7 block constants like DRUPAL_NO_CACHE / DRUPAL_CACHE_PER_ROLE / DRUPAL_CACHE_PER_PAGE, but with many more options.

    Example :

    // Setting Cache Context & Tag for a block.
    return array(
      '#markup' => $this->t('My custom block content'),
      '#cache' => array(
        'contexts' => ['url.path'], //setting cache contexts
        'tags' => ['node:1', 'node:2'] // setting cache tags
      ),
    );       
    
  • Cache max-age: Max-age controls how long an item may be cached, expressed as a number of seconds.

    0 means cacheable for zero seconds, i.e. not cacheable.

    \Drupal\Core\Cache\Cache::PERMANENT means cacheable forever, i.e. this will only ever be invalidated due to cache tags. (In other words: ∞, or infinite seconds.)
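    For example, a render array can declare its own max-age. A minimal sketch (the one-hour value is arbitrary):

    // Setting a max-age for a block.
    return array(
      '#markup' => $this->t('My custom block content'),
      '#cache' => array(
        'max-age' => 3600, // Cacheable for one hour, then rebuilt.
      ),
    );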


Here we have explored the basics of caching in Drupal 8: what the key/value pairs in the header mean and how cacheability metadata works. Caching allows retrieval without having to request the data from the original source. In case you have any questions, feel free to share them in the comments below; we'll get back to you ASAP.

Below is a presentation on caching in Drupal 8.

Jan 12 2018
Jan 12

Drupalcamp 2018

Drupalcamp London returns in March for its 6th year. As the largest Drupal camp in Europe, it provides a scale of high-quality knowledge I rarely see elsewhere. If you’re new to Drupalcons and camps, Drupalcamp is a three-day knowledge-sharing conference in London that attracts a wide variety of over 600 Drupal stakeholders, including developers, agencies, business owners, and end users.

As a Drupal development agency contributing to the software for the past 15 years, we're always looking to support the growth of the community. One major investment we've made is sponsoring Drupalcamp London for each of the past 6 years. I'm a big supporter of the London event, and if anything sums up the weekend, I believe this quote from Paul Johnson does the job.


I genuinely think Drupalcamp London is amongst the best Drupal events in the world. The breadth of topics covered is unparalleled and the high quality of speakers is a real draw.

Paul Johnson, Drupal Director at CTI Digital.

  

CXO Day

This year I’m set to attend the CXO day on Friday 2nd March, a day that focuses on what Drupal means to its end users. It’s a rare opportunity to learn from the experiences of others and to hear how business leaders have been utilising Drupal and Open Source technology in the past year. The event is attended by a variety of end users both new and familiar with Drupal, leaders from digital agencies, and the wider Drupal business community. The networking spaces will also be a valuable addition to the day, and an area I will be frequenting.

At the CXO day our client Dave O’Carroll, Head of Digital at War Child UK, will be discussing what CTI's rebuild of the charity’s website has done for their end user experience and their ability to increase their impact across the digital sphere.

Who’s attending?

Our Drupal Director and Evangelist, Paul Johnson and I will be attending. It will be an extremely busy day so if you would like to meet please do get in contact. We'll be glad to share our knowledge of Drupal and discuss our experiences working with London.gov and RCOT.

Drupal newbies and veterans will also be attending the weekend, along with agencies and businesses invested in the world of Drupal. The organisers conducted an interesting survey of last year's attendees; as you can see below, the majority attend to learn and to share their knowledge of Drupal.

[Chart: Drupalcamp study into reasons for attending Drupalcamp 2017]

 

The CXO day always sells out quickly, so visit the website now to find out more and register to attend. See you there.

Register Now 

 

Jan 12 2018
Jan 12

The #techfiction story about how a backend developer feels as he starts working on his first frontend task.

#techfiction #noir

It’s late. The cold breeze stings my face as I wander the dark alleys of the City. Now and then dim neon lights cut through my shadows. I’ve been wandering around aimlessly for what feels like hours. Now I find myself standing in front of Avery’s again. I enter, in a force of habit.

There are a bunch of familiar faces. Christmas is around the corner, and everybody’s celebrating, but I’m not in the mood for “jingle bells” today. I grab an empty chair at the bar and order a shot of Glenlivet.

'What’s with the grim face, Tony?'

It didn’t last long. Peter approaches holding a bottle of beer in his right hand. I can’t lie to him. He’ll see right through me.

'She’s gone, Peter. I’ll never see her again.'

'Oh, come on! There’s plenty of fish in the sea. Here, I’ve got something that might cheer you up.'

He puts down his beer and reaches into his leather jacket, pulls out a crumpled piece of paper and hands it to me. Just a short note written on it, smeared bold letters read “GZA-220”.

'What’s this?'

'I was going to give this to Sanchez, but looks like you need it more.'

Sanchez. He’s been stealing assignments from me for months, laughing it up while I cry myself blind, bored to death, working day in and day out on those seemingly insignificant backend nuisances. I might relish this, just to get back at him.

'Come on! Look it up.'

It turns out to be a task ID on JIRA, the project management system we use internally. I put my machine on the desk, start up the browser and navigate straight to the task page.

'It‘s a frontend task,' I mumble out of pure surprise.

'You might need some change,' he says.

He’s right. I need something to regain focus. A new kind of challenge.

'Ok. Assign that to me. I’ll do it.'

The task is as follows. A website is composed of horizontally stacked regions, each with a background colour setting defined by the editor. When hovering over areas with a blue background setting, a radial smudge should follow the mouse cursor in the background.

I don’t waste any time and quickly employ my usual routine: look it up on Google. Surely someone’s already done something similar, and I don’t want to waste my precious time. I’ve got to get back to feeling sorry for myself.

Here it is, right off the bat, almost the exact same thing. But after a quick investigation, I find it uses CSS, with a transform style attribute changing as I move the mouse around. While this looks good, I’ll try another approach: draw directly on canvas and check how that performs. So for every blue region, I just add a canvas and resize it so that it covers the whole background area. A global variable will keep track of the smudge’s position and other movement data.

I’ll set up my mouse move listener and a draw loop and quickly find out that scrolling the document leaves my smudge hanging motionless. I need to treat scrolling the same as a vertical mouse move.

It’s getting serious now. I light up a cigarette.

I used to be a non-smoker. Then I met my buddy George back in April. We had a couple of drinks, and he offered me one of his Luckies. Now I’m sucking them down like there’s no tomorrow. Two packs a day.

I've been sitting here for two hours already. Time passes by so quickly when the mind is busy. I’ve got something going on, but I see problems already. We might have multiple disconnected regions on the same page, and the effect has to flow through them seamlessly. I have to track the global coordinates and just draw the damn thing with a local offset. If the regions are far apart, the smudge might not be visible in all of them at once. No sense in redrawing a canvas if the thing is entirely out of its bounds. We can calculate whether the smudge is inside each rectangular region and skip the redraw when they don’t overlap. I find some useful math shenanigans on StackExchange and plug it in.

My complete draw loop looks like this:
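Well, more or less like this; the variable names are stand-ins. smudge tracks the cursor in global coordinates, and every blue region has its own canvas.

var smudge = { x: -9999, y: -9999, radius: 120 };

// Keep the global position up to date. clientX/clientY are
// viewport-relative, which is exactly what the per-region math needs,
document.addEventListener('mousemove', function (e) {
  smudge.x = e.clientX;
  smudge.y = e.clientY;
});

// and scrolling moves the regions under the cursor, so redraw then too.
window.addEventListener('scroll', drawSmudge);

function drawSmudge() {
  var canvases = document.querySelectorAll('.region--blue canvas');
  for (var i = 0; i < canvases.length; i++) {
    var canvas = canvases[i];
    var rect = canvas.getBoundingClientRect();
    // Global coordinates, local offset.
    var x = smudge.x - rect.left;
    var y = smudge.y - rect.top;
    // No sense redrawing if the smudge is entirely out of bounds.
    if (x < -smudge.radius || x > rect.width + smudge.radius ||
        y < -smudge.radius || y > rect.height + smudge.radius) {
      continue;
    }
    var ctx = canvas.getContext('2d');
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    var g = ctx.createRadialGradient(x, y, 0, x, y, smudge.radius);
    g.addColorStop(0, 'rgba(255, 255, 255, 0.35)');
    g.addColorStop(1, 'rgba(255, 255, 255, 0)');
    ctx.fillStyle = g;
    ctx.fillRect(0, 0, canvas.width, canvas.height);
  }
}

// Redraw on a fixed interval, roughly 60 times a second.
setInterval(drawSmudge, 1000 / 60);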

A figure reflects in my excessively glossy Dell XPS screen.

'What’s going on here? Is it still 2015? Ever heard of requestAnimationFrame()?'

Douglas. His real name is Bob-Douglas, but I just call him Dick. Funny guy. Rumors go he can recite the complete team channel conversation from Slack by heart.

'What are you talking about? No, I’ve never heard of it… I’ve never heard of anything. I don’t even know what I’m doing here.'

Let’s see what Dick is trying to tell me. According to the documentation, by calling window.requestAnimationFrame() you’re telling the browser that you wish to animate something and that the specified callback should be invoked before the repaint. This is better for performance reasons as requestAnimationFrame() calls are paused when running in background tabs.

This approach needs a little adjustment. If I keep calling requestAnimationFrame(), the browser will try to keep up with my screen’s refresh rate and the animation is too quick. I’ll slow it down to 60Hz by checking the timestamp parameter that gets passed into my callback. Something like this, give or take:
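var last = 0;

function loop(timestamp) {
  // Skip frames until a full 60Hz tick has elapsed, even on
  // displays that refresh faster than that.
  if (timestamp - last >= 1000 / 60) {
    last = timestamp;
    drawSmudge();
  }
  requestAnimationFrame(loop);
}

requestAnimationFrame(loop);

Much better.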

My job here is done. I close the lid, take another shot of whisky and head out on the street. I’ve got to find her. I’ve got to see her again. Don’t try to stop me and don’t you come looking for me. It’s a big city, and I’ll be hiding in the shadows. The best chance you’ve got is hearing the receding echoes of my footsteps as I fade into the darkness.

Can you spot the stalker?
Jan 12 2018
Jan 12

In the previous article, we covered How to stay out of SPAM folder? and today we will learn how to secure our Drupal web server.

Setting up Firewall

So, we have Debian powering our Drupal web server, and we need to make it secure, adjusting everything to minimize risk. First off, we want to configure the firewall. Basic stuff. Our "weapon of choice" here is IPTables.

Initially, the firewall is open, all traffic passes through it unimpeded. We check the list of IPTables rules with the following command:

# iptables -L -v -n

Chain INPUT (policy ACCEPT 5851 packets, 7522K bytes)
pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 320M packets, 19G bytes)
pkts bytes target prot opt in out source destination

All clear. To remove all IPTables rules, we use the following command:

iptables -F

Default IPTables rules

Default rules are useful and convenient. In IPTables, they are set with the help of policies (-P). It is a common practice to drop all packets and have a series of permission rules for specific cases.

The following policy drops all incoming packets by default:

iptables -P INPUT DROP

Additionally, you can tighten security by outlawing forwarded packets, that is, packets routed by the firewall to their destination. To do that, introduce the following rule:

iptables -P FORWARD DROP

For loopback, we allow local traffic:

iptables -A INPUT -i lo -j ACCEPT

The following rules make use of the conntrack (-m conntrack) module, which checks the state of the connection, such as RELATED or ESTABLISHED. A packet is accepted only when its connection state matches the rule. ESTABLISHED means packets have already been sent through the connection, and RELATED indicates a new connection that is associated with an existing one.

The following rule allows operation of all previously initiated connections (ESTABLISHED) and connections related to them:

iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

And this rule allows outgoing packets for new connections as well as existing ones:

iptables -A OUTPUT -m conntrack --ctstate NEW,ESTABLISHED,RELATED -j ACCEPT

This rule below allows forwarding new, established and related connections:

iptables -A FORWARD -m conntrack --ctstate NEW,ESTABLISHED,RELATED -j ACCEPT

The following couple of rules say that all packets that cannot be identified (and given a status) should be dropped:

iptables -A INPUT -m conntrack --ctstate INVALID -j DROP

iptables -A FORWARD -m conntrack --ctstate INVALID -j DROP

Allowing what needs to be allowed

Next, we set up rules that explicitly allow the traffic needed for correct operation of our web server. We arrange those rules in a separate chain.

First, we create a custom chain:

iptables -N my_packets

The following command makes the switch to a user-defined chain:

iptables -A INPUT -p tcp -j my_packets

There are a couple of restrictions on jumping between chains:

  1. the chain must be created before it is switched to;

  2. the chain must be in the same table as the chain from which the switch is made.

Next, we open the port for SSH. Important: be sure to specify your port if it was changed!

iptables -A my_packets -p tcp -m tcp --dport 22 -j ACCEPT

If SSH is only available to a number of persons using static IPs, it makes sense to set up IP-based restrictions. To do this, we run the following command instead of the previous one:

iptables -A my_packets -s x.x.x.x -p tcp -m tcp --dport 22 -j ACCEPT

with x.x.x.x being the IP from which the connection is made.

Since we have a web server running, we need to allow incoming connections on ports 80 and 443. We do this with the following commands:

iptables -A my_packets -p tcp -m tcp --dport 80 -j ACCEPT

iptables -A my_packets -p tcp -m tcp --dport 443 -j ACCEPT

To complete all these rules, we can set up a few more for specific cases.

Remote connections to MySQL server

If remote connections are needed at all, it is better to restrict them by IP:

iptables -A my_packets -s x.x.x.x -p tcp -m tcp --dport 3306 -j ACCEPT

Server receives mail

The following set of rules helps in such a case:

iptables -A my_packets -p tcp -m tcp --dport 110 -j ACCEPT

iptables -A my_packets -p tcp -m tcp --dport 143 -j ACCEPT

iptables -A my_packets -p tcp -m tcp --dport 993 -j ACCEPT

iptables -A my_packets -p tcp -m tcp --dport 995 -j ACCEPT

That is all. These rules are sufficient for our web server to work correctly. Everything else follows the REJECT rules we set up below, i.e. other ports and protocols do not accept anything:

iptables -A INPUT -p udp -j REJECT --reject-with icmp-port-unreachable

iptables -A INPUT -p tcp -j REJECT --reject-with tcp-reset

iptables -A INPUT -j REJECT --reject-with icmp-proto-unreachable

Compared to DROP, REJECT sends a "Port unreachable" ICMP message back to the sender. The --reject-with option allows changing the type of ICMP message; its arguments are as follows:

  • icmp-net-unreachable — network unreachable;

  • icmp-host-unreachable — host unreachable;

  • icmp-port-unreachable — port unreachable;

  • icmp-proto-unreachable — protocol unreachable;

  • icmp-net-prohibited — network prohibited;

  • icmp-host-prohibited — host prohibited.

By default, the message uses the port-unreachable argument.

You can reject TCP packets with the tcp-reset argument, which sends an RST message instead. In terms of security, this is the best option, since TCP RST packets are the normal way to close TCP connections.
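One more note before moving on: rules added with the iptables command live in memory only and are lost on reboot. On Debian, one common way to persist them (assuming the iptables-persistent package and its default file path) is:

apt-get install iptables-persistent

iptables-save > /etc/iptables/rules.v4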

Setting up fail2ban

Fail2ban is a simple local service that keeps track of log files of running programs. Guided by the rules, the service blocks IPs from which attacks come.

Fail2ban successfully protects all popular *NIX setups (Apache, Nginx, ProFTPD, vsftpd, Exim, Postfix, named, etc.), but its main advantage is the SSH server brute-force protection enabled right after you launch it.

Installing fail2ban

Fail2ban is listed in the repository, so it is very easy to install it:

apt-get install fail2ban

That's it, your SSH is now protected from brute-force attacks.

Setting up fail2ban

First of all, we need to configure SSH protocol protection. That means finding the [ssh] section in the jail.conf file and ensuring the enabled parameter is set to true.

Next, we set up monitoring with fail2ban:

  • filter - the filter used. The default is /etc/fail2ban/filter.d/sshd.conf;

  • action - the actions fail2ban performs when it detects an IP an attack comes from; the available responses are listed in /etc/fail2ban/action.d, so the value of this parameter must correspond to something found there;

  • logpath - the full path to the file storing data on attempts to access the VPS;

  • findtime - the window of time (in seconds) within which suspicious activity is counted;

  • maxretry - the maximum allowed number of attempts to connect to the server;

  • bantime - the time the blacklisted IP is banned for.

Important: it is not necessary to provide values for all settings; anything you skip falls back to the main [DEFAULT] settings (found in the section of the same name). The most important thing here is to make sure the jail is enabled, i.e. the value of enabled is true.

Making SSH protocol secure

Let's look into details of response settings. Below is an example of fail2ban configuration on the SSH port:

[ssh]
enabled = true
port     = ssh
filter = sshd
action = iptables[name=sshd, port=ssh, protocol=tcp]
    sendmail-whois[name=ssh, dest=****@yandex.ru, sender=****@***.ru]
logpath = /var/log/auth.log
maxretry = 3
bantime = 600

All these lines mean the following: if there are more than 3 failed attempts to connect to the server through the main SSH port, the IP used for authorization is blocked for 10 minutes. The ban rule is added to IPTables, and the server owner receives a notification at the address specified in the dest variable. The notification contains the blocked IP, WHOIS data about it, and the ban reason.

An extra measure to protect SSH implies activating the following section:

[ssh-ddos]
enabled = true
port     = ssh
filter = sshd-ddos
logpath = /var/log/auth.log
maxretry = 2

Configuring site files access permissions

A server cannot be considered safe if file access permissions have not been configured. The following example shows one way to change the owner and permissions on the files and directories of a Drupal-powered site. Here, webmaster is the site owner's account and www-data is the user the web server runs as.

Permissions setup routine:

#cd /path_to_drupal_installation
#chown -R webmaster:www-data .
#find . -type d -exec chmod u=rwx,g=rx,o= '{}' \;
#find . -type f -exec chmod u=rw,g=r,o= '{}' \;

The www-data user must have write permissions on the files directory, so the "files" directory in sites/default (and in any other site directory if we have a multi-site setup) gets different permissions. We also set the s bit on the directory's group (g+s) so that all files the web server creates there inherit the directory's group (webmaster) rather than the primary group of the user that created them.

#cd /path_to_drupal_installation/sites

#find . -type d -name files -exec chown -R www-data:webmaster '{}' \;
#find . -type d -name files -exec chmod ug=rwx,o=,g+s '{}' \;
#for d in ./*/files
do
  find $d -type d -exec chmod ug=rwx,o=,g+s '{}' \;
  find $d -type f -exec chmod ug=rw,o= '{}' \;
done

The tmp directory also takes on different permissions, since the web server needs to write into it.

#cd /path_to_drupal_installation
#chown -R www-data:webmaster tmp
#chmod ug=rwx,o=,g+s tmp

settings.php and .htaccess: special permissions

The settings.php file contains the database user name and password in plain text. To avoid problems, we want only certain users to be able to read it, and nothing else. Typically, that means shutting "other" out entirely:

#chmod 440 ./sites/*/settings.php
#chmod 440 ./sites/*/default.settings.php

(Note: the wildcard must not be quoted, or the shell will not expand it.)

.htaccess is an Apache configuration file, which gives control over the operation of the web server and site settings. Accordingly, only a handful of users should have permission to read the file's contents:

#chmod 440 ./.htaccess
#chmod 440 ./tmp/.htaccess
#chmod 440 ./sites/*/files/.htaccess

Secure configuration of the web server

To make the system as secure as possible, we want to make some changes to the SSH server settings. It is best to run SSH on a non-standard port; otherwise it will be constantly attacked by brute-forcing bots guessing passwords. Like other Linux distributions, Debian has SSH on port 22. We can change it to, say, 2223. In addition, it is wise to configure the server so that root can connect with an SSH key only. By default in Debian, root cannot be authorized over SSH with only a password.

It's better to change the SSH port before configuring the firewall. If you forgot to, you need to do the following:

  1. Add a rule to IPTables allowing connections on the new port, before changing it:

iptables -A my_packets -p tcp -m tcp --dport 2223 -j ACCEPT
  2. Change the SSH server port in # nano /etc/ssh/sshd_config. Find the relevant lines and make them look like this:

Port 2223
PermitRootLogin prohibit-password

PubkeyAuthentication yes
ChallengeResponseAuthentication no

Do not forget to save the changes! Then restart the SSH server:

# service sshd restart

Now, we want to check the changes:

# netstat -tulnp | grep ssh

tcp 0 0 0.0.0.0:2223 0.0.0.0:* LISTEN 640/sshd
tcp6 0 0 :::2223 :::* LISTEN 640/sshd

Everything is fine. The SSH server listens on port 2223; from now on, new connections will go through this port only. Note that restarting SSH does not drop the existing connection.

Attention! Before disabling password authorization for the root user, make sure you have your public key in the /root/.ssh/authorized_keys file. Also, it is wise to have another account to connect to the server with a password.

Jan 12 2018
Jan 12

The news was supposed to come out this Tuesday, but it leaked early. Last week we learned about three variations of a new class of attacks on modern computing, before many vendors could release a patch -- and we've come to find out that the root cause may be entirely unpatchable, fixable only by buying new computers.

Today Microsoft released a patch -- which they had to quickly pull when they discovered that it crashed computers with AMD chips.

Essentially Spectre and Meltdown demonstrate a new way of attacking your smartphone, your laptop, your company's web server, your desktop, maybe even your tv and refrigerator.

[Animation: Meltdown in action]

This all sounds dreadfully scary. And it is... but don't panic! Instead, read on to learn how this might affect you, your website, and what you can do to prevent bad things from getting worse.

How will this affect you?

All of these attacks fall into a class of "Information Disclosure." A successful Spectre attack can reveal information you want to keep secret -- mainly your passwords, and security keys widely used to protect information and identity.


Have any Bitcoin lying around? Your wallet could get compromised with this type of attack. Visit any SSL sites? A secure SSL certificate on a server might have its private key stolen, and incorporated into a fake certificate -- which would make "Man in the middle" attacks from wifi hotspots a lot more effective -- a phisher could set up a fake copy of your bank's website, and there would be no way to tell it apart from the real website, because it has a copy of the real certificate. Use a password manager to keep track of all those different passwords you need for each site? Spectre can make those secrets -- not so secret. This is far worse than Heartbleed.

Over the coming months and years, this will be a headache. Security updates on your phone and all of your computers will be more important to apply promptly than ever before -- because a new 0-day attack could give the attacker the keys to your online kingdom.

The good news is, browsers have already updated with fixes that make a Spectre attack from remote Javascript much more difficult, and Meltdown is nearly patched.

The bad news is, patching for Meltdown means slowing down your computers substantially -- reports suggest by somewhere between 5% and 30%, depending on the types of computing being done. And there isn't really a way of patching Spectre -- it's a design flaw having to do with how the processor caches what it's working on, while using something called "Speculative Processing" to try to speed up its work -- fully preventing a Spectre attack means deploying new hardware processors that manage their low-level caching in a different way.

So preventing Spectre attacks falls more into the realm of viruses -- blocking specific attacks, rather than stopping the vulnerability entirely, at least as I understand the problem. For more, ZDNet has a pretty understandable explanation of the vulnerabilities.

How can they attack me?

To exploit any of these attacks, an attacker needs to get you to run malicious code. How can they do this? Well, for some Spectre attacks, through Javascript running in your browser. Firefox and Safari released updates that make the Javascript timer not so accurate -- having accurate timing to detect the difference in speed for loading particular caches is a critical part of how the currently identified attacks work. But it's scary that this level of attack could be embedded in Javascript on any website you visit...

Browsers are changing faster than ever, though, and I wonder if this will set back some proposed browser features like WebAssembly, which could be a field day for attackers wanting to deliver nasty code to you through innocuous web pages. It's relatively easy for a browser maker to make the Javascript execution environment fuzzy enough to defeat the precision needed to carry out these attacks. WebAssembly? The entire goal of that is to get programmers "closer to the metal" which is going to make it easier to get creative with exploiting side-channel vulnerabilities.

Browser extensions, mobile apps, anything you download and install now have far more opportunity to steal your secrets than ever before.

How will this affect your website?

Your website's host is almost certainly vulnerable. If you are not hosting on dedicated hardware, Spectre basically means that somebody else hosting on the same physical hardware can now possibly gain access to anything in your hosting account.

There are basically 3 "secrets" in nearly every website that's built on a CMS (like Drupal or WordPress) that might be a target:

  1. Your FTP, SSH, or other hosting account logins -- this could give them full access to your site, allow an attacker to upload malicious code, steal data, damage your site, whatever they want.
  2. The private key for your SSL certificate -- this would allow them to create a fake SSL certificate that looks legitimate to anybody visiting their copy of the site. This is particularly a problem for financial institutes, but it could happen to anyone -- this can lead to fake sites being set up under your domain name, and combined with a "man in the middle" used to phish other people, smear your reputation, or a variety of other things.
  3. Any secrets in your CMS -- your login, your passwords, any passwords of users that log into your site, ecommerce data, whatever there is to steal.

If you're on a "virtual private server" or a "shared hosting account", there will be exploits coming for years, until we've all replaced all the computer hardware that has come out in the last 20 years -- and another tenant on your hardware can potentially attack your site.

And those are just the universally available targets. You may have other things of value to an attacker, unique to you.

"Meltdown" got its name because it melts down the security boundaries between what you can do as a user, and the underlying system that has full access to everybody.

Meltdown does have patches available, and these are getting deployed -- at the cost of disabling CPU features built to make them perform quickly. Which means if you're nearing the limits of what your currently provisioned hosting provides, patching for Meltdown may push you over, and force you into more costly hosting.

What should you do now to make things better?

What you can do now really isn't much different than it was a month ago -- but the consequences of failing to use best security practices have gotten a lot higher. You could stop using technology, but who is really going to do that? And who already has all your data, who might get compromised anyway?

We think there are two main things to think about when it comes to this type of security planning:

  1. Make sure you are doing all you can to avoid an attack, and
  2. Have a plan for what to do if you fall victim to an attack.

Avoid an attack

To avoid an attack, a little paranoia can go a long way. Get something in email you don't recognize, with an attachment? Don't open the attachment. On a public wifi network? Don't log into anything sensitive, like a banking website -- wait until you get home or onto a network you trust.

Apply all security updates promptly, and verify that you're getting them from the real source. Pay attention to anything that looks suspicious. Expect more phishing attacks for the foreseeable future (as if we didn't have enough already...) Regularly check that any sites or servers you have do not show any signs of unexpected activity, changed files, etc.

It might be hard to detect an intrusion, because if they've hacked you, they will likely be connecting as you -- so set up any 2-factor authentication you can, consider getting security dongles/hardware tokens, and just think about security before clicking that link.

Plan for disaster

Nobody can be perfect. There is no such thing as "secure" -- there's always a way in. The practice of security is really one of risk management -- identifying what the consequences of a given security breach is, what the costs to avoid that breach are, and finding a balance between the cost of securing something and the cost of suffering a breach.

That equation varies for everybody -- but some key values in that equation just shifted -- now the consequences of a minor breach can lead to a much bigger problem than before. Or, perhaps more accurately, now we know about some ways to make these breaches worse, and thus the likelihood of them happening has become higher.

When it comes to a website, the three main risks to consider are:

  • Loss of service (Denial of service -- your website goes away)
  • Loss of data (you lose access to something you need -- e.g. ransomware, hardware failure without sufficient backups, etc)
  • Information Disclosure (revealing secrets that can cost you something)

What has changed now is that these new information disclosure attacks can reveal your keys and passwords, and then an attacker can conduct the other kinds of attacks impersonating you. It used to be that information disclosure was a bigger concern for data you stored in a database, because the operating system takes such special care of your passwords and keys -- but now we've learned the operating system protections can be bypassed with an attack on the CPU. And that this has been the case for the past 20 years.

Do you have a disaster recovery plan that outlines what steps to take if you discover you've been hacked? If not, we can probably help, at least for your website and server. We've written several disaster recovery plans for ourselves and our clients -- reach out if we can help you create one. We can also do the day-to-day work on your Drupal or WordPress site and server to keep them fully up-to-date, with a full testing pipeline to detect a lot of things that can break in an update.

Let us know if we can help!

Jan 11 2018
Jan 11

Now that 2017 is over and we’re back from our well deserved holidays, it’s time to look at what the Drupal Commerce community accomplished over the past year.

There is no doubt that Drupal Commerce is one of the largest and most active projects in the Drupal community. The #commerce channel is now the most active channel on the Drupal Slack, with 550 members. Over a hundred modules have received contributions from several hundred contributors working for dozens of different agencies. Just a few months after the initial stable release, there are over 2000 reported installations with new case studies appearing every week! Let’s take a closer look.

Commerce and Commerce Shipping

We began 2017 with Drupal Commerce 2.0-beta4. We released another 3 betas followed by 3 release candidates in our search for stability, rewriting the promotion and tax subsystems based on feedback from sites in production, all while maintaining a clear upgrade path. On September 20th we packaged the full 2.0 release, which we celebrated with over a dozen release parties around the world, and followed it up with 2.1 and 2.2 in November and December.

Our heavy focus on developer experience meant that we were one of the first Drupal communities to completely migrate from SimpleTest to PHPUnit, converting hundreds of tests in the process. We also made it possible to install Commerce without Composer through Ludwig, a helper module we developed that any Drupal project can use. Finally, we organized a large documentation initiative, with both User and Developer guides in progress at https://docs.drupalcommerce.org.

We collaborated with Adapt A/S to develop Commerce Shipping 8.x-2.x from scratch, released 4 betas through the year with an rc1 now in progress. Commerce Shipping powers Interflora.dk, allowing customers to send flowers to multiple addresses at once, thanks to the module's native support for multiple shipments.
The module ships with support for flat rates, while the community has already provided integrations with major APIs such as UPS, USPS, FedEx, Australia Post, NZ Post, and others.

The table below identifies all contributors to Drupal Commerce and Commerce Shipping for Drupal 8 sorted by issue count. Of course, not every issue is created equal in terms of impact and time spent, but we are pleased to highlight and honor some of our most visible contributors.

Total: 430 commits by 135 contributors.

  • 261 credits: bojanz
  • 70: mglaman
  • 26: jsacksick
  • 25: vasike
  • 13: bmcclure
  • 11: niko-
  • 10: sorabh.v6, czigor, sumanthkumarc
  • 9: joachim
  • 8: agoradesign, ransomweaver
  • 7: steveoliver, googletorp, mitrpaka
  • 6: alexpott, bradjones1, GoZ, mlutz
  • 5: Berdir, drugan, finne
  • 4: Dom.
  • 3: AaronChristian, edwardaa, gauravjeet, mbreden, rszrama, skyredwang, steve, swickham
  • 2: a.dmitriiev, anthonygq, casey lay, chrisrockwell, DrupalSiteBuilder, FatherShawn, floretan, harings_rob, icurk, jackbravo, maciej.zgadzaj, nikathone, padma28, ptmkenny, rakesh.gectcr, Rob C, rynnnner, tilenav
  • 1 credit: Baik Ho, Bojan Živkov, Frank HH-germany, Hubbs, JeroenT, Kingdutch, LotharDesmet, Lukas von Blarer, MegaChriz, Neograph734, OWast, Oostie, Regnoy, Sarahphp1, SchwebDesign, SpartyDan, Sweetchuck, Thomas Cys, TommyChris, Tyler_Marshall, Wim Leers, aby v a, acromel, andypost, arosboro, barry6575, ben.mcclure, borisson_, bryansharpe, cbildstein, chishah92, citlacom, dev.tim, droath, duozersk, durum, emanuelrighetto, erik.erskine, facine, flocondetoile, franksj, garnett2125, guptahemant, heddn, iampuma, jafacakes2011, jantoine, jespermb, jkuma, joekers, josephdpurcell, joshmiller, jtolj, kevinhowbrook, kiwimind, maijs, manojapare, markoshust, marncz, marthinal, mgoncalves, ndf, nicola85, onedotover, opdavies, patrizioemili, rajeshwari10, rgpublic, ryross, saschagros, sbelazouz, shabana.navas, skitten, smaz, smccabe, subhojit777, tbradbury, thomas.cys, tranthanhthuy, troseman, utement, xSDx, ytsurk, zaporylie, zenimagine, zvse

Payment gateways

We released the first stable release of Commerce Braintree, with full support for PayPal Express Checkout and PayPal Credit. Braintree is PayPal’s best API, and we are proud to have it as our reference payment gateway. This release wouldn’t be possible without community members like Nia Kathoni (nikathone) and Sophie Shanahan-Kluth (Sophie.SK) who worked on PayPal integration and other issues.

We also released 3 beta releases of Commerce Authorize.Net (RC1 soon!), which now uses the latest Accept.js API (bringing down PCI requirements from D to A-EP) and supports eChecks. Big thanks again to Nia Kathoni (nikathone), Brad Jones (bradjones1) and Chris Rockwell (chrisrockwell) for contributing to the Accept.js integration, testing it in production, and sending endless feedback.

We released two stable releases of the Commerce Square module, a rising star in the D8 ecosystem, backed by Square’s strong APIs.

We led the port of Commerce Stripe, which received two beta releases thanks to plenty of testing and patches from Patrick Kenny (ptmkenny) and Lucas Hedding (heddn).

The list goes on and on. There are currently 55 supported payment gateways listed in our documentation, up from only a few at the beginning of 2017. Commerce Guys has directly reviewed dozens of them, contributing feedback and notes on Ludwig integration, and we’re looking forward to helping them all receive stable releases in 2018.

Essential contributed modules

In addition to the Shipping and Payment module groups described above, we also identify and track progress on a variety of essential contributed modules. These provide significant features that help Drupal Commerce attain feature parity with other major eCommerce platforms and services. While they may not be widely used, they are often critical in the adoption and development process, so we try to ensure each module maintainer gets support from the core team as needed to succeed.

Commerce License and Commerce Recurring - Selling digital products and subscriptions is a major use case for Drupal Commerce and one where we have clear competitive advantages over other platforms. The D8 efforts have been led by bojanz, dawehner and joachim (sponsored by Commerce Guys and Torchbox), resulting in early releases that are still a little rough around the edges but feature rich. We’ll describe their current status and active development initiatives in a future blog post.

Commerce Migrate - Contributors spent months of collective effort on our migration paths, which currently support migrating from Ubercart 2.x (D6) and Commerce 1.x (D7) with planned migrations from Magento and WooCommerce. Big thanks to Acro Media, who has sponsored Lucas Hedding (heddn) and V Spagnolo (quietone), the Drupal 8 Migrate maintainers, to lead the work on this module.

Commerce Stock - Guy Schneerson has been leading the march towards the first Stock alpha, which includes support for multiple stores, multiple warehouses, and transactional stock movements. He needs the community’s help, so please jump in!

Commerce Product Bundle - Olaf Karsten (olafkarsten) and Steve Oliver (steveoliver) have made great progress on a solution for selling product bundles.

Commerce POS - Want to run a point of sale system on Drupal Commerce? Thanks to Acro Media, you can. The D8 version is making great progress, with an improved and reduced codebase thanks to Commerce 2.x API improvements.

Looking ahead

We managed to maintain multiple parallel initiatives throughout 2017 in addition to delivering Commerce Guys’ own client work, providing free support to the community in multiple channels, and spreading the word at conferences.

Moving forward into 2018, we now have a published Commerce 2.x roadmap with regular monthly releases (the first Wednesday of every month, barring holiday-related delays). Our goals include stabilizing as many contributed modules as we can and continuing to improve our documentation. We’d love to have your help, and we’re happy to train you. Join us on Drupal Slack, and let’s do this together!

Jan 11 2018
Jan 11

Setting up taxes in Drupal Commerce 2 is a snap. The component comes bundled with some predefined tax rate plugins, such as Canadian sales tax and European Union VAT. This means that enabling these tax types is as easy as checking a box. More complicated tax regions, like you would find in the United States, have integrations available with services such as Avalara AvaTax, TaxCloud and more. Custom tax types can also be created out-of-the-box.
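To give a sense of what a custom tax type amounts to under the hood, here is a rough, illustrative sketch of what its config export can look like (key names and values are approximate and may differ between Commerce versions; the 5% Canadian rate is just an example):

id: custom_gst
label: 'Custom GST'
plugin: custom
configuration:
  display_inclusive: false
  rates:
    -
      id: standard
      label: Standard
      percentage: '0.05'
  territories:
    -
      country_code: CA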

In this Acro Media Tech Talk video, we use our Urban Hipster Commerce 2 demo site to quickly show you how to configure the predefined tax plugins as well as add a custom tax type.

It's important to note that this video was recorded before the official 2.0 release of Drupal Commerce. The current state of the Taxes sub-module is even more robust than what you see here, and additional plugins have been added out-of-the-box. Documentation was also still lacking at the time of this post; however, we've added a link anyway so that whoever finds this in the future will benefit.

Urban Hipster Commerce 2 Demo site

This video was created using the Urban Hipster Commerce 2 demo site. We've built this site to show the adaptability of the Drupal 8, Commerce 2 platform. Most of what you see is out-of-the-box functionality combined with expert configuration and theming.

Visit Our Drupal Commerce 2 Demo Site


Contact us and learn more about our custom ecommerce solutions

Jan 11 2018
Jan 11

With exponential growth in marketing tools and website builders, why are marketers still adopting Drupal and maintaining their existing Drupal systems? And how has Drupal evolved to become a crucial piece of leading brands’ martech ecosystems?

For marketing decision makers, there are many reasons to choose and stick with Drupal, including:  

  • Designed to integrate with other marketing tools

  • Increased administrative efficiencies

  • Flexible front-end design options

  • Reduced costs

Plays Well With Others

Your customer experience no longer depends on your CMS alone. Your CMS must integrate with new technologies and channels, as well as your CRM and marketing automation tools, to perform as a cohesive digital experience platform that reaches customers where they are.

Drupal is the most flexible CMS available when it comes to third-party integrations. Along with the power of APIs, Drupal can help you outfit your digital presence with the latest emerging tech more quickly than your competitors, allowing you to deliver an unparalleled customer experience.

Check out how Workday integrated disparate systems and tools, such as Salesforce and Mulesoft, to create a seamless experience that serves both their customer community members and internal support teams.

Increased Administrative Efficiencies

In large organizations, interdepartmental collaboration obstacles often translate into inefficient content publishing practices. This is compounded further when marketers and content editors need a developer in the loop to help them make changes. When these hurdles aren’t properly navigated, prospects and customers suffer by not being able to easily access the most relevant and up-to-date product or service information.

Over the years, Drupal has evolved to be flexible and accommodating for non-technical content admins, providing a highly customizable and user-friendly administration dashboard and flexible user privileges. Drupal empowers marketing teams to design content independently of developers with modules like Paragraphs, which lets content admins rearrange page layouts without code adjustments while enforcing consistency across company sites.

Flexible Front-End Design Options

Drupal 8 provides increased design flexibility by letting the front and back end architectures work as separate independent systems. Therefore the visual design of a website can be completely rebuilt without having to invest in any back-end architecture changes.

While this may seem a bit technical and in the weeds, this has significant benefits for marketing resources and budget! With this design flexibility, marketers can implement new designs faster and more frequently, empowering your team to test and iterate on UX to optimize the customer experience.

Reduced Costs

The number of marketing tools required to run a comprehensive omnichannel marketing strategy is only growing. We add tools to our martech stack to help us grow our reach, understand our customers better, and personalize customer engagement. Each one of these tools has its own associated package cost or service agreement.

As an open source platform, Drupal does not incur any licensing costs. In contrast, a large proprietary implementation can easily cost hundreds of thousands of dollars just for the right to use the software; Drupal’s community-developed software is free, saving companies millions.

Drupal is also fully customizable from the get-go--not only when it comes to features and what site visitors see, but also with regard to editor tools, workflows, user roles and permissions, and more. This means the money that would go towards customization projects is freed up to benefit customers.

Digital marketing managers considering Drupal, or those contemplating a migration to Drupal 8, should consider these benefits and how Drupal is helping digital marketers evolve to provide a more agile and user-friendly digital experience for prospects and customers.

Strongly considering a move to Drupal or a migration to Drupal 8? Reach out to us with any questions. And in the meantime check out Drupal 8 Content Migration: A Guide for Marketers.

Jan 11 2018
Jan 11

The Drupal Community Working Group is pleased to announce that nominations for the 2018 Aaron Winborn Award are now open. This annual award recognizes an individual who demonstrates personal integrity, kindness, and above-and-beyond commitment to the Drupal community. It will include a scholarship and stipend to attend DrupalCon and recognition in a plenary session at the event.

Nominations are open to not only well-known Drupal contributors, but also people who have made a big impact in their local or regional community. If you know of someone who has made a big difference to any number of people in our community, we want to hear about it.

This award was created in honor of long-time Drupal contributor Aaron Winborn, whose battle with Amyotrophic lateral sclerosis (ALS) (also referred to as Lou Gehrig's Disease) came to an end on March 24, 2015. Based on a suggestion by Hans Riemenschneider, the Community Working Group, with the support of the Drupal Association, launched the Aaron Winborn Award.

Nominations are open until March 1, 2018. A committee consisting of the Community Working Group members and past award winners will select a winner from the submissions. Members of this committee and previous winners are exempt from winning the award.

Previous winners of the award are:

  • 2015: Cathy Theys
  • 2016: Gábor Hojtsy
  • 2017: Nikki Stevens

If you know someone amazing who should benefit from this award, you can make your nomination.
 

Jan 11 2018
Jan 11

I had a great time talking general and Drupal SEO at last night's CharDUG meetup! Just wanted to drop my slide deck here for reference. There are a lot of fantastic links in the deck.

Thanks to Mediacurrent for having a great culture for internal training. They invested in a large group of staff to go through the SEO Olympian program. I also want to thank the Mediacurrent Digital Strategy team for putting this training together. This training inspired me to dig more into SEO and put together this presentation and live demo of Drupal SEO modules.

I hope to present this talk again in the future.
