May 07 2016

Drupal 8 has greatly improved the editor experience out of the box. It ships with CKEditor for WYSIWYG editing. However, D8 ships with a custom build of CKEditor, and it may not have the plugins that you or your client would like to have. I will show you how to add new plugins to the CKEditor that comes with Drupal 8.

Adding plugins with a button

First, create a bare-bones custom module called editor_experience. Files will be added here that tell Drupal there is a new CKEditor plugin. Then find a plugin to actually install... for the first example I will use the Bootstrap Buttons CKEditor plugin. Place the downloaded plugin inside the libraries directory at the root of the Drupal installation, or use a make file to place it there. Also make sure you have the Libraries module installed: drupal module:download libraries.

Create a file called BtButton.php inside the editor_experience module under src/Plugin/CKEditorPlugin. Add the namespace and the two use statements shown below.





namespace Drupal\editor_experience\Plugin\CKEditorPlugin;

use Drupal\ckeditor\CKEditorPluginBase;
use Drupal\editor\Entity\Editor;


/**
 * @CKEditorPlugin(
 *   id = "btbutton",
 *   label = @Translation("Bootstrap Buttons")
 * )
 */
class BtButton extends CKEditorPluginBase {

  ... 

}

The annotation @CKEditorPlugin tells Drupal there is a plugin for CKEditor to load. For the id, use the name of the plugin as defined in the plugin.js file that came with the btbutton download. Now we add several methods to our BtButton class.

The first method returns FALSE since the plugin is not part of the internal CKEditor build.





public function isInternal() {
  return FALSE;
}

The next method returns the path to the plugin's JavaScript file.





public function getFile() {
  return libraries_get_path('btbutton') . '/plugin.js';
}

Let Drupal know where your button is. Be sure that the key is set to the name of the plugin, in this case btbutton.





  public function getButtons() {
    return [
      'btbutton' => [
        'label' => t('Bootstrap Buttons'),
        'image' => libraries_get_path('btbutton') . '/icons/btbutton.png'
      ]
    ];
  }

Also implement getConfig() and return an empty array since this plugin has no configurations.
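A minimal implementation might look like this (the Editor type hint comes from the use statement added earlier):

  public function getConfig(Editor $editor) {
    // This plugin exposes no configurable settings.
    return [];
  }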

Then go to admin/config/content/formats/manage/basic_html or whatever format you have that uses the CKEditor and pull the Bootstrap button icon down into the toolbar.

Now the button is available for use on the CKEditor!

Adding plugins without a button (CKEditor font)

Some plugins do not come with a button png that allows users to drag the tool into the configuration, so what then?

In order to get a plugin into Drupal that does not have a button, the implementation of getButtons() is a little different. For example, to add the Font/Font Size dropdowns, use image_alternative like below:





 public function getButtons() {
   return [
     'Font' => [
       'label' => t('Font'),
       'image_alternative' => [
         '#type' => 'inline_template',
         '#template' => '{{ font }}',
         '#context' => [
           'font' => t('Font'),
         ],
       ],
     ],
     'FontSize' => [
       'label' => t('Font Size'),
       'image_alternative' => [
         '#type' => 'inline_template',
         '#template' => '{{ font }}',
         '#context' => [
           'font' => t('Font Size'),
         ],
       ],
     ],
   ];
}

Then pull in the dropdown the same way the Bootstrap button plugin was added! Have any questions? Comment below or tweet us @activelamp.

Tom Friedhof

Senior Software Engineer

Tom has been designing and developing for the web since 2002 and got involved with Drupal in 2006. Previously he worked as a systems administrator for a large mortgage bank, managing servers and workstations, which is where he discovered his passion for automation and scripting. On his free time he enjoys camping with his wife and three kids.

Mar 01 2016

The biggest thing that got me excited with Drupal 8 is the first-class use of services & dependency-injection throughout the entire system. From aspects like routing, templating, managing configuration, querying and persisting data, you name it -- everything is done with services. This is a great thing, because it grants developers a level of flexibility in extending Drupal that is far greater than what Drupal 7 was able to offer.

I'll walk you through a few strategies of extending existing functionality, leveraging the power of Symfony's DependencyInjection component.

Example 1: Inheritance

Since everything in Drupal 8 can be traced back to a method call on an object within the service container, using inheritance to modify core behavior is a valid strategy.

For example, say we want to add the ability to specify the hostname when generating absolute URLs using Drupal\Core\Url::fromRoute. A quick look at this method will tell you that all the work is actually done by the @url_generator service (a lot of Drupal 8's API is set up like this -- a static method that simply delegates work to a service from the container somehow).
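As a rough illustration of that pattern (a simplified sketch, not the actual body of Url::fromRoute, which builds a Url value object first), a static helper essentially just pulls the service out of the container and calls it:

public static function fromRoute($route_name, array $route_parameters = [], array $options = []) {
  // All of the real work happens inside the url_generator service.
  return \Drupal::service('url_generator')
    ->generateFromRoute($route_name, $route_parameters, $options);
}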

drupal/core/core.services.yml can tell us a lot about what makes up the @url_generator service:

services:
  ...
  url_generator:
    class: 'Drupal\Core\Render\MetadataBubblingUrlGenerator'
    arguments: ['@url_generator.non_bubbling', '@renderer']
    calls:
      - [setContext, ['@?router.request_context']]

  url_generator.non_bubbling:
    class: 'Drupal\Core\Routing\UrlGenerator'
    arguments: ['@router.route_provider', '@path_processor_manager', '@route_processor_manager', '@request_stack', '%filter_protocols%']
    public: false
    calls:
      - [setContext, ['@?router.request_context']]
  ...

A peek at Drupal\Core\Render\MetadataBubblingUrlGenerator will actually show that it just delegates the core work of constructing a URL to the @url_generator.non_bubbling service.

To add the desired ability to the URL generator, we will have to write a new class that handles the extra logic:



namespace Drupal\foo\Routing;

use Drupal\Core\Routing\UrlGenerator;

class HostOverridingUrlGenerator extends UrlGenerator
{
    public function generateFromRoute(
      $name,
      $parameters = array(),
      $options = array(),
      $collected_bubbleable_metadata = NULL
    ) {

        
        // Check whether a 'host' option was passed in.
        $hasHostOverride = array_key_exists('host', $options) && $options['host'];

        if ($hasHostOverride) {
            // Temporarily swap the host on the request context and force an absolute URL.
            $originalHost = $this->context->getHost();
            $this->context->setHost((string) $options['host']);
            $options['absolute'] = true;
        }

        $result = parent::generateFromRoute($name, $parameters, $options, $collected_bubbleable_metadata);

        if ($hasHostOverride) {
            // Restore the original host so subsequent URLs are unaffected.
            $this->context->setHost($originalHost);
        }
        }

        return $result;
    }
}

We now have a new type that is capable of overriding the hosts in the absolute URLs that are generated. The next step is to tell Drupal to use this in favor of the original. We can do that by manipulating the existing definition through a service provider.

Service Providers

Drupal will look for a service provider in your module directory and will hand it the container-builder for it to be manipulated. Telling Drupal to use our new class is not done by editing drupal/core/core.services.yml but by modifying the definition through the service provider:






namespace Drupal\foo;

use Drupal\Core\DependencyInjection\ServiceProviderInterface;
use Drupal\Core\DependencyInjection\ContainerBuilder;

class FooServiceProvider implements ServiceProviderInterface
{
    public function register(ContainerBuilder $container)
    {
        $urlGenerator = $container->getDefinition('url_generator.non_bubbling');
        $urlGenerator->setClass(__NAMESPACE__ . '\Routing\HostOverridingUrlGenerator');
    }
}

Done! Once the foo module is installed, your service provider will now have the chance to modify the service container as it sees fit.

Example 2: Decorators

In a nutshell, a decorator modifies the functionality of an existing object not by extending its type but by wrapping the object and putting logic around the existing functionality.

This distinction between extending the object's type versus wrapping it seems trivial, but in practice it can be very powerful. It means you can change an object's behavior at run-time, with the added benefit of not having to care what the object's type is. An update to Drupal core could change the type (a.k.a. the class) of a service at any point; as long as the substitute object still respects the contract imposed by the interface, the decorator will still work.

Say another module declared a service named @twitter_feed and we want to cache the result of some expensive method call:



namespace Drupal\foo\Twitter;

use Drupal\some_twitter_module\Api\TwitterFeedInterface;
use Drupal\Core\Cache\CacheBackendInterface;

class CachingTwitterFeed implements TwitterFeedInterface
{
      protected $feed;
      protected $cache;

      public function __construct(TwitterFeedInterface $feed, CacheBackendInterface $cache)
      {
          $this->feed = $feed;
          $this->cache = $cache;
      }

      public function getLatestTweet($handle)
      {

          
          // Serve the latest tweet from cache if we already have it.
          if ($this->cache->has($handle)) {
            return $this->cache->get($handle);
          }

          // Otherwise delegate to the wrapped feed and cache the result.
          $tweet = $this->feed->getLatestTweet($handle);
          $this->cache->set($handle, $tweet, 60 * 60 * 5); // cache for 5 hours

          return $tweet;
      }

      public function setAuthenticationToken($token)
      {
          
          // Simply pass through to the decorated feed.
          return $this->feed->setAuthenticationToken($token);
      }
}

To tell Drupal to decorate a service, you can do so in YAML notation:



services:
  cached.twitter_feed:
    class: 'Drupal\foo\Twitter\CachingTwitterFeed'
    decorates: 'twitter_feed'
    arguments: ['@twitter_feed.inner', '@cache.twitter_feed']

With this in place, all references and services requiring @twitter_feed will get our instance that does caching instead.

The original @twitter_feed service will be renamed to @twitter_feed.inner by convention.

Decorators vs sub-classes

Decorators are perfect for when you need to add logic around existing functionality. One beauty of decorators is that they don't need to know the actual type of the object they wrap. They only need to know what methods it responds to, i.e. they only care about the object's interface and not much else.

Another beautiful thing is that you can effectively modify the object's behavior at run-time:




$feed = new TwitterFeed();
$feed->setAuthenticationToken($token);

if ($isProduction) {
  // Only wrap the feed with caching in production.
  $feed = new CachingTwitterFeed($feed, $cache);
}

$feed->getLatestTweet(...);

or:



$feed = new TwitterFeed();
$cachedFeed = new CachingTwitterFeed($feed, $cache);

$feed->setAuthenticationToken($token);

$feed->getLatestTweet(...);       // uncached call
$cachedFeed->getLatestTweet(...); // cached call against the same underlying feed

$feed->setAuthenticationToken($newToken); // re-authenticates both at once

Compare that to having the caching version as a sub-class: you would need to instantiate two objects and (re)authenticate both, as sketched below.
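Here is a quick sketch of that downside (CachingTwitterFeedSubclass is a hypothetical subclass of TwitterFeed, not something defined above):

$feed = new TwitterFeed();
$cachedFeed = new CachingTwitterFeedSubclass($cache); // hypothetical subclass

// Each instance holds its own state, so both must be authenticated...
$feed->setAuthenticationToken($token);
$cachedFeed->setAuthenticationToken($token);

// ...and re-authenticated whenever the token changes.
$feed->setAuthenticationToken($newToken);
$cachedFeed->setAuthenticationToken($newToken);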

And lastly, one cool thing about decorators is you can layer them with greater flexibility:



$feed = new TwitterFeed();

$feed = new CachingTwitterFeed($feed, $cache); // add caching
$feed = new LoggingTwitterFeed($feed);         // then add logging on top of the caching layer
$feed = new CachingTwitterFeed(new LoggingTwitterFeed($feed), $cache); // or compose several wrappers at once

These are contrived examples but I hope you get the gist.

However, there are cases where using decorators just wouldn't cut it (for example, if you need to access a protected property or method, which you can't do with decorators). I'd say that if you can accomplish the necessary modifications using only an object's public API, think about achieving it using decorator(s) instead and see if it's advantageous.

The HostOverridingUrlGenerator can actually be written as a decorator, as we can achieve the required operations using the object's public API only -- instead of using $this->context, we can use $this->inner->getContext(), etc.
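A rough sketch of what that decorator could look like (the class body is illustrative, not taken from core; only generateFromRoute() is shown, and every other interface method would just forward to the inner object):

namespace Drupal\foo\Routing;

use Drupal\Core\Routing\UrlGeneratorInterface;

class HostOverridingUrlGenerator implements UrlGeneratorInterface
{
    protected $inner;

    public function __construct(UrlGeneratorInterface $inner)
    {
        $this->inner = $inner;
    }

    public function generateFromRoute($name, $parameters = array(), $options = array(), $collect_bubbleable_metadata = FALSE)
    {
        $context = $this->inner->getContext();
        // ... temporarily swap the host on $context as before, delegate to
        // the inner generator, then restore the original host.
        return $this->inner->generateFromRoute($name, $parameters, $options, $collect_bubbleable_metadata);
    }

    // The remaining UrlGeneratorInterface methods simply forward to $this->inner.
}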

In fact, the @url_generator service, an instance of Drupal\Core\Render\MetadataBubblingUrlGenerator, is a decorator in itself. The host override behavior can be modelled as:

new MetadataBubblingUrlGenerator(new HostOverridingUrlGenerator(new UrlGenerator(...)), ...)

One downside of using decorators is that you will end up with a bunch of boilerplate logic that simply passes parameters to the inner object without doing much else. It will also break if there are any changes to the interface, although this shouldn't happen until the next major version bump.

Composites

You might want to create a new service whose functionality (or part thereof) involves the application of multiple objects that it manages.

For example:




namespace Drupal\foo\Processor;

class CompositeProcessor implements ProcessorInterface
{
    
    /** @var ProcessorInterface[] */
    protected $processors = array();

    public function process($value)
    {
        // Pass the value through each registered processor in turn.
        foreach ($this->processors as $processor) {
            $value = $processor->process($value);
        }

        return $value;
    }

    public function addProcessor(ProcessorInterface $processor)
    {
        $this->processors[] = $processor;
    }
}

Composite objects like this are quite common, and there are a bunch of them in Drupal 8 as well. Traditionally, an object that wants to be added to the collection must be declared as a tagged service. The tagged services are then gathered together during a compiler pass and added to the composite object's definition.

In Drupal 8, you don't need to code the compiler pass logic anymore. You can just tag your composite service as a service_collector, like so:




services:
  the_processor:
    class: 'Drupal\foo\Processor\CompositeProcessor'
    tags:
      - { name: 'service_collector', tag: 'awesome_processor', call: 'addProcessor' }

  bar_processor:
    class: 'Drupal\foo\Processor\BarProcessor'
    arguments: [ '@bar' ]
    tags:
      - { name: 'awesome_processor' }

  foo_processor:
    class: 'Drupal\foo\Processor\FooProcessor'
    arguments: [ '@foo' ]
    tags:
      - { name: 'awesome_processor' }

With this configuration, the service container will make sure that @bar_processor and @foo_processor are injected into the @the_processor service whenever you ask for it. This also allows other modules to hook into your service by tagging their service with awesome_processor, which is great.
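For example, a hypothetical other module could register its own processor in its *.services.yml (the module and class names here are made up for illustration):

services:
  another_module.baz_processor:
    class: 'Drupal\another_module\Processor\BazProcessor'
    tags:
      - { name: 'awesome_processor' }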

Conclusion

These are just a few OOP techniques that the addition of a dependency injection component has opened up to Drupal 8 development. These are things that PHP developers using Symfony2, Laravel, ZF2 (using its own DI component, Zend\Di), and many others have enjoyed in recent years, and they are now ripe for the taking by the Drupal community.

For more info on Symfony's DependencyInjection component, head to http://symfony.com/doc/2.7/components/dependency_injection/index.html. I urge you to read up on manipulating the container-builder in code as there are a bunch of things you can do in service providers that you can't achieve by using the YAML notation.

If you have any questions, comments, criticisms, or some insights to share, feel free to leave a comment! Happy coding!


Jan 20 2016

This post is part 4 in the series ["Hashing out a docker workflow"]({% post_url 2015-06-04-hashing-out-docker-workflow %}). For background, checkout my previous posts.

My previous posts talked about getting your local environment set up using the Drupal Docker image with Vagrant. It's now time to bake a Docker image with our custom application code within the container, so that we can deploy containers implementing the immutable server pattern. One of the main reasons we started venturing down the Docker path was to achieve deployable, fully baked containers that are ready to run in whatever environment you put them in, similar to what we've done in the past with Packer, as I've mentioned in a previous post.

Review

The instructions in this post are assuming you followed my [previous post]({% post_url 2015-09-22-local-docker-development-with-vagrant %}) to get a Drupal environment set up with the custom "myprofile" profile. In that post we brought up a Drupal environment by just referencing the already built Drupal image on DockerHub. We are going to use that same Docker image, and add our custom application to it.

All the code that I'm going to show below can be found in this repo on Github.

Putting the custom code into the container

We need to create our own image. Create a Dockerfile in our project that extends the Drupal image that we are pulling down.

Create a file called Dockerfile in the root of your project that looks like the following:

FROM drupal:7.41

ADD drupal/profiles/myprofile /var/www/html/profiles/myprofile

We are basically using everything from the Drupal image, and adding our installation profile to the profiles directory of the document root.

This is a very simplistic approach; typically there are more steps than just copying files over. In more complex scenarios, you will likely run some sort of build within the Dockerfile as well, such as Gulp, Composer, or Drush Make, as sketched below.
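For instance, a slightly more involved Dockerfile might also pull in PHP dependencies with Composer during the build (a hypothetical sketch; the profile's composer.json and the availability of curl in the base image are assumptions, not part of this project):

FROM drupal:7.41

# Copy the installation profile into the document root, as before.
ADD drupal/profiles/myprofile /var/www/html/profiles/myprofile

# Hypothetical extra build step: install the profile's Composer dependencies.
RUN curl -sS https://getcomposer.org/installer | php \
  && mv composer.phar /usr/local/bin/composer \
  && cd /var/www/html/profiles/myprofile \
  && composer install --no-dev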

Setting up Jenkins

We now need to set up a Jenkins server that will check out our code and run docker build and docker push. Let's set up a local Jenkins container on our Docker host to do this.

Open up the main Vagrantfile in the project root and add another container to the file like the following:






Vagrant.configure(2) do |config|

  config.vm.define "jenkins" do |v|
    v.vm.provider "docker" do |d|
      d.vagrant_vagrantfile = "./host/Vagrantfile"
      d.build_dir = "./Dockerfiles/jenkins"
      d.create_args = ['--privileged']
      d.remains_running = true
      d.ports = ["8080:8080"]
      d.name = "jenkins-container"
    end
  end

  config.vm.define "drupal" do |v|
    config.vm.provider "docker" do |docker|
      docker.vagrant_vagrantfile = "host/Vagrantfile"
      docker.image = "drupal"
      docker.create_args = ['--volume="/srv/myprofile:/var/www/html/profiles/myprofile"']
      docker.ports = ['80:80']
      docker.name = 'drupal-container'
    end
  end
end

Two things to notice in the jenkins container definition: 1) the Dockerfile for this container is in the Dockerfiles/jenkins directory, and 2) we are passing the --privileged argument when the container is run so that our container has all the capabilities of the Docker host. We need special access to be able to run Docker within Docker.

Let's create the Dockerfile:

$ mkdir -p Dockerfiles/jenkins
$ cd !$
$ touch Dockerfile

Now open up that Dockerfile and install Docker onto this Jenkins container:

FROM jenkins:1.625.2

USER root


# Add the GPG key for the Docker APT repository.
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

# Add the Docker APT repository.
RUN echo "deb http://apt.dockerproject.org/repo debian-jessie main" > /etc/apt/sources.list.d/docker.list

VOLUME /var/lib/docker

# Install the Docker engine inside the Jenkins image.
RUN apt-get update && \
  apt-get -y install \
    docker-engine

ADD ./dockerjenkins.sh /usr/local/bin/dockerjenkins.sh
RUN chmod +x /usr/local/bin/dockerjenkins.sh

ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/dockerjenkins.sh" ]

We are using a little script found in The Docker Book as our entry point to start the Docker daemon as well as Jenkins. It also does some stuff on the filesystem to ensure cgroups are mounted correctly. If you want to read more about running Docker in Docker, go check out this article.

Boot up the new container

Before we boot this container up, edit your host Vagrantfile and set up the port forward so that port 8080 on the host points to port 8080 on the guest:






Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.hostname = "docker-host"
  config.vm.provision "docker"
  config.vm.network :forwarded_port, guest: 80, host: 4567
  config.vm.network :forwarded_port, guest: 8080, host: 8080
  config.vm.synced_folder '../drupal/profiles/myprofile', '/srv/myprofile', type: 'rsync'
end

Now bring up the new container:

$ vagrant up jenkins

or if you've already brought it up once before, you may just need to run reload:

$ vagrant reload jenkins

You should now be able to hit Jenkins at the URL http://localhost:8080

Jenkins Dashboard

Install the git plugins for Jenkins

Now that you have Jenkins up and running, we need to install the git plugins. Click on the "Manage Jenkins" link in the left navigation, then click "Manage Plugins" in the list given to you, and then click on the "Available" Tab. Filter the list with the phrase "git client" in the filter box. Check the two boxes to install plugins, then hit "Download now and install after restart".

Jenkins Plugin Install

On the following screen, check the box to Restart Jenkins when installation is complete.

Setup the Jenkins job

It's time to set up Jenkins. If you've never set up a Jenkins job, here is a quick crash course.

  1. Click the New Item link in the left navigation. Name your build job, and choose Freestyle project. Click Ok.
  2. Configure the git repo. We are going to configure Jenkins to pull code directly from your repository and build the Docker image from that.
  3. Add the build steps. Scroll down toward the bottom of the screen, click the arrow next to Add build step, and choose Execute Shell. We are going to add three build steps, sketched out after this list. First we build the Docker image with docker build -t="tomfriedhof/docker_blog_post" . (notice the trailing dot) and give it a name with the -t parameter, then we log in to DockerHub, and finally we push the newly created image to DockerHub.
  4. Hit Save, then on the next screen hit the button that says Build Now.
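The three Execute Shell steps could look roughly like this (only the build command is quoted from the post; the login flags and credential placeholders are assumptions):

# Step 1: build the image from the Dockerfile in the workspace (note the trailing dot)
docker build -t="tomfriedhof/docker_blog_post" .

# Step 2: log in to DockerHub (replace the placeholders with real credentials)
docker login -u "$DOCKER_HUB_USER" -p "$DOCKER_HUB_PASS"

# Step 3: push the freshly built image up to DockerHub
docker push tomfriedhof/docker_blog_post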

If everything went as planned, you should have a new Docker image posted on DockerHub: https://hub.docker.com/r/tomfriedhof/docker_blog_post/

Wrapping it up

There you have it, we now have an automated build that will create and push Docker images to DockerHub. You can add on to this Jenkins job so that it polls your GitHub repository and automatically runs this build anytime something changes in the tracking repo.

As another option, if you don't want to go through all the trouble of setting up your own Jenkins server just to do what I showed you, DockerHub can do this for you. Go check out their article on how to set up automated builds with Docker.

Now that we have a baked container with our application code within it, the next step is to deploy the container. That is the next post in this series. Stay tuned!


Dec 02 2015

Now that the release of Drupal 8 is finally here, it is time to adapt our Drupal 7 build process to Drupal 8, while utilizing Docker. This post will take you through how we construct sites on Drupal 8 using dependency managers on top of Docker with Vagrant.

Keep a clean upstream repo

Over the past 3 or 4 years, developing websites has changed dramatically with the increasing popularity of dependency managers such as Composer, Bundler, npm, and Bower, amongst other tools. Drupal even has its own system that can handle dependencies, called Drush, albeit it is more than just a dependency manager for Drupal.

With all of these tools at our disposal, it makes it very easy to include code from other projects in our application while not storing any of that code in the application code repository. This concept dramatically changes how you would typically maintain a Drupal site, since the typical way to manage a Drupal codebase is to have the entire Drupal Docroot, including all dependencies, in the application code repository. Having everything in the docroot is fine, but you gain so much more power using dependency managers. You also lighten up the actual application codebase when you utilize dependency managers, because your repo only contains code that you wrote. There are tons of advantages to building applications this way, but I have digressed, this post is about how we utilize these tools to build Drupal sites, not an exhaustive list of why this is a good idea. Leave a comment if you want to discuss the advantages / disadvantages of this approach.

Application Code Repository

We've got a lot going on in this repository. We won't dive too deep into the weeds looking at every single file, but I will give a high level overview of how things are put together.

Installation Automation (begin with the end in mind)

The simplicity in this process is that when a new developer needs to get a local development environment for this project, they only have to execute two commands:

$ vagrant up --no-parallel
$ make install

Within minutes a new development environment is constructed with VirtualBox and Docker on the developer's machine, so that they can immediately start contributing to the project. The first command boots up 3 Docker containers -- a webserver, a MySQL server, and a Jenkins server. The second command invokes Drush to build the document root within the webserver container and then installs Drupal.

We also utilize one more command, kept running within a separate terminal window, to keep files synced from our host machine to the Drupal 8 container.

$ vagrant rsync-auto drupal8

Breaking down the two installation commands

vagrant up --no-parallel

If you've read any of [my]({% post_url 2015-06-04-hashing-out-docker-workflow %}) [previous]({% post_url 2015-07-19-docker-with-vagrant %}) [posts]({% post_url 2015-09-22-local-docker-development-with-vagrant %}), you know I'm a fan of using Vagrant with Docker. I won't go into detail about how the environment is getting set up; you can read my previous posts on how we used Docker with Vagrant. For completeness, here is the Vagrantfile and Dockerfile that vagrant up reads to set up the environment.

Vagrantfile






require 'fileutils'

MYSQL_ROOT_PASSWORD="root"

unless File.exists?("keys")
  Dir.mkdir("keys")
  ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
  File.open("keys/id_rsa.pub", 'w') { |file| file.write(ssh_pub_key) }
end

unless File.exists?("Dockerfiles/jenkins/keys")
  Dir.mkdir("Dockerfiles/jenkins/keys")
  FileUtils.copy("#{Dir.home}/.ssh/id_rsa", "Dockerfiles/jenkins/keys/id_rsa")
end

Vagrant.configure("2") do |config|

  config.vm.define "mysql" do |v|

    v.vm.provider "docker" do |d|
      d.vagrant_machine = "apa-dockerhost"
      d.vagrant_vagrantfile = "./host/Vagrantfile"
      d.image = "mysql:5.7.9"
      d.env = { :MYSQL_ROOT_PASSWORD => MYSQL_ROOT_PASSWORD }
      d.name = "mysql-container"
      d.remains_running = true
      d.ports = [
        "3306:3306"
      ]
    end

  end

  config.vm.define "jenkins" do |v|

    v.vm.synced_folder ".", "/srv", type: "rsync",
        rsync__exclude: get_ignored_files(),
        rsync__args: ["--verbose", "--archive", "--delete", "--copy-links"]

    v.vm.provider "docker" do |d|
      d.vagrant_machine = "apa-dockerhost"
      d.vagrant_vagrantfile = "./host/Vagrantfile"
      d.build_dir = "./Dockerfiles/jenkins"
      d.name = "jenkins-container"

      d.volumes = [
          "/home/rancher/.composer:/root/.composer",
          "/home/rancher/.drush:/root/.drush"
      ]
      d.remains_running = true
      d.ports = [
          "8080:8080"
      ]
    end

  end

  config.vm.define "drupal8" do |v|

    v.vm.synced_folder ".", "/srv/app", type: "rsync",
      rsync__exclude: get_ignored_files(),
      rsync__args: ["--verbose", "--archive", "--delete", "--copy-links"],
      rsync__chown: false

    v.vm.provider "docker" do |d|
      d.vagrant_machine = "apa-dockerhost"
      d.vagrant_vagrantfile = "./host/Vagrantfile"
      d.build_dir = "."
      d.name = "drupal8-container"
      d.remains_running = true

      d.volumes = [
        "/home/rancher/.composer:/root/.composer",
        "/home/rancher/.drush:/root/.drush"
      ]
      d.ports = [
        "80:80",
        "2222:22"
      ]
      d.link("mysql-container:mysql")
    end

  end

end

def get_ignored_files()
  ignore_file = ".rsyncignore"
  ignore_array = []

  if File.exists? ignore_file and File.readable? ignore_file
    File.read(ignore_file).each_line do |line|
      ignore_array << line.chomp
    end
  end

  ignore_array
end

One of the cool things we are doing in this Vagrantfile is setting up a VOLUME for the Composer and Drush caches that persists beyond the life of the container. When our application container is rebuilt, we don't want to download 100MB of Composer dependencies every time. By utilizing a Docker VOLUME, that folder is mounted from the actual Docker host.

Dockerfile (drupal8-container)

FROM ubuntu:trusty


# Environment variables used by the build and install scripts.
ENV PROJECT_ROOT /srv/app
ENV DOCUMENT_ROOT /var/www/html
ENV DRUPAL_PROFILE=apa_profile


# Install Apache, PHP, and supporting packages.
RUN apt-get update
RUN apt-get install -y \
	vim \
	git \
	apache2 \
	php-apc \
	php5-fpm \
	php5-cli \
	php5-mysql \
	php5-gd \
	php5-curl \
	libapache2-mod-php5 \
	curl \
	mysql-client \
	openssh-server \
	phpmyadmin \
	wget \
	unzip \
	supervisor
RUN apt-get clean


# Install Composer globally.
RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/local/bin/composer


RUN mkdir /root/.ssh && chmod 700 /root/.ssh && touch /root/.ssh/authorized_keys && chmod 600 /root/.ssh/authorized_keys
RUN echo 'root:root' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN mkdir /var/run/sshd && chmod 0755 /var/run/sshd
RUN mkdir -p /root/.ssh
COPY keys/id_rsa.pub /root/.ssh/authorized_keys
RUN chmod 600 /root/.ssh/authorized_keys
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd


# Install Drush 8 globally via Composer.
RUN composer global require drush/drush:8.0.0-rc3
RUN ln -nsf /root/.composer/vendor/bin/drush /usr/local/bin/drush


# Stash the Composer cache; the install script moves it back onto the mounted volume.
RUN mv /root/.composer /tmp/


RUN sed -i 's/display_errors = Off/display_errors = On/' /etc/php5/apache2/php.ini
RUN sed -i 's/display_errors = Off/display_errors = On/' /etc/php5/cli/php.ini


RUN sed -i 's/AllowOverride None/AllowOverride All/' /etc/apache2/apache2.conf
RUN a2enmod rewrite


RUN echo '[program:apache2]\ncommand=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"\nautorestart=true\n\n' >> /etc/supervisor/supervisord.conf
RUN echo '[program:sshd]\ncommand=/usr/sbin/sshd -D\n\n' >> /etc/supervisor/supervisord.conf





WORKDIR $PROJECT_ROOT
EXPOSE 80 22
CMD exec supervisord -n

We have xdebug commented out in the Dockerfile, but it can easily be uncommented if you need to step through code. Simply uncomment the two RUN commands and run vagrant reload drupal8

make install

We utilize a Makefile in all of our projects, whether it be Drupal, nodejs, or Laravel. This is so that we have a similar way to install applications, regardless of the underlying technology that is being executed. In this case make install is executing a drush command. Below are the contents of our Makefile for this project:

all: init install

init:
	vagrant up --no-parallel

install:
	bin/drush @dev install.sh

rebuild:
	bin/drush @dev rebuild.sh

clean:
	vagrant destroy drupal8
	vagrant destroy mysql

mnt:
	sshfs -C -p 2222 [email protected]:/var/www/html docroot

What this command does is SSH into the drupal8-container, utilizing drush aliases and drush shell aliases.

install.sh

The make install command executes a file, within the drupal8-container, that looks like this:

#!/usr/bin/env bash

echo "Moving the contents of composer cache into place..."
mv /tmp/.composer/* /root/.composer/

PROJECT_ROOT=$PROJECT_ROOT DOCUMENT_ROOT=$DOCUMENT_ROOT $PROJECT_ROOT/bin/rebuild.sh

echo "Installing Drupal..."
cd $DOCUMENT_ROOT && drush si $DRUPAL_PROFILE --account-pass=admin -y
chgrp -R www-data sites/default/files
rm -rf ~/.drush/files && cp -R sites/default/files ~/.drush/

echo "Importing config from sync directory"
drush cim -y

You can see on line 6 of the install.sh file that it executes a rebuild.sh file to actually build the Drupal document root utilizing Drush Make. The reason for separating the build from the install is so that you can run make rebuild without completely reinstalling the Drupal database. After the document root is built, the drush site-install apa_profile command is run to actually install the site. Notice that we are utilizing Installation Profiles for Drupal.

We utilize installation profiles so that we can define modules available for the site, as well as specify default configuration to be installed with the site.
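A bare-bones profile definition might look something like this (a sketch; the actual apa_profile.info.yml and its dependency list are not shown in this post):

# profiles/apa_profile/apa_profile.info.yml (hypothetical contents)
name: 'APA Profile'
type: profile
description: 'Defines the modules and default configuration installed with the site.'
core: 8.x
dependencies:
  - ctools
  - pathauto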

We work hard to achieve the ability to have Drupal install with all the necessary configuration in place out of the gate. We don't want to be passing around a database to get up and running with a new site.

We utilize the Devel Generate module to create the initial content for sites while developing.

rebuild.sh

The rebuild.sh file is responsible for building the Drupal docroot:

#!/usr/bin/env bash

if [ -d "$DOCUMENT_ROOT/sites/default/files" ]
then
    echo "Moving files to ~/.drush/..."
    mv $DOCUMENT_ROOT/sites/default/files /root/.drush/
fi

echo "Deleting Drupal and rebuilding..."
rm -rf $DOCUMENT_ROOT

echo "Downloading contributed modules..."
drush make -y $PROJECT_ROOT/drupal/make/dev.make $DOCUMENT_ROOT

echo "Symlink profile..."
ln -nsf $PROJECT_ROOT/drupal/profiles/apa_profile $DOCUMENT_ROOT/profiles/apa_profile

echo "Downloading Composer Dependencies..."
cd $DOCUMENT_ROOT && php $DOCUMENT_ROOT/modules/contrib/composer_manager/scripts/init.php && composer drupal-update

echo "Moving settings.php file to $DOCUMENT_ROOT/sites/default/..."
rm -f $DOCUMENT_ROOT/sites/default/settings*
cp $PROJECT_ROOT/drupal/config/settings.php $DOCUMENT_ROOT/sites/default/
cp $PROJECT_ROOT/drupal/config/settings.local.php $DOCUMENT_ROOT/sites/default/
ln -nsf $PROJECT_ROOT/drupal/config/sync $DOCUMENT_ROOT/sites/default/config
chown -R www-data $PROJECT_ROOT/drupal/config/sync

if [ -d "/root/.drush/files" ]
then
    cp -Rf /root/.drush/files $DOCUMENT_ROOT/sites/default/
    chmod -R g+w $DOCUMENT_ROOT/sites/default/files
    chgrp -R www-data sites/default/files
fi

This file essentially downloads Drupal using the dev.make drush make file. It then runs composer drupal-update to download any composer dependencies in any of the modules. We use the composer manager module to help with composer dependencies within the Drupal application.

Running the drush make dev.make includes two other Drush Make files, apa-cms.make (the application make file) and drupal-core.make. Only dev dependencies should go in dev.make. Application dependencies go into apa-cms.make. Any core patches that need to be applied go into drupal-core.make.

Our Jenkins server builds the prod.make file instead of dev.make. Any production-specific modules would go in the prod.make file.

Our make files for this project look like this so far:

dev.make

core: "8.x"

api: 2

defaults:
  projects:
    subdir: "contrib"

includes:
  - "apa-cms.make"

projects:
  devel:
    version: "1.x-dev"

apa-cms.make

core: "8.x"

api: 2

defaults:
projects:
subdir: "contrib"

includes:

- drupal-core.make

projects:
address:
version: "1.0-beta2"

composer_manager:
version: "1.0-rc1"

config_update:
version: "1.x-dev"

ctools:
version: "3.0-alpha17"

draggableviews:
version: "1.x-dev"

ds:
version: "2.0"

features:
version: "3.0-alpha4"

field_collection:
version: "1.x-dev"

field_group:
version: "1.0-rc3"

juicebox:
version: "2.0-beta1"

layout_plugin:
version: "1.0-alpha19"

libraries:
version: "3.x-dev"

menu_link_attributes:
version: "1.0-beta1"

page_manager:
version: "1.0-alpha19"

pathauto:
type: "module"
download:
branch: "8.x-1.x"
type: "git"
url: "http://github.com/md-systems/pathauto.git"

panels:
version: "3.0-alpha19"

token:
version: "1.x-dev"

zurb_foundation:
version: "5.0-beta1"
type: "theme"

libraries:
juicebox:
download:
type: "file"
url: "https://www.dropbox.com/s/hrthl8t1r9cei5k/juicebox.zip?dl=1"

(once this project goes live, we will pin the version numbers)

drupal-core.make

core: "8.x"

api: 2

projects:
  drupal:
    version: 8.0.0
    patch:
      - https://www.drupal.org/files/issues/2611758-2.patch

prod.make

core: "8.x"

api: 2

includes:

- "apa-cms.make"

projects:
apa_profile:
type: "profile"
subdir: "."
download:
type: "copy"
url: "file://./drupal/profiles/apa_profile"

At the root of our project we also have a Gemfile, specifically to install the compass compiler along with various sass libraries. We install these tools on the host machine, and "watch" those directories from the host. vagrant rsync-auto watches any changed files and rsyncs them to the drupal8-container.
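The Gemfile itself is not shown in this post, but a minimal sketch would be something along these lines (the exact Sass libraries vary per theme and are an assumption here):

# Hypothetical Gemfile; only compass is certain, add whichever Sass gems the theme needs.
source 'https://rubygems.org'

gem 'compass'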

bundler

From the project root, installing these dependencies and running a compass watch is simple:

$ bundle
$ bundle exec compass watch path/to/theme

bower

We pull in any 3rd party front-end libraries such as Foundation, Font Awesome, etc... using Bower. From within the theme directory:

$ bower install

There are a few things we do not commit to the application repo, as a result of the above commands.

  • The CSS directory
  • Bower Components directory

Deploy process

As I stated earlier, we utilize Jenkins CI to build an artifact that we can deploy. Within the Jenkins job that handles deploys, each of the above steps is executed to create a document root that can be deployed. Projects that we build to work on Acquia or Pantheon actually have a build step to also push the updated artifact to their respective repositories at the host, to take advantage of the automation that Pantheon and Acquia provide.
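That final push step might look roughly like this inside the Jenkins job (a hypothetical sketch; the remote name and paths are made up, and BUILD_NUMBER is Jenkins' built-in variable):

# Commit the freshly built docroot and push it to the hosting provider's git remote.
cd build/docroot
git add -A
git commit -m "Build artifact from Jenkins build ${BUILD_NUMBER}"
git push acquia master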

Conclusion

Although this wasn't an exhaustive walk-through of how we structure and build sites using Drupal, it should give you a general idea of how we do it. If you have specific questions as to why we go through this entire build process just to set up Drupal, please leave a comment. I would love to continue the conversation.

Look out for a video on this topic in the coming weeks. I covered a lot in this post without going into much detail. The intent of this post was to give a 10,000-foot view of the process. The upcoming video on this process will get much closer to the tarmac!

As an aside, one caveat that we did run into with setting up default configuration in our Installation Profile was with Configuration Management UUIDs. You can only sync configuration between sites that are clones. We have overcome this limitation with a workaround in our installation profile. I'll leave that topic for my next blog post in a few weeks.

Nov 14 2015

Shoov.io is a nifty website testing tool created by Gizra. We at ActiveLAMP were first introduced to Shoov.io at DrupalCon LA. In fact, Shoov.io is built on, you guessed it, Drupal 7, and it is an open source visual regression toolkit.

Shoov.io uses webdrivercss, graphicsmagick, and a few other libraries to compare images. Once the images are compared, you can visually see the changes in the Shoov.io online app. When installing Shoov you can choose to install it directly into your project directory/repository, or you can use a separate directory/repository to house all of your tests and screenshots. Initially, when testing Shoov, we had it contained in a separate directory, but with our most recent project we opted to install Shoov directly into our project with the hope of having it run on a commit or pull request basis using Travis CI and SauceLabs.

[embedded content]

## Installation

To get Shoov installed, I will assume for this install that you want to install it directly into your project; navigate into your project directory using the terminal.

Install the Yeoman Shoov generator globally (may have to sudo)

npm install -g mocha yo generator-shoov

Make sure you have Composer installed globally

curl -sS https://getcomposer.org/installer | php
sudo mv composer.phar /usr/local/bin/composer

Make sure you have Brew installed (MacOSX)

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Install GraphicsMagick (MacOSX)

brew install graphicsmagick

Next you will need to install dependencies if you don't have them already.

npm install -g yo
npm install -g file-type

Now we can build the test suite using the Yeoman generator*

yo shoov --base-url=http://activelamp.com

*Running this command may reveal more dependencies that you need to install first.

This generator will scaffold all of the directories that you need to start writing tests for your project. It will also give you some that you may not need at the moment, such as behat. You will find the example test, named test.js, in the visual-monitor directory within the test directory. We like to split our tests into multiple files, so we might rename our test to homepage.js. Here is what the homepage.js file looks like when you first open it.

'use strict'

var shoovWebdrivercss = require('shoov-webdrivercss')






var capsConfig = {
  chrome: {
    browser: 'Chrome',
    browser_version: '42.0',
    os: 'OS X',
    os_version: 'Yosemite',
    resolution: '1024x768',
  },
  ie11: {
    browser: 'IE',
    browser_version: '11.0',
    os: 'Windows',
    os_version: '7',
    resolution: '1024x768',
  },
  iphone5: {
    browser: 'Chrome',
    browser_version: '42.0',
    os: 'OS X',
    os_version: 'Yosemite',
    chromeOptions: {
      mobileEmulation: {
        deviceName: 'Apple iPhone 5',
      },
    },
  },
}

var selectedCaps = process.env.SELECTED_CAPS || undefined
var caps = selectedCaps ? capsConfig[selectedCaps] : undefined

var providerPrefix = process.env.PROVIDER_PREFIX
  ? process.env.PROVIDER_PREFIX + '-'
  : ''
var testName = selectedCaps
  ? providerPrefix + selectedCaps
  : providerPrefix + 'default'

var baseUrl = process.env.BASE_URL
  ? process.env.BASE_URL
  : 'http://activelamp.com'

var resultsCallback = process.env.DEBUG
  ? console.log
  : shoovWebdrivercss.processResults

describe('Visual monitor testing', function() {
  this.timeout(99999999)
  var client = {}

  before(function(done) {
    client = shoovWebdrivercss.before(done, caps)
  })

  after(function(done) {
    shoovWebdrivercss.after(done)
  })

  it('should show the home page', function(done) {
    client
      .url(baseUrl)
      .webdrivercss(
        testName + '.homepage',
        {
          name: '1',
          exclude: [],
          remove: [],
          hide: [],
          screenWidth: selectedCaps == 'chrome' ? [640, 960, 1200] : undefined,
        },
        resultsCallback
      )
      .call(done)
  })
})

## Modifications

We prefer not to repeat configuration in our projects, so we move the configuration setup to a file outside of the test folder and require it. We create this file by copying the configuration out of the file above and adding module.exports entries for each of the variables. Our config file looks like this:

var shoovWebdrivercss = require('shoov-webdrivercss')






var capsConfig = {
  chrome: {
    browser: 'Chrome',
    browser_version: '42.0',
    os: 'OS X',
    os_version: 'Yosemite',
    resolution: '1024x768',
  },
  ie11: {
    browser: 'IE',
    browser_version: '11.0',
    os: 'Windows',
    os_version: '7',
    resolution: '1024x768',
  },
  iphone5: {
    browser: 'Chrome',
    browser_version: '42.0',
    os: 'OS X',
    os_version: 'Yosemite',
    chromeOptions: {
      mobileEmulation: {
        deviceName: 'Apple iPhone 5',
      },
    },
  },
}

var selectedCaps = process.env.SELECTED_CAPS || undefined
var caps = selectedCaps ? capsConfig[selectedCaps] : undefined

var providerPrefix = process.env.PROVIDER_PREFIX
  ? process.env.PROVIDER_PREFIX + '-'
  : ''
var testName = selectedCaps
  ? providerPrefix + selectedCaps
  : providerPrefix + 'default'

var baseUrl = process.env.BASE_URL
  ? process.env.BASE_URL
  : 'http://activelamp.com'

var resultsCallback = process.env.DEBUG
  ? console.log
  : shoovWebdrivercss.processResults

module.exports = {
  caps: caps,
  selectedCaps: selectedCaps,
  testName: testName,
  baseUrl: baseUrl,
  resultsCallback: resultsCallback,
}

Once we have this set up, we need to require it in our test and rewrite the variables in our test to work with the new configuration file. That file now looks like this:

'use strict'

var shoovWebdrivercss = require('shoov-webdrivercss')
var config = require('../configuration.js')

describe('Visual monitor testing', function() {
  this.timeout(99999999)
  var client = {}

  before(function(done) {
    client = shoovWebdrivercss.before(done, config.caps)
  })

  after(function(done) {
    shoovWebdrivercss.after(done)
  })

  it('should show the home page', function(done) {
    client
      .url(config.baseUrl)
      .webdrivercss(
        config.testName + '.homepage',
        {
          name: '1',
          exclude: [],
          remove: [],
          hide: [],
          screenWidth:
            config.selectedCaps == 'chrome' ? [640, 960, 1200] : undefined,
        },
        config.resultsCallback
      )
      .call(done)
  })
})

## Running the test

Now we can run our test. For initial testing, if you don't have a BrowserStack or SauceLabs account, you can test using PhantomJS.

Note: You must have a repository or the test will fail.

In another terminal window run:

phantomjs --webdriver=4444

Return to the original terminal window and run:

SELECTED_CAPS=chrome mocha

This will run the tests specified for "chrome" in the configuration file and the screenWidths from within each test as specified by the default test.

Once the test runs you should see that it has passed. Of course, our test passed because we didn't have anything to compare it to. This test will create your initial baseline images. You will want to review these images in the webdrivercss directory and decide if you need to fix your site, your tests, or both.

You may have to remove, exclude, or hide elements in your tests. Removing an element will completely rip it from the DOM for the test and will shift your site around. Excluding will create a black box over the content that you do not want to show up; this is great for areas where you want to keep a consistent layout and the item is a fixed size. Hiding an element will hide it from view; it works similarly to remove but works better with child elements outside of the parent.

Once you review the baseline images, you may want to take the time to commit and push the new images to GitHub (this commit will be the one that appears in the interface later).
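For reference, those options are the arrays already present in the test above; the selectors here are hypothetical examples, not from this project:

{
  name: '1',
  exclude: ['.ad-banner'],        // covered with a black box in the screenshot
  remove: ['#live-chat-widget'],  // ripped out of the DOM entirely
  hide: ['.rotating-carousel'],   // hidden from view but still in the DOM
  screenWidth: config.selectedCaps == 'chrome' ? [640, 960, 1200] : undefined,
}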

## Comparing the Regressions

Once you modify your test or site, you can test it against the existing baseline. Now that you probably have a regression, you can go to the Shoov interface. From within the interface, you will select Visual Regression. The commit from your project will appear in a list; click the commit to view the regressions and take action on any issues that exist, or save your new baseline. Only images with a regression will show up in the interface, and only tests with regressions will show up in the list.

## What's Next

You can view the standalone GitHub repository here.

This is just the tip of the iceberg for us with Visual Regression testing. We hope to share more about our testing process and how we are using Shoov for our projects. Don't forget to share or comment if you like this post.

Oct 17 2015

A little over a year ago the ActiveLAMP website had undergone a major change -- we made the huge decision of moving away from using Drupal to manage its content in favor of building it as a static HTML site using Jekyll, hosted on Amazon S3. Not only did this extremely simplify our development stack, it also trimmed down our server requirements to the very bare minimum. Now, we are just hosting everything on a file storage server like it's 1993.

A few months ago we identified the need to restructure our URL schemes as part of an ongoing SEO campaign. As easy as that sounds, this, however, necessitates the implementation of 301 redirects from the older URL scheme to their newer, more SEO-friendly versions.

I'm gonna detail how I managed to (1) implement these redirects quite easily using an nginx service acting as a proxy, and (2) achieve parity between our local and production environments while keeping everything light-weight with the help of Docker.

Nginx vs Amazon S3 Redirects

S3 is a file storage service offered by AWS that not only allows you to store files but also allows you to host static websites in conjunction with Route 53. Although S3 gives you the ability to specify redirects, you'll need to use S3-specific configuration and routines. This alone wouldn't be ideal because not only would it tie us to S3 by the hips, but it is not a methodology that we could apply to any other environment (i.e. testing and dev environments on our local network and machines). For these reasons, I opted to use nginx as a very thin reverse proxy to accomplish the job.

Configuring Nginx

Rather than compiling the list of redirects manually, I wrote a tiny Jekyll plugin that can do it faster and more reliably. The plugin allows me to specify certain things within the main Jekyll configuration file and it will generate the proxy.conf file for me:



nginx:
    proxy_host:
    proxy_port: 80
    from_format: "/:year/:month/:day/:title/"

    redirects:
        - { from: "^/splash(.*)", to: "/$1", type: redirect }

With this in place, I am able to generate the proxy.conf by simply issuing this command:

> jekyll nginx_config > proxy.conf

This command will produce a proxy.conf file which will look like this:




rewrite ^/2008/09/21/drupalcampla\-revision\-control\-presentation/?(\?.*)?$ /blog/drupal/drupalcampla-revision-control-presentation$1 permanent;




rewrite ^/blog/development/aysnchronous\-php\-with\-message\-queues/?(\?.*)?$ /blog/development/asynchronous-php-with-message-queues/$1 permanent;


location / {
	proxy_set_header Host ;
	proxy_set_header X-Real-IP $remote_addr;
	proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
	proxy_pass http://:80;
}

You probably noticed that this is not a complete nginx configuration. However, all that needs to be done is to define a server directive which includes this file:



server {
    listen 80;
    include /etc/nginx/proxy.conf;
}

Here we have an nginx proxy listening on port 80 that knows how to redirect old URLs to the new ones and pass on any request to the S3 bucket on AWS.

From this point, I was able to change the DNS records on our domain to point to the nginx proxy service instead of pointing directly at S3.

Check out the documentation for more ways of specifying redirects and more advanced usage.

Docker

Spinning up the proxy locally is a breeze with the help of Docker. Doing this with Vagrant and a provisioner would require a lot of boilerplate and code. With Docker, everything (with the exception of the config file that is automatically generated) is under 10 lines!

The Dockerfile

Here I am using the official nginx image straight from DockerHub but added some minor modifications:



FROM nginx

# Remove the default server configuration.
RUN rm /etc/nginx/conf.d/default.conf

# Add the server block and the generated redirect/proxy rules.
ADD server.conf /etc/nginx/conf.d/
ADD proxy.conf /etc/nginx/

The build/nginx directory will contain everything the nginx proxy will need: the server.conf that you saw from the previous section, and the proxy.conf file which was generated by the jekyll nginx_config command.

Automating it with Grunt

Since we are using generator-jekyllrb, a Yeoman generator for Jekyll sites which uses Grunt to run a gamut of various tasks, I just had to write a grunt proxy task which does all the needed work when invoked:



...

grunt.initConfig({
    ...
    local_ip: process.env.LOCAL_IP,
    shell: {
        nginx_config: {
            command: 'jekyll nginx_config --proxy_host=192.168.0.3 --proxy_port=8080 --config=_config.yml,_config.build.yml --source=app > build/nginx/proxy.conf'
        },
        docker_build: {
            command: 'docker build -t jekyll-proxy build/nginx'
        },
        docker_run: {
            command: 'docker run -d -p 80:80 jekyll-proxy'
        }
    },
    ...
});

...

grunt.registerTask('proxy', [
      'shell:nginx_config',
      'shell:docker_build',
      'shell:docker_run'
]);

This requires grunt-shell.

With this in place, running grunt proxy will prepare the configuration, build the image, and run the proxy on http://192.168.99.100 where 192.168.99.100 is the address to the Docker host VM on my machine.

Note that this is a very simplified version of the actual Grunt task config that we actually use which just serves to illustrate the meager list of commands that is required to get the proxy configured and running.

I have set up a GitHub repository that replicates this set-up plus the actual Grunt task configuration we use that adds more logic around things like an auto-rebuilding the Docker image, cleaning up of stale Docker processes, configuration for different build parameters for use in production, etc. You can find it here: bezhermoso/jekyll-nginx-proxy-with-docker-demo.

Oct 10 2015

Running headless Drupal with a separate JavaScript framework on the front-end can provide amazing user experiences and easy theming. However, working with content editors with this separation can prove to be a tricky situation.

Problem

User story: As part of the content team, I need to be able to see a preview of the article on a separate front-end (Ember) application without ever saving the node.

As I started reading this user story I was tasked with, the wheels started turning as to how I would complete it. I had to get information that was not saved anywhere to an Ember front-end application with no POST abilities. I love challenges that take a lot of thought before jumping in. I talked with some of the other developers here and came up with a pretty nifty solution to the problem, which, come to find out, was just a mis-communication on the story... and when I was done the client was still excited about the possibilities it could open up. Keep in mind this was still in its early stages and there are still a lot of holes to fill.

Solution

The first thing I did was add a Preview link and attach a javascript file to the node (article type) edit page via a simple hook_form_FORM_ID_alter().



 function mymodule_form_article_node_form_alter(&$form, &$form_state, $form_id) {
	$form['#attached']['js'][] = drupal_get_path('module', 'mymodule') . '/js/preview.js';
	$form['actions']['preview_new'] = [
		'#weight' => '40',
		'#markup' => '<a href="#" id="preview-new">Preview on front-end</a>'
	];
}

Pretty simple. Since it was just the beginning stages I just threw it in an anchor tag to get the link there. So it wasn't pretty. Now to start gathering the node's data since it won't be saved anywhere.

Next step: Gather node info in js

So obviously the next step was to gather up the node's information from the node edit screen to prepare it for shipping to the front-end application. Most of the time spent here was just trying to mimic the API that the Ember front-end was already expecting.


(function ($, Drupal) {
  Drupal.behaviors.openNewSite = {
    attach : function(context, settings) {
      viewInNewSite.init();
    }
  }
})(jQuery, Drupal);

var viewInNewSite = (function($) {

  
  // Gathers media values from the node edit form (implementation omitted).
  var getMedia = function() {...};

  // Builds the message object that mimics the API the Ember app expects (implementation omitted).
  var getMessage = function() {...};

  // Opens the front-end preview window and posts the gathered data to it.
  var attachPreviewClick = function(link) {
    var $link = $(link);
    $link.on('click', function() {
      var newWindow = window.open('http://the-front-end.com/articles/preview/preview-article');
      var message = getMessage();
      // Give the new window a moment to load before posting the message.
      setTimeout(function() {
        newWindow.postMessage(JSON.stringify(message), 'http://the-front-end.com');
      }, 1000);
    });
  };

  // Attach the click handler to the preview link added in the form alter.
  var init = function() {
    attachPreviewClick('#preview-new');
  };

  return {
    init: init
  }
})(jQuery);

Using window.open() to open what will be the route in the Ember application returns the newWindow for us to post a message to. (I used a setTimeout() here because the opening application took a while to get started and the message would get missed; this held me up for a while since I knew the message should be getting sent.) Then we use postMessage() on newWindow to ship off a message (our JSON object) to the newly opened window, regardless of whether it is on another domain. Insert security concerns here... but now we're ready to set up the front-end Ember application route.

To Ember!

The next step was to set up the Ember front-end application to listen for the message from the original window. Set up the basic route:


this.resource('articles', function() {
  this.route('preview', { path: '/preview/article-preview' })
  this.route('article', { path: '/:article_id' })
})

The application that I was previewing articles into already had a way to view articles by id as you see in the above code. So I didn't want to have to duplicate anything... I wanted to use the same template and model for articles that were already developed. Since that was taken care of for me, it was just time to create a model for this page and make sure that I use the correct template. So start by creating the ArticlesPreviewRoute:

App.ArticlesPreviewRoute = (function() {

  /**
   * Adds an event listener to the window for incoming messages.
   *
   * @return {Promise}
   */
  var getArticle = function() {
    return new Promise(function(resolve, reject) {
      window.addEventListener('message', function(event) {
        if (event.origin.indexOf('the-back-end.com') !== -1) {
          var data = JSON.parse(event.data);
          resolve(data.articles[0]);
        }
      }, false);
    });
  };

  // getArticle() is defined before the return so the route can call it later.
  return Ember.Route.extend({
    controllerName: 'articles.article',

    renderTemplate: function() {
      this.render('articles.article');
    },

    setupController: function(controller, model) {
      controller.set('model', model);
    },

    model: function (params) {
      return getArticle();
    }
  });
})();

The getArticle() function above returns a new Promise and adds an event listener that verifies that the message is coming from the correct origin. Clicking the link from Drupal now takes the content to a new tab and loads up the article. There are still some concerns that need to be resolved, such as security measures and what happens if a user visits the path directly.

To cover the latter concern, a second promise either resolves with the article data or rejects if a set amount of time passes without a message arriving from the opening window.

App.ArticlesPreviewRoute = (function() {

  var getArticle = function() {
    return new Promise(function(resolve, reject) {...});
  };

  // Rejects after the given number of seconds unless the wrapped promise resolves first.
  var previewWait = function(seconds, promise) {
    return new Promise(function(resolve, reject) {
      setTimeout(function() {
        reject({'Bad': 'No data Found'});
      }, seconds * 1000);
      promise.then(function(data) {
        resolve(data);
      }).catch(reject);
    });
  };

  return Ember.Route.extend({
    ...
    model: function (params) {
      var self = this;
      var article = previewWait(10, getArticle()).catch(function() {
        self.transitionTo('not-found');
      });
      return article;
    }
  });
})();

There you have it! A way to preview an article from a Drupal backend to an Ember front-end application without ever saving the article. A similar approach could be used for any of your favorite JavaScript frameworks. Plus, this can be advanced even further into an "almost live" updating front-end that constantly checks the state of the fields on the Drupal backend. There have been thoughts of turning this into a Drupal module with some extra bells and whistles for configuring the way the JSON object is structured to fit any front-end framework... Is there a need for this? Let us know in the comments below or tweet us! Now, go watch the video!

Sep 23 2015
Sep 23

This post is part 3 in the series ["Hashing out a docker workflow"]({% post_url 2015-06-04-hashing-out-docker-workflow %}). For background, checkout my previous posts.

Now that I've laid the groundwork for the approach that I want to take with local environment development with Docker, it's time to explore how to make the local environment "workable". In this post we will build on top of what we did in my last post, [Docker and Vagrant]({% post_url 2015-07-19-docker-with-vagrant %}), and create a working local copy that automatically updates the code inside the container running Drupal.

Requirements of a local dev environment

Before we get started, it is always a good idea to define what we expect to get out of our local development environment and set some requirements. You can define these requirements however you like, but since ActiveLAMP is an agile shop, I'll define our requirements as user stories.

User Stories

As a developer, I want my local development environment setup to be easy and automatic, so that I don't have to spend the entire day following a list of instructions. The fewer the commands, the better.

As a developer, my local development environment should run the same exact OS configuration as the stage and prod environments, so that we don't run into "works on my machine" scenarios.

As a developer, I want the ability to log into the local dev server / container, so that I can debug things if necessary.

As a developer, I want to work on files local to my host filesystem, so that the IDE I am working in is as fast as possible.

As a developer, I want the files that I change on my localhost to automatically sync to the guest filesystem that is running my development environment, so that I do not have to manually push or pull files to the local server.

Now that we know what done looks like, let's start fulfilling these user stories.

Things we get for free with Vagrant

We have all worked on projects that have a README file with a long list of steps just to setup a working local copy. To fulfill the first user story, we need to encapsulate all steps, as much as possible, into one command:

$ vagrant up

We got a good start on our one-command setup in [my last blog post]({% post_url 2015-07-19-docker-with-vagrant %}). If you haven't read that post yet, go check it out now. We are going to be building on that in this post. My last post essentially resolves the first three stories in our user story list. This is the essence of using Vagrant: to aid in setting up virtual environments with very little effort, and to dispose of them when no longer needed, with vagrant up and vagrant destroy, respectively.

Since we will be defining Docker images and/or using existing docker containers from DockerHub, user story #2 is fulfilled as well.

For user story #3, it's not as straightforward to log into your Docker host. Typically with Vagrant you would type vagrant ssh to get into the virtual machine, but since our host machine's Vagrantfile is in a subdirectory called /host, you have to change into that directory first.

$ cd host
$ vagrant ssh

Another way you can do this is by using the vagrant global-status command. You can execute that command from anywhere and it will provide a list of all known virtual machines with a short hash in the first column. To ssh into any of these machines just type:

$ vagrant ssh <short-hash>

Replace <short-hash> with the actual hash of the machine.

Connecting into a container

Most containers run a single process and may not have an SSH daemon running. You can use the docker attach command to connect to any running container, but beware: if you didn't start the container with STDIN and STDOUT attached, you won't get very far.

Another option you have for connecting is using docker exec to start an interactive process inside the container. For example, to connect to the drupal-container that we created in my last post, you can start an interactive shell using the following command:

$ sudo docker exec -t -i drupal-container /bin/bash

This will return an interactive shell on the drupal-container that you will be able to poke around on. Once you disconnect from that shell, the process will end inside the container.

Getting files from host to app container

Our next two user stories have to do with working on files native to the localhost. When developing our application, we don't want to bake the source code into a docker image. Code is always changing and we don't want to rebuild the image every time we make a change during the development process. For production, we do want to bake the source code into the image, to achieve the immutable server pattern. However in development, we need a way to share files between our host development machine and the container running the code.

We've probably tried every approach available to us when it comes to working on shared files with Vagrant. VirtualBox shared folders are just way too slow. NFS shared folders were a little bit faster, but still really slow. We've used sshfs to connect the remote filesystem directly to the localhost, which created a huge performance increase in terms of how the app responded, but was a pain in the neck in terms of how we used VCS, and it caused performance issues with the IDE. PHPStorm had to index files over a network connection, albeit a local network connection, but it was still noticeably slower when working on large codebases like Drupal.

The solution that we use to date is rsync, specifically vagrant-gatling-rsync. You can check out the vagrant gatling rsync plugin on GitHub, or just install it by typing:

$ vagrant plugin install vagrant-gatling-rsync

Syncing files from host to container

To get files from our localhost to the container, we must first get our working files to the Docker host. Using the host Vagrantfile that we built in my last blog post, this can be achieved by adding one line:

config.vm.synced_folder '../drupal/profiles/myprofile', '/srv/myprofile', type: 'rsync'

Your Vagrantfile within the /host directory should now look like this:






Vagrant.configure(2) do |config|
config.vm.box = "ubuntu/trusty64"
config.vm.hostname = "docker-host"
config.vm.provision "docker"
config.vm.network :forwarded_port, guest: 80, host: 4567
config.vm.synced_folder '../drupal/profiles/myprofile', '/srv/myprofile', type: 'rsync'
end

We are syncing a Drupal profile from within the drupal directory off of the project root to the /srv/myprofile directory within the Docker host.

Now it's time to add an argument to run when docker run is executed by Vagrant. To do this we can specify the create_args parameter in the container Vagrantfile. Add the following line into the container Vagrantfile:

docker.create_args = ['--volume="/srv/myprofile:/var/www/html/profiles/myprofile"']

This file should now look like:






Vagrant.configure(2) do |config|
config.vm.provider "docker" do |docker|
docker.vagrant_vagrantfile = "host/Vagrantfile"
docker.image = "drupal"
docker.create_args = ['--volume="/srv/myprofile:/var/www/html/profiles/myprofile"']
docker.ports = ['80:80']
docker.name = 'drupal-container'
end
end

This parameter that we are passing maps the directory we are rsyncing to on the docker host to the profiles directory within the Drupal installation that was included in the Drupal docker image from DockerHub.

Create the installation profile

This blog post doesn't intend to go into how to create a Drupal install profile, but if you aren't using profiles for building Drupal sites, you should definitely have a look. If you have questions regarding why using Drupal profiles is a good idea, leave a comment.

Let's create our simple profile. Drupal requires two files to create a profile. From the project root, type the following:

$ mkdir -p drupal/profiles/myprofile
$ touch drupal/profiles/myprofile/{myprofile.info,myprofile.profile}

Now edit each file that you just created with the minimum information that you need.

myprofile.info

name = Custom Profile
description = My custom profile
core = 7.x

myprofile.profile



function myprofile_install_tasks() {
  // No custom install tasks yet.
}
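The install tasks hook can stay empty for now. For reference, this is where install-time steps would be declared later on; here is a minimal, hypothetical example of what it could eventually return (the task callback name is made up):

function myprofile_install_tasks() {
  return array(
    'myprofile_setup_defaults' => array(
      'display_name' => st('Set up site defaults'),  // Label shown in the installer's task list.
      'type' => 'normal',                            // Other valid types include 'batch' and 'form'.
    ),
  );
}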

Start everything up

We now have everything we need in place to just type vagrant up and also have a working copy. Go to the project root and run:

$ vagrant up

This will build your docker host as well as create your drupal container. As I mentioned in a previous post, starting up the container sometimes requires me to run vagrant up a second time. I'm still not sure what's going on there.

After everything is up and running, you will want to run the rsync-auto command for the Docker host, so that any changes you make locally traverse down to the Docker host and then to the container. The easiest way to do this is:

$ cd host
$ vagrant gatling-rsync-auto

Now visit the URL to your running container at http://localhost:4567 and you should see the new profile that you've added.

Custom Install Profile

Conclusion

We covered a lot of ground in this blog post. We were able to accomplish all of the stated requirements above with just a little tweaking of a couple Vagrantfiles. We now have files that are shared from the host machine all the way down to the container that our app is run on, utilizing features built into Vagrant and Docker. Any files we change in our installation profile on our host immediately syncs to the drupal-container on the docker host.

At ActiveLAMP, we use a much more robust approach to build out installation profiles, utilizing Drush Make, which is out of scope for this post. This blog post simply lays out the general idea of how to accomplish getting a working copy of your code downstream using Vagrant and Docker.

In my next post, I'll continue to build on what I did here today, and introduce automation to automatically bake a Docker image with our source code baked in, utilizing Jenkins. This will allow us to release any number of containers easily to scale our app out as needed.

Aug 14 2015
Aug 14

I recently had time to install and take a look at Drupal 8. I am going to share my first take on Drupal 8 and some of the hang-ups that I came across. I read a few other blog posts that mentioned not to rely too heavily on one source for D8 documentation; with the rapid pace of change in D8, information becomes outdated rather quickly.

##Getting Started

My first instinct was to run over to drupal.org, grab a copy of the code base, and set it up on MAMP. Then I saw an advertisement for running Drupal 8 on Acquia Cloud Free and decided that would probably be a great starting point. Running through the setup on Acquia took only about eight minutes. This was great; having used Acquia and Pantheon before, this was an easy way to get started.

Next, I decided to pull down the code and get it running locally so that I could start testing out adding my own theme. Well... what took eight minutes for Acquia took considerably longer for me.

##Troubleshooting and Upgrading

The first roadblock that I ran into was that my MAMP was not running the required version of PHP (5.5.9 or higher), so I decided to upgrade to MAMP 3 to make life a little bit nicer. After setting up MAMP from scratch and making sure the other projects I had installed with MAMP still worked correctly, I was able to continue with the site install.

The second roadblock that I came across was not having Drush 7+ installed. The command line doesn't come out and say that you need to upgrade Drush (the docs on drupal.org do, and if you search the error it is one of the first results). It just spits out this error:

Fatal error: Class 'Drupal\Core\Session\AccountInterface' not found in .../docroot/core/includes/bootstrap.inc on line 64
Drush command terminated abnormally due to an unrecoverable error.

The next roadblock was that I was trying to clear the cache with Drush and didn't bother to read the documentation on drupal.org that outlines that drush cc all no longer exists and has been replaced by drush cr; Drush now uses the cache-rebuild command. However, this is not exactly clear, given that if you run drush cc all you get the exact same error as the one above.

Finally everything was set up and working properly. I decided to look around for a good guide to jumpstart getting the theme set up and landed on this Lullabot article. For the most part, the article was straightforward. Some parts didn't work and I skipped them; others didn't work and I tried to figure out why. Here is the list of things that I couldn't figure out:

  • Drush wouldn't change the default theme (complained about bootstrap level even though I was in my docroot)
  • Stylesheets-remove didn't work inside of my theme.info.yml file
  • Specifying my CSS in my theme.libraries.yml file seemed to be problematic, but I got it working after some time (probably user error)

##Conclusion

Drupal 8 looks clean and feels sleek and slimmed down. I'm really excited about the direction Drupal is headed. Overall the interface within Drupal hasn't changed too drastically (maybe some naming conventions, e.g. Extend instead of Modules). It looks a lot like one of our current sites running Panopoly, which has a great look and feel compared to out-of-the-box D7. I really like the simplicity of separating out yml files for specific needs and setting up the theme.

I look forward to writing more blog posts about Drupal 8 and maybe some tutorials and insights. Let us know your thoughts and ideas in the comments section below.

Jul 19 2015
Jul 19

This post is part 2 in a series of Docker posts hashing out a new docker workflow for our team. To gain background of what I want to accomplish with docker, checkout my previous post [hashing out a docker workflow]({% post_url 2015-06-04-hashing-out-docker-workflow %}).

In this post, we will venture into setting up docker locally, in the same repeatable way from developer to developer, by using Vagrant. By the end of this post, we'll have Drupal running in a container, using Docker. This post is focused on hashing out a Docker workflow with Vagrant, less about Drupal itself. I want to give a shout out to the folks that maintain the Drupal Docker image in the Docker Registry. It's definitely worth checking out, and using that as a base to build FROM for your custom images.

Running Docker Locally

There are several ways to go about setting up Docker locally. I don't intend to walk you through how to install Docker, you can find step-by-step installation instructions based on your OS in the Docker documentation. However, I am leaning toward taking an unconventional approach to installing Docker locally, not mentioned in the Docker documentation. Let me tell you why.

Running a Docker host

For background, we are specifically an all Mac OS X shop at ActiveLAMP, so I'll be speaking from this context.

Unfortunately you can't run Docker natively on OS X; Docker needs to run in a virtual machine with an operating system such as Linux. If you read the Docker OS X installation docs, you see there are two options for running Docker on Mac OS X, Boot2Docker or Kitematic.

Running either of these two packages looks appealing to get Docker up locally very quickly (and you should use one of these packages if you're just trying out Docker), but thinking big picture and how we plan to use Docker in production, it seems that we should take a different approach locally. Let me tell you why I think you shouldn't use Boot2Docker or Kitematic locally, but first a rabbit trail.

Thinking ahead (the rabbit trail)

My opinion may change after gaining more real world experience with Docker in production, but the mindset that I'm coming from is that in production our Docker hosts will be managed via Chef.

Our team has extensive experience using Chef to manage infrastructure at scale. It doesn't seem quite right to completely abandon Chef yet, since Docker still needs a machine to run the Docker host. Chef is great for machine level provisioning.

My thought is that we would use Chef to manage the various Docker hosts that we deploy containers to, and use the Dockerfile with Docker Compose to manage the actual app container configuration. Chef would be used in a much more limited capacity, only managing configuration on a system level, not an application level. One thing to mention is that we have yet to dive into the Docker-specific hosts such as AWS ECS, dotCloud, or Tutum. If we end up adopting a service like one of these, we may end up dropping Chef altogether, but we're not ready to let go of those reins yet.

One step at a time for us. The initial goal is to get application infrastructure into immutable containers managed by Docker. We're not ready to make a decision on what manages Docker or where we host it; that comes next.

Manage your own Docker Host

The main reason I was turned off from using Boot2Docker or Kitematic is that it creates a Virtual Machine in Virtualbox or VMWare from a default box / image that you can't easily manage with configuration management. I want control of the host machine that Docker is run on, locally and in production. This is where Chef comes into play in conjunction with Vagrant.

Local Docker Host in Vagrant

As I mentioned in my last post, we are no stranger to Vagrant. Vagrant is great for managing virtual machines. If Boot2Docker or Kitematic are going to spin up a virtual machine behind the scenes in order to use Docker, then why not spin up a virtual machine with Vagrant? This way I can manage the configuration with a provisioner, such as Chef. This is the reason I've decided to go down the Vagrant with Docker route, instead of Boot2Docker or Kitematic.

The latest version of Vagrant ships with a Docker provider built-in, so that you can manage Docker containers via the Vagrantfile. The Vagrant Docker integration was a turn off to me initially because it didn't seem it was very Docker-esque. It seemed Vagrant was just abstracting established Docker workflows (specifically Docker Compose), but in a Vagrant syntax. However within the container Vagrantfile, I saw you can also build images from a Dockerfile, and launch those images into a container. It didn't feel so distant from Docker any more.

It seems that there might be a little overlap in areas between what Vagrant and Docker does, but at the end of the day it's a matter of figuring out the right combination of using the tools together. The boundary being that Vagrant should be used for "orchestration" and Docker for application infrastructure.

When all is setup we will have two Vagrantfiles to manage, one to define containers and one to define the host machine.

Setting up the Docker Host with Vagrant

The first thing to do is to define the Vagrantfile for your host machine. We will be referencing this Vagrantfile from the container Vagrantfile. The easiest way to do this is to just type the following in an empty directory (your project root):

$ vagrant init ubuntu/trusty64

You can configure that Vagrantfile however you like. Typically you would also use a tool like Chef solo, Puppet, or Ansible to provision the machine as well. For now, just to get Docker installed on the box, we'll add a provision statement to the Vagrantfile. We will also give the Docker host a hostname and a port mapping, since we know we'll be creating a Drupal container that should EXPOSE port 80. Open up your Vagrantfile and add the following:

config.vm.hostname = "docker-host"
config.vm.provision "docker"
config.vm.network :forwarded_port, guest: 80, host: 4567

This ensures that Docker is installed on the host when you run vagrant up, as well as maps port 4567 on your local machine to port 80 on the Docker host (guest machine). Your Vagrantfile should look something like this (with all the comments removed):






Vagrant.configure(2) do |config|
config.vm.box = "ubuntu/trusty64"
config.vm.hostname = "docker-host"
config.vm.provision "docker"
config.vm.network :forwarded_port, guest: 80, host: 4567
end

Note: This post is not intended to walk through the fundamentals of Vagrant, for further resources on how to configure the Vagrantfile check out the docs.

As I mentioned earlier, we are going to end up with two Vagrantfiles in our setup. I also mentioned the container Vagrantfile will reference the host Vagrantfile. This means the container Vagrantfile is the configuration file we want used when vagrant up is run. We need to move the host Vagrantfile to another directory within the project, out of the project root directory. Create a host directory and move the file there:

$ mkdir host
$ mv Vagrantfile !$

Bonus Tip: The !$ destination that I used when moving the file is a shell shortcut to use the last argument from the previous command.

Define the containers

Now that we have the host Vagrantfile defined, lets create the container Vagrantfile. Create a Vagrantfile in the project root directory with the following contents:






Vagrant.configure(2) do |config|
config.vm.provider "docker" do |docker|
docker.vagrant_vagrantfile = "host/Vagrantfile"
docker.image = "drupal"
docker.ports = ['80:80']
docker.name = 'drupal-container'
end
end

To summarize the configuration above, we are using the Vagrant Docker provider, we have specified the path to the Docker host Vagrant configuration that we setup earlier, and we defined a container using the Drupal image from the Docker Registry along with exposing some ports on the Docker host.

Start containers on vagrant up

Now it's time to start up the container. It should be as easy as going to your project root directory and typing vagrant up. It's almost that easy. For some reason after running vagrant up I get the following error:

A Docker command executed by Vagrant didn't complete successfully!
The command run along with the output from the command is shown
below.

Command: "docker" "ps" "-a" "-q" "--no-trunc"

Stderr: Get http:///var/run/docker.sock/v1.19/containers/json?all=1: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?

Stdout:

I've gotten around this by just running vagrant up again. If anyone has ideas about what is causing that error, please feel free to leave a comment.

Drupal in a Docker Container

You should now be able to navigate to http://localhost:4567 to see the installation screen of Drupal. Go ahead and install Drupal using a SQLite database (we didn't set up a MySQL container) to see that everything is working. Pretty cool stuff!

Development Environment

There are other things I want to accomplish with our local Vagrant environment to make it easy to develop on, such as setting up synced folders and using the vagrant rsync-auto tool. I also want to customize our Drupal builds with Drush Make, to make developing on Drupal much more efficient when adding modules, updating core, etc. I'll leave those details for another post; this one has become very long.

Conclusion

As you can see, you don't have to use Boot2Docker or Kitematic to run Docker locally. I would advise that if you just want to figure out how Docker works, then you should use one of these packages. Thinking longer term, your local Docker Host should be managed the same way your production Docker Host(s) are managed. Using Vagrant, instead of Boot2Docker or Kitematic, allows me to manage my local Docker Host similar to how I would manage production Docker Hosts using tools such as Chef, Puppet, or Ansible.

In my next post, I'll build on what we did here today, and get our Vagrant environment into a working local development environment.

Jul 03 2015
Jul 03

The Picture module is a backport of Drupal 8 Responsive Image module. It allows you to select different images to be loaded for different devices and resolutions using media queries and Drupal’s image styles. You can also use the Image Replace module to specify a different image to load at certain breakpoints.

The Picture module gives you the correct formatting for an HTML5 <picture> element and includes a polyfill library for backwards compatibility with browsers that don't support it. Unfortunately, that list includes IE, Edge, Safari, iOS Safari, and Opera Mini. For more information, see Can I Use.

The Picture module works great with Views. It supports WYSIWYG and CKEditor. It can also be implemented directly in code if necessary.

##Installation

Picture has two important dependencies:

  1. Breakpoints module
  2. Ctools module

You can install the Picture module using Drush.

You will also want the Media module and Image Replace module.

##Setting Up

Once you have the modules installed and enabled you can start to configure your theme.

###1. Set up your breakpoints

You can add breakpoint settings directly in Drupal under Configuration > Media > Breakpoints. Or if you prefer you can add them to the .info file of your theme. We added ours to our theme like this.

breakpoints[xxlarge] = (min-width: 120.063em)
breakpoints[xlarge]  = (min-width: 90.063em) and (max-width: 120em)
breakpoints[large]   = (min-width: 64.063em) and (max-width: 90em)
breakpoints[medium]  = (min-width: 40.063em) and (max-width: 64em)
breakpoints[small]   = (min-width: 0em) and (max-width: 40em)

###2. Add Responsive Styles

From within the Drupal UI, you will want to go to Configuration > Media > Breakpoints.

  1. Click “Add Responsive Style”.
  2. Select which breakpoints you want to use for this specific style.
  3. Then you will choose an existing image style from the drop-down list that this style will clone.
  4. Finally you give it a base name for your new image styles. I would recommend naming it something logical and ending it with “_”.

You can now edit your image styles: do your normal scale and crop for different sizes, or set up Image Replace.

###3. Setting up Picture Mappings

Once you have your image styles, you have to create a mapping to associate them with. Go to Configuration > Media > Picture Mappings.

  1. Click Add to add a new picture mapping
  2. Select your breakpoint group.
  3. Give your mapping an Administrative title (something you can pick out of a select list)
  4. Then you will select "use image style" and select each of the corresponding image styles that you setup from the select list that appears.

Once you have that created, you can select that mapping and use it in different areas of your site. For example, you can use it in the display of a node via the Manage Display section, formatting the field as Picture. You can also use it in Views by selecting the Picture format and choosing your picture mapping.

###4. Setting up Image Replace

Click edit on your desired image style and add the Replace Image setting as an effect. Replace can be combined with other styles as well, so if you need to scale and crop you can do that too.

Once you have your style set up, you need to specify a field to replace the image with at specific dimensions. We added a secondary field to our structure. We named ours "horizontal" and "vertical" because it made sense for us: we only use the vertical field at smaller widths, where the content stretches vertically. You can use whatever naming convention works best for you.

On the main image that you added to your view or display, edit the field settings and find the section called Image Replace. Locate the image style you set Replace Image on and select the field you want to replace the current field with.

##Finished Product

Once that is done you are all set. If you have any questions or comments, be sure to leave them in the comments section below.

Here is an example of how we used image replace.

May 01 2015
May 01

In the [previous blog post]({% post_url 2015-01-21-divide-and-conquer-part-1 %}) we shared how we implemented the first part of our problem in Drupal and how we decided that splitting our project into discrete parts was a good idea. I'll pick up where we left off and discuss why and how we used Symfony to build our web service instead of implementing it in Drupal.

RESTful services with Symfony

Symfony, being a web application framework, did not really provide built-in user-facing features that we could use immediately, but it gave us development tools and a framework that expedited the implementation of various functionality for our web service. Even though much of the work was still cut out for us, the framework took care of the most common problems and the usual plumbing that goes with building web applications. This enabled us to tackle the web service problem with a more focused approach.

Other than my familiarity and proficiency with the framework, the reason we chose Symfony over other web application frameworks is that there is already a well-established ecosystem of Symfony bundles (akin to modules in Drupal) centered around building a RESTful web service: FOSRestBundle provided us with a little framework for defining and implementing our RESTful endpoints, and does all the content-type negotiation and other REST-related plumbing for us. JMSSerializerBundle took care of the complexities of representing our objects as JSON for our clients to consume. We also wrote our own little bundle in which we use Swagger UI to provide beautiful documentation for our API. Any changes to our code-base that affect the API automatically update the documentation, thanks to NelmioApiDocBundle, to which we contributed support for generating Swagger-compliant API specifications.
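To give a feel for what this looks like in practice, here is a minimal, hypothetical sketch of a FOSRestBundle-style endpoint; the class, namespace, route, service name, and query parameter are illustrative only (not the actual project code), and it assumes the bundle's REST routing and param-fetcher listener are enabled:

<?php

namespace ActiveLAMP\AppBundle\Controller;

use FOS\RestBundle\Controller\Annotations as Rest;
use FOS\RestBundle\Controller\FOSRestController;
use FOS\RestBundle\Request\ParamFetcherInterface;

class SearchController extends FOSRestController
{
    /**
     * @Rest\Get("/api/v1/content")
     * @Rest\QueryParam(name="q", default="", description="Full-text query string")
     */
    public function getContentAction(ParamFetcherInterface $paramFetcher)
    {
        // "app.search_repository" is a hypothetical service wrapping the search index.
        $results = $this->get('app.search_repository')->search($paramFetcher->get('q'));

        // FOSRestBundle handles content-type negotiation and serialization of the view.
        return $this->handleView($this->view($results, 200));
    }
}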

We managed to encapsulate all the complexities behind our search engine within our API: not only do we index content sent over by Drupal, but we also index thousands of records that we pull from a partner on a daily basis. On top of that, the API also appends search results from another search API provided by one other partner should we run out of data to provide. Our consumers don't know this and neither should Drupal -- we let it worry about content management and sending us the data, that's it.

In fact, Drupal never talks to Elasticsearch directly. It only talks to our API, authenticating itself should it need to write or delete anything. This also means we can deploy the API on another server without Drupal breaking because it can no longer talk to a firewalled search index. This way we keep everything discrete and secure.

In the end, we have a REST API with three endpoints:

  1. a secured endpoint, used by Drupal, which receives content that is then validated and indexed;
  2. a secured endpoint, also used by Drupal, which is used to delete content from the index; and finally,
  3. a public endpoint for searching for content that matches the specifications provided via GET parameters, which will be used in Drupal and by other consumers.

Symfony and Drupal 8

Symfony is not just a web application framework, but is also a collection of stand-alone libraries that can be used by themselves. In fact, the next major iteration of Drupal will use Symfony components to modernize its implementations of routing & URL dispatch, request and response handling, templating, data persistence and other internals like organizing the Drupal API. This change will definitely enhance the experience of developing Drupal extensions as well as bring new paradigms to Drupal development, especially with the introduction of services and dependency-injection.

Gluing it together with AngularJS

Given that we have a functioning web service, we then used AngularJS to implement the rich search tools on our Drupal 7 site.

AngularJS is a front-end web framework which we use to create rich web applications straight in the browser with no hard dependencies on specific back-ends. This actually helped us prototype our search tools and the search functionality faster outside of Drupal 7. We made sure that everything we wrote in AngularJS is as self-contained as possible, so that we can just drop it into Drupal 7 and have it running with almost zero extra work. It was just a matter of putting our custom AngularJS directives and/or mini-applications into a panel, which in turn we put into Drupal pages. We have done this AngularJS-as-Drupal-panels approach before in other projects and it has been really effective and fun to do.

To complete the integration, it was just a matter of hooking into Drupal's internal mechanisms to pass authored content along to our indexing API when it is approved, and to delete it from the index when it is unpublished.
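As a rough illustration of that hook-in point, here is a minimal sketch using plain Drupal 7 node hooks; the endpoint URL and payload are made up, and the real implementation keyed off our editorial workflow and used the Guzzle client described below:

/**
 * Implements hook_node_update().
 *
 * Sketch only: pushes published articles to a hypothetical indexing API and
 * removes them from the index when they are unpublished.
 */
function mymodule_node_update($node) {
  if ($node->type != 'article') {
    return;
  }

  $endpoint = 'https://api.example.com/content';  // Hypothetical API endpoint.

  if ($node->status == NODE_PUBLISHED) {
    drupal_http_request($endpoint, array(
      'method' => 'POST',
      'headers' => array('Content-Type' => 'application/json'),
      'data' => drupal_json_encode(array(
        'id' => $node->nid,
        'title' => $node->title,
      )),
    ));
  }
  else {
    drupal_http_request($endpoint . '/' . $node->nid, array('method' => 'DELETE'));
  }
}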

Headless Drupal comes to Drupal 8!

The popularity of front-end web frameworks has increased the demand for data to be available via APIs, as templating and other display-oriented tasks have rightfully entered the domain of client-side languages and left back-end systems. It's exciting to see that Drupal has taken the initiative and made the content it manages available through APIs out-of-the-box. This means it will be easier to build single-page applications on top of content managed in Drupal. It is something that we at ActiveLAMP are actually itching to try.

Also, now that Google has added support for crawling Javascript-driven sites for SEO, I think single-page applications will soon rise from being just "experimental" and become a real choice for content-driven websites.

Using Composer dependencies in Drupal modules

We used the Guzzle HTTP client library in one of our modules to communicate with the API in the background. We pulled the library into our Drupal installation by defining it as a project dependency via the Composer Manager module. It was as simple as putting a bare-minimum composer.json file in the root directory of one of our modules:

{
  "require" : {
    "guzzlehttp/guzzle" : "4.\*"
  }
}

...and running these Drush commands during build:

$ drush composer-rebuild
$ drush composer-manager install

The first command collects all defined dependency information in all composer.json files found in modules, and the second command finally downloads them into Drupal's library directory.

Composer is awesome. Learn more about it here.

How a design decision saved us from a potentially costly mistake

One hard lesson we learned is that it's not ideal to use Elasticsearch as the primary and sole data persistence layer in our API.

During the early stages of developing our web service, we treated Elasticsearch as the sole data store by removing Doctrine2 from our Symfony application and doing away with MySQL completely from our REST API stack. However, we still employed the Repository Pattern and wrote classes to store and retrieve from Elasticsearch using the elasticsearch-php library. These classes also hide away the details of how objects are transformed into their JSON representation, and vice versa. We used the jms-serializer library for the data transformations; it's an excellent package that takes care of the complexities behind data serialization from PHP objects to JSON or XML. (We use the same library for delivering objects through our search API, which could be a topic for a future blog post.)
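For illustration, here is a stripped-down sketch of what such a repository class could look like; the index and type names are made up, and getId() is an assumed accessor on the entity:

<?php

namespace ActiveLAMP\AppBundle\Model;

use Elasticsearch\Client;
use JMS\Serializer\SerializerInterface;

class ElasticsearchRepository
{
    protected $client;
    protected $serializer;

    public function __construct(Client $client, SerializerInterface $serializer)
    {
        $this->client = $client;
        $this->serializer = $serializer;
    }

    public function save($entity)
    {
        // Let jms-serializer build the JSON structure, then hand it to Elasticsearch.
        $document = json_decode($this->serializer->serialize($entity, 'json'), true);

        $this->client->index(array(
            'index' => 'content',           // Illustrative index name.
            'type'  => 'article',           // Illustrative document type.
            'id'    => $entity->getId(),    // Assumes the entity exposes an ID.
            'body'  => $document,
        ));
    }

    public function delete($entity)
    {
        $this->client->delete(array(
            'index' => 'content',
            'type'  => 'article',
            'id'    => $entity->getId(),
        ));
    }
}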

This setup worked just fine, until we had to explicitly define date-time fields in our documents. Since we used UNIX timestamps for our date-time fields in the beginning, Elasticsearch mistakenly inferred them to be float fields. The explicit schema conflicted with the inferred schema and we were forced to flush out all existing documents before the update could be applied. This prompted us to use a real data store which we treat as the Single Version of Truth and relegate Elasticsearch to being just an index, lest we lose real data in the future, which would be a disaster.

Making this change was easy and almost painless, though, thanks to the level of abstraction that the Repository Pattern provides. We just implemented new repository classes, with the help of Doctrine, which talk to MySQL, and dropped them in places where we used their Elasticsearch counterparts. We then hooked into Doctrine's event system to get our data automatically indexed as it is written to and removed from the database:



use ActiveLAMP\AppBundle\Entity\Indexable;
use ActiveLAMP\AppBundle\Model\ElasticsearchRepository;
use Doctrine\Common\EventSubscriber;
use Doctrine\ORM\Event\LifecycleEventArgs;
use Doctrine\ORM\Events;
use Elasticsearch\Common\Exceptions\Missing404Exception;

class IndexEntities implements EventSubscriber
{

    protected $elastic;

    public function __construct(ElasticsearchRepository $repository)
    {
        $this->elastic = $repository;
    }

    public function getSubscribedEvents()
    {
        return array(
            Events::postPersist,
            Events::postUpdate,
            Events::preRemove,
        );
    }

    public function postPersist(LifecycleEventArgs $args)
    {
        $entity = $args->getEntity();

        if (!$entity instanceof Indexable) {
            return;
        }

        $this->elastic->save($entity);
    }

    public function postUpdate(LifecycleEventArgs $args)
    {
        $entity = $args->getEntity();

        if (!$entity instanceof Indexable) {
            return;
        }

        $this->elastic->save($entity);
    }

    public function preRemove(LifecycleEventArgs $args)
    {
        $entity = $args->getEntity();

        if (!$entity instanceof Indexable) {
            return;
        }

        try {
            $this->elastic->delete($entity);
        } catch (Missing404Exception $e) {
            // The document was never indexed; nothing to delete.
        }
    }
}

Thanks, Repository Pattern!

Overall, I really enjoyed building out the app using the tools we decided to use and I personally like how we put the many parts together. We observed some tenets behind service-oriented architectures by splitting the project into multiple discrete problems and solving each with different technologies. We handed Drupal the problems that Drupal knows best, and used more suitable solutions for the rest.

Another benefit we reaped is that developers within ActiveLAMP can focus on their own domain of expertise: our Drupal guys take care of Drupal work that non-Drupal guys like me aren't the best fit for, while I can knock out Symfony work, which is right up my alley. I think we at ActiveLAMP have seen the value of solving big problems through divide-and-conquer and by diversifying the technologies we use.

Jan 22 2015
Jan 22

It isn't just about Drupal here at ActiveLAMP -- when the right project comes along that diverges from the usual demands of content management, we get to use other cool technologies to satisfy more exotic requirements. Last year we had a project that presented us with the opportunity to broaden our arsenal beyond the Drupal toolbox. Basically, we had to build a website which handles a growing amount of vetted content coming in from the site's community and two external sources. The whole catalog is available through a rich search tool and also through a RESTful web service which our client's partners can use to search for content to display on their respective websites.

Drupal 7 -- more than just a CMS

We love Drupal and we recognize its power in managing content of varying types and complexity. We at ActiveLAMP have solved a lot of problems with it in the past, and have seen how potent it can be. We were able to map out many of the project's requirements to Drupal functionality and we grew confident that it is the right tool for the project.

We implemented the majority of the site's content-management, user-management, and access-control functionality with Drupal: content creation, revision, display, and printing. We relied heavily on built-in functionality to tie things together. Did I mention that the site, content base, and theme components are bilingual? Yeah, the wide array of i18n modules took care of that.

One huge reason we love Drupal is its thriving community, which strives to make it better and more powerful every day. We leveraged open-source modules that the community has produced over the years to satisfy project requirements that Drupal does not provide out-of-the-box.

For starters, we based our project on the Panopoly distribution of Drupal, which bundles a wide selection of modules that gave us great flexibility in structuring our pages and saved us precious time in site-building and theming. We leveraged a lot of modules to solve more specialized problems. For example, we used the Workbench suite of modules to implement the review-publish-reject workflow that was essential to maintaining the site's integrity and quality. We also used the ZURB Foundation starter theme as the foundation for our site pages.

What vanilla Drupal and the community modules cannot provide us we wrote ourselves, thanks to Drupal's uber-powerful "plug-and-play" architecture which easily allowed us to write custom modules to tell Drupal exactly what we need it to do. The amount of work that can be accomplished by the architecture's hook system is phenomenal, and it elevates Drupal from being just a content management system to a content management framework. Whatever your problem, there most probably is a Drupal module for it.

Flexible indexing and searching with Elasticsearch

A large aspect of our project is that the content we handle should be searchable through a search tool available on the site. The criteria for searching demand not only support for full-text searches, but also filtering by date range, categorization ("taxonomies" in Drupal), and most importantly, geo-location queries and sorting by distance (e.g., within n miles from a given location, etc.). It was readily apparent that SQL LIKE expressions or full-text search queries with the MyISAM engine for MySQL just wouldn't cut it.

We needed a full-fledged full-text search engine that also supports geo-spatial operations. And surprise! -- there is a Drupal module for that (a confession: not really a surprise). The Apache Solr Search modules readily provide the ability to index all our content straight from Drupal into Apache Solr, an open-source search platform built on top of the famous Apache Lucene engine.

Despite the comfort that the module provided, I evaluated other options which eventually led us to Elasticsearch, which we ended up using over Solr.

Elasticsearch advertises itself as:

“a powerful open source search and analytics engine that makes data easy to explore”

...and we really found this to be true. Since it is basically a wrapper around Lucene that exposes its features through a RESTful API, it is readily available to any app no matter which language it is written in. Given the wide proliferation and usage of REST APIs in web development, it puts a familiar face on a not-so-common technology. As long as you speak HTTP, the lingua franca of the Web, you are in business.

Writing and indexing documents into Elasticsearch is straightforward: represent your content as a JSON object and POST it to the appropriate endpoint. If you wish to retrieve it on its own, simply issue a GET request together with the unique ID which Elasticsearch assigned it and gave back during indexing. Updating it is also just a PUT request away. It's all RESTful and nice.
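For example, from PHP that whole index-then-fetch round trip can be a couple of HTTP calls; here is a minimal sketch using a Guzzle 4 client (the index and type names mirror the search example below, and the document fields are made up for illustration):

<?php

require 'vendor/autoload.php';

use GuzzleHttp\Client;

$client = new Client(['base_url' => 'http://localhost:9200']);

// Index a new document; Elasticsearch replies with the ID it assigned.
$response = $client->post('/volsearch/toolkit_opportunity', [
    'json' => [
        'title'   => 'Fight hunger in your neighborhood',
        'body'    => 'Sample body text.',
        'partner' => 'Example Partner',
    ],
]);

$documentId = $response->json()['_id'];

// Retrieve the same document by the ID that was handed back during indexing.
$document = $client->get('/volsearch/toolkit_opportunity/' . $documentId)->json();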

Searches are also made through API calls. Here is an example of a query which contains a Lucene-like text search (grouping conditions with parentheses and ANDs and ORs), a negation filter, a basic geo-location filter, and results sorted by distance from a given location:


POST /volsearch/toolkit_opportunity/_search HTTP/1.1
Host: localhost:9200
{
  "from":0,
  "size":10,
  "query":{
    "filtered":{
      "filter":{
        "bool":{
          "must":[
            {
              "geo_distance":{
                "distance":"100mi",
                "location.coordinates":{
                  "lat":34.493311,
                  "lon":-117.30288
                }
              }
            }
          ],
          "must_not":[
            {
              "term":{
                "partner":"Mentor Up"
              }
            }
          ]
        }
      },
      "query":{
        "query_string":{
          "fields":[
            "title",
            "body"
          ],
          "query":"hunger AND (financial OR finance)",
          "use_dis_max":true
        }
      }
    }
  },
  "sort":[
    {
      "_geo_distance":{
        "location.coordinates":[
          34.493311,
          -117.30288
        ],
        "order":"asc",
        "unit":"mi",
        "distance_type":"plane"
      }
    }
  ]
}

Queries are written following Elasticsearch's own DSL (domain-specific language), which is in the form of JSON objects. The fact that queries are represented as a tree of search specifications in the form of dictionaries (or "associative arrays" in PHP parlance) makes them a lot easier to understand, traverse, and manipulate, without the third-party query builders that Lucene's query syntax practically requires. It is this syntactic sugar that helped convince us to use Elasticsearch.

What makes Elasticsearch flexible is that it is to some degree schema-less. This made it quite quick for us to get started and get things done. We just hand it documents with no pre-defined schema and it does its job of guessing the field types, inferring from the data we provide. We can specify new text fields and filter against them on the go. If you decide to start using richer queries like geo-spatial and date-range searches, then you should explicitly declare fields as having richer types like dates, date ranges, and geo-points to tell Elasticsearch how to index the data accordingly.
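For instance, here is one way to declare such explicit field types ahead of time; a minimal sketch using the elasticsearch-php client, where the index, type, and geo-point field mirror the search example above and the "created" date field is made up:

<?php

require 'vendor/autoload.php';

// Explicitly map date and geo-point fields so Elasticsearch indexes them correctly.
$client = new Elasticsearch\Client(array('hosts' => array('localhost:9200')));

$client->indices()->putMapping(array(
    'index' => 'volsearch',
    'type'  => 'toolkit_opportunity',
    'body'  => array(
        'toolkit_opportunity' => array(
            'properties' => array(
                'created'  => array('type' => 'date'),
                'location' => array(
                    'properties' => array(
                        'coordinates' => array('type' => 'geo_point'),
                    ),
                ),
            ),
        ),
    ),
));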

To be clear, Apache Solr also exposes Lucene through a web service. However, we think Elasticsearch's API design is more modern and much easier to use. Elasticsearch also provides a suite of features that lends it to easier scalability. Visualizing the data is also really nifty with the use of Kibana.

The Search API

Because of the lack of built-in access control in Elasticsearch, we cannot just expose it to third parties who wish to consume our data. Anyone who can see the Elasticsearch server will invariably have the ability to write and delete content from it. We needed a layer that firewalls our search index away from the public. Not only that, it also has to enforce our own simplified query DSL that API consumers will use.

This is another aspect where we looked beyond Drupal. Building web services isn't exactly within Drupal's purview, although it can be accomplished with the help of third-party modules. However, our major concern was the operational cost of involving it in the web service solution: we felt that the overhead of Drupal's bootstrap process is just too much for responding to API requests. It would be akin to swatting a fruit fly with a sledgehammer. We decided to implement all search functionality and the search API itself in a separate application written with Symfony.

More details on how we introduced Symfony into the equation and how we integrated everything together will be the subject of my next blog post. For now we'd just like to say that we are happy with our decision to split the project's scope into smaller discrete sub-problems, because it allowed us to target each one of them with more focused solutions and expand our horizons.

Apr 30 2014
Apr 30

This is gonna be short and sweet.

If you need/want the Drupal Update Manager to work through SSH then you need to install the “libssh2-php” php package on your Ubuntu server. You know the Update Manager; it’s the admin interface when you install a module or theme, or more importantly if you are doing system-wide updates.
Update Manager

If you do not have the “libssh2-php” package installed then the only option you will have is FTP.
FTP Only

Unless you have a very specific reason, you do not want to run an FTP server on your Ubuntu server, especially when you have alternatives like SFTP and SCP for transferring files, which are based on SSH.

Now to enable the SSH option on the Update Manager page, you need to install the “libssh2-php” package and reload your apache server.

apt-get install libssh2-php
service apache2 reload

Now you have the SSH option on the same page.
SSH Option

Well, that being said, using Drush would be a better choice for these operations but there might be times where you need this.


Apr 28 2014
Apr 28

When I got my first VPS (from Linode) like 4 years ago, for heavy Drupal use, I read a lot of guides about setting up a LAMP stack on Ubuntu. At first most of those guides I read and followed were Drupal specific but later on I read a lot of non-drupal, LAMP stack related stuff as well.

In addition to the guides I read (and am still reading), I now have 4 years of experience and knowledge that I learned by trial & error. Not to mention that I have a long System Admin (Windows only) and Database Admin (mostly Oracle) past. I still wouldn't call myself a full-blown Linux System Admin, but I believe I have come quite a long way since then.

Now I am thinking about those guides and wondering why none of the ones I read tell people to delete the default site configuration that comes enabled upon Apache installation. As if that were not enough, almost all of them rely on making changes to that default site config (Drupal or not).

99 times out of 100, you do not want/need a default site running on your server, which will serve any request that finds your server via IP or DNS, unless the request belongs to a website that you specifically configured. And I am sure you don't want your Apache to serve a request for, let's say, http://imap.example.com unless you specifically configured a site for imap.example.com.

One of the first things I do is to delete that default website.
I can either delete the symlink…

cd /etc/apache2/sites-enabled/
rm 000-default.conf
service apache2 reload

or you can do it by disabling the site with the "a2dissite" command. Some might say that this is the proper way to do it, but both approaches do the same thing: remove the symlink.

a2dissite 000-default.conf
service apache2 reload

As you may have noticed, I did not actually delete the default site configuration file, which resides in "/etc/apache2/sites-available/"; I have only disabled that site. Who knows, I might need that file in the future (for reference purposes, most likely).

Now the question pops into mind: the guides you follow tell you to make a change in that default site config file. Of course those changes will not have any effect since the default site is disabled. As for Drupal, they ask you to change "AllowOverride None" to "AllowOverride All" in the block shown below.

<Directory /var/www/>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
</Directory>

This is how you do it. Open your “apache2.conf” file, where your real defaults are set. Find the same block and make the same change there.

cd /etc/apache2/
vi apache2.conf
##  Make the changes  ##
service apache2 reload

This is on Ubuntu 14.04 …

Share this:

Like Loading...

Apr 27 2014
Apr 27

If you have upgraded (or are planning an upgrade of) your Drupal 7 platform to Ubuntu 14.04, then you most likely know about the "install creates 31 tables then stops" and "Installation failure when opcode cache is enabled" issues, which are caused by a problem between the Drupal code and OPcache.

A few words about OPcache. Ubuntu 14.04 comes with PHP 5.5, which has Zend OPcache built in. If you have already tried to install the APC extension for your PHP setup, you failed. And if you googled that failure, you heard that APC is included in PHP 5.5. Well, you could say that. Actually, this type of caching solution is called an "opcode cache". APC is one of them; Zend OPcache is another, and it is Zend OPcache (or OPcache for short) that is built into PHP 5.5, not APC.

The Drupal problem has been fixed for D8 on this issue but no patch is available for D7 yet.

The workaround is to disable the OPcache, which is enabled by default. It is a setting in php.ini file.

opcache.enable=0

The question has been raised whether disabling OPcache before installation and enabling it right after would be good enough. While I don't have a solid answer for that, it should be good enough to keep it disabled during installations and upgrades. I permanently turned it off on my test site. Maybe I should turn it on again and do some tests...

Another question I have seen but not answered was whether we can disable OPcache on a per-site basis, like disabling it for D7 sites and enabling it for others.

Yes, we can. As the title of this article suggests, we can disable OPcache on a per-site basis, but we cannot enable it wherever we want; it has to be enabled by default. If you have disabled it through the php.ini file, then you need to revert that back.

Placing the line below in your "settings.php" file will disable it.

ini_set('opcache.enable', '0');

However, I like the “.htaccess” method much better.

php_flag opcache.enable Off

Remember that your apache config should have “AllowOverride All” in order to make the .htaccess method work; which is also a requirement for installing & running Drupal websites.

