Jan 30 2020

Recently, we were asked if we could integrate some small, one-page websites into an existing Drupal website. This would not only make it easier to manage those different websites and their content, but also reduce the hosting and maintenance costs.

In this article, I will discuss how we tackled this problem, improved the content management experience, and implemented it following Drupal best practices.

First, some background information: America’s Promise is an organization that launches many national campaigns that focus on improving the lives and futures of America’s youth. Besides their main website (www.americaspromise.org), they also had separate websites and domain names for some of these campaigns, e.g. www.everyschoolhealthy.org.

We came up with a Drupal solution where they could easily configure and manage their campaigns. Besides the convenience of managing all these campaigns from one admin panel, they could also easily reference content items from their main website or other campaigns by tagging the content with specific taxonomy terms (keywords).

We created a new content type, "Campaign", with many custom paragraph types as the building blocks for creating a new campaign. We wanted this to be as easy as possible for content editors, while still giving every campaign enough freedom to have its own branding by selecting a font color and a background image, color, or video.

Below are some of the paragraph types we created:

  • Hero
  • Column Layout
  • WYSIWYG
  • Latest News
  • Newsletter Signup
  • Twitter Feed
  • Video Popup
  • Community Partners
  • Latest Resources
  • Grantee Spotlight
  • Statistics Map
  • Partner Spotlight
  • Media Mentions

These paragraphs offer lots of flexibility to create unique and interactive campaigns, and they can be reordered by drag and drop.

Below is a screenshot of some of these paragraph types in action, and how easily they can be configured in the backend.

Every School Healthy paragraphs

Below you can see what the "Hero" paragraph looks like in the admin panel. The editor enters a tagline, chooses a font color, uploads a logo, an optional background image or video, and a background overlay color with optional opacity.

Campaign Builder Hero Backend

As you can see in the above screenshot, this is a very basic paragraph type, but it shows the flexibility of customizing the building blocks for a campaign. We also created more complex paragraph types that required a fair amount of custom development.

One of the more complicated paragraph types we created is a statistics map. America’s Promise uses national and state statistics to educate and strengthen its campaign causes.

Campaign Builder Statistics Map

The data for this map comes from a Google Sheet, and all necessary settings can be configured in the backend. Users can then view these state statistics by hovering over the map, or see even more detail by clicking on an individual state.

Campaign Builder Statistics Map Backend
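
The exact integration is specific to this project, but conceptually the paragraph just stores the address of a published Google Sheet and the map fetches it as CSV when it renders. A rough sketch of what that kind of fetch can look like (the function name, sheet layout, and column names are all assumptions for illustration, not the actual module code):

<?php

/**
 * Hypothetical helper: fetch a published Google Sheet as CSV and return the
 * rows keyed by state, e.g. ['CA' => ['State' => 'CA', ...]].
 */
function campaign_builder_fetch_sheet_stats($published_csv_url) {
  $response = \Drupal::httpClient()->get($published_csv_url);
  $lines = array_filter(explode("\n", trim((string) $response->getBody())));
  $rows = array_map('str_getcsv', $lines);
  $header = array_shift($rows);

  $stats = [];
  foreach ($rows as $row) {
    $row = array_combine($header, $row);
    // Assumes the sheet has a "State" column plus one column per statistic.
    $stats[$row['State']] = $row;
  }
  return $stats;
}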

Some other interesting paragraph types we created are:

  • Twitter Feed, where editors can specify a #hashtag and the matching tweets display in a masonry layout
  • Newsletter Signup, where editors select which newsletter campaign the user signs up for
  • Latest News/Resources, where editors select the taxonomy term used to filter the content

Time to dive into some of the more technical approaches we took. The campaign builder we developed for America’s Promise depends on several Drupal contrib modules:

  • paragraphs
  • bg_image_formatter
  • color_field
  • video
  • masonry (used for the Twitter Feed)

Font color and background image/color/video don't need any custom code; they can be accomplished with the modules above by configuring the correct CSS selectors on the paragraph display:

Campaign Builder Hero Display

In our custom campaign builder module, we have several custom Entities, Controllers, Services, Forms, REST resources and many twig template files. Still, the module mainly consists of custom field formatters and custom theme functions.

Example: the "Latest News" paragraph has only one field, where the editor selects a taxonomy term. With a custom field formatter we display this field as a rendered view instead: we pass the selected term as an argument to the Latest News view, execute the view, and display the result with a custom #theme function.
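
A rough sketch of what such a formatter can look like in Drupal 8 is shown below. The plugin ID, view name, and theme hook are made up for illustration and are not taken from the actual module:

<?php

namespace Drupal\campaign_builder\Plugin\Field\FieldFormatter;

use Drupal\Core\Field\FieldItemListInterface;
use Drupal\Core\Field\FormatterBase;
use Drupal\views\Views;

/**
 * Renders a taxonomy term reference as the "Latest News" view for that term.
 *
 * @FieldFormatter(
 *   id = "campaign_latest_news",
 *   label = @Translation("Latest news view"),
 *   field_types = {"entity_reference"}
 * )
 */
class LatestNewsFormatter extends FormatterBase {

  /**
   * {@inheritdoc}
   */
  public function viewElements(FieldItemListInterface $items, $langcode) {
    $elements = [];
    foreach ($items as $delta => $item) {
      // Pass the selected term as a contextual argument to the view.
      $view = Views::getView('latest_news');
      $view->setArguments([$item->target_id]);
      $view->execute('default');

      // Hand the rendered view to a custom theme hook (registered in
      // hook_theme()) so the markup can be wrapped however the design needs.
      $elements[$delta] = [
        '#theme' => 'campaign_latest_news',
        '#content' => $view->render(),
      ];
    }
    return $elements;
  }

}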

Conclusion

By leveraging the strength of Paragraphs, other contrib modules and some custom code, we were able to create a reusable and intuitive campaign builder, where ease of content management was a priority without limiting the design or branding of each campaign.

Several campaigns built with our campaign builder are currently live.

Could your organization benefit from having your own custom campaign builder and want to see more? Contact us for a demo.

Sep 28 2018

Pairing the Composer template for Drupal projects with Lando gives you a fully working Drupal environment with barely any setup.

Lando is an open-source, cross-platform local development environment. It uses Docker to build containers for well-known frameworks and services written in simple recipes. If you haven’t started using Lando for your local development, we highly recommend it. It is easier, faster, and relatively pain-free compared to MAMP, WAMP, VirtualBox VMs, Vagrant or building your own Docker infrastructure.

Prerequisites

You'll need to have Composer and Lando installed.

Setting up Composer Template Drupal Project

If you want details about what you get when you install drupal-project, you can view the repo. Otherwise, if you'd rather simply set up a Drupal template site, run the following command:

composer create-project drupal-composer/drupal-project:8.x-dev [your-project] --stability dev --no-interaction

Once that is done running, cd into the newly created directory. You'll find that you now have more than a basic Drupal installation.

Getting the site set up on Lando

Next, run lando init, which prompts you with 3 simple questions:

? What recipe do you want to use? > drupal8
? Where is your webroot relative to the init destination? > web
? What do you want to call this app? > [your-project]

Once that is done provisioning, run lando start, which downloads and spins up the necessary containers and provides you with a set of URLs you can use to visit your site:

https://localhost:32807
http://localhost:32808
http://[your-project].lndo.site:8000
https://[your-project].lndo.site

Set up Drupal

Visit any of the URLs to start the Drupal installation flow. Run lando info to get the database details:

Database: drupal8
Username: drupal8
Password: drupal8
Host: database
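
If you prefer the command line over the web installer, the same credentials can be fed to Drush from inside the container. Something along these lines should work, assuming Drush is installed in the project (the account name and password below are just examples):

lando drush site:install standard \
  --db-url=mysql://drupal8:drupal8@database/drupal8 \
  --account-name=admin --account-pass=admin -y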

Working with your new Site

One of the useful benefits of using Lando is that your toolchain does not need to be installed on your local machine; it can live in the Docker containers that Lando manages. This means you can use commands provided by Lando without installing other packages. The commands that come with Lando include lando drush, lando drupal, and lando composer. Run them in your command prompt as usual; they'll simply execute inside the container.
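
For example, day-to-day tasks look just like their normal counterparts, only prefixed with lando (the module name below is only an illustration):

lando composer require drupal/admin_toolbar   # add a module with Composer
lando drush en admin_toolbar -y               # enable it with Drush
lando drush cr                                # rebuild Drupal's caches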

Once you commit your .lando.yml file, others can use the same Lando configuration on their machines, making it easy to share and set up identical local environments.
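
For reference, the file lando init writes is tiny; it typically looks something like this, mirroring the answers you gave above:

name: [your-project]
recipe: drupal8
config:
  webroot: web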

Aug 27 2018

This post is part 5 in the series [“Hashing out a docker workflow”]({% post_url 2015-06-04-hashing-out-docker-workflow %}). I have resurrected this series from over a year ago; if you want to check out the previous posts, you can find the [first post here]({% post_url 2015-06-04-hashing-out-docker-workflow %}). Although the beginning of this blog series pre-dates Docker Machine, Docker for Mac, and Docker for Windows, the Docker concepts still apply; we just aren't using Vagrant any more. Instead, check out the Docker Toolbox.

We are going to take the Drupal image that I created in my last post [“Creating a deployable Docker image with Jenkins”]({% post_url 2016-01-20-jenkins-build-docker-images %}) and deploy it. You can find that image on Docker Hub, which is where we pushed it last time. You have several options for deploying Docker images to production, whether manually or with a service like AWS ECS, OpenShift, etc. Today, I'm going to walk you through a deployment process using Kubernetes, also known simply as k8s.

Why use Kubernetes?

There is an abundance of options out there for deploying Docker containers to the cloud easily. Most of them provide a nice UI with a form wizard that takes you through deploying your containers. So why use k8s? The biggest advantage, in my opinion, is that Kubernetes is agnostic of the cloud that you are deploying on. This means that if and when you decide you no longer want to host your application on AWS, or whatever cloud you happen to be on, and instead want to move to Google Cloud or Azure, you can pick up your entire cluster configuration and move it very easily to another cloud provider.

Obviously there is the trade-off of needing to learn yet another technology (Kubernetes) to get your app deployed, but you also won't have vendor lock-in when it is time to move your application to a different cloud. Some of the other benefits worth mentioning about k8s are the large community, all the add-ons, and the ability to have all of your cluster and deployment configuration in code. I don't want to turn this post into the benefits of Kubernetes over others, so let's jump into some hands-on work and start setting things up.

Set up a local cluster

Instead of spinning up servers in a cloud provider and paying for the cost of those servers while we explore k8s, we are going to set up a cluster locally and configure Kubernetes without paying a dime out of our pockets. Setting up a local cluster is super simple with a tool called Minikube. Head over to the Kubernetes website and get that installed. Once you have Minikube installed, boot it up by typing minikube start. You should see something similar to what is shown below:

$ minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
 160.27 MB / 160.27 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

This command sets up a virtual machine on your computer, likely using VirtualBox. If you want to double check, pop open the VirtualBox UI to see the new VM created there. The virtual machine has all the necessary components loaded on it to run a Kubernetes cluster. In k8s speak, each virtual machine is called a node. If you want to log in to the node to explore a bit, type minikube ssh. Below I have ssh'd into the machine and run docker ps. You'll notice that this VM has quite a few Docker containers running to make up this cluster.

 $ minikube ssh
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ docker ps
CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS               NAMES
aa766ccc69e2        k8s.gcr.io/k8s-dns-sidecar-amd64           "/sidecar --v=2 --lo…"   5 minutes ago       Up 5 minutes                            k8s_sidecar_kube-dns-86f4d74b45-kb2tz_kube-system_3a21f134-a637-11e8-894d-0800273ca679_0
6dc978b31b0d        k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64     "/dnsmasq-nanny -v=2…"   5 minutes ago       Up 5 minutes                            k8s_dnsmasq_kube-dns-86f4d74b45-kb2tz_kube-system_3a21f134-a637-11e8-894d-0800273ca679_0
0c08805e8068        k8s.gcr.io/kubernetes-dashboard-amd64      "/dashboard --insecu…"   5 minutes ago       Up 5 minutes                            k8s_kubernetes-dashboard_kubernetes-dashboard-5498ccf677-hvt4f_kube-system_3abef591-a637-11e8-894d-0800273ca679_0
f5d725b1c96a        gcr.io/k8s-minikube/storage-provisioner    "/storage-provisioner"   6 minutes ago       Up 6 minutes                            k8s_storage-provisioner_storage-provisioner_kube-system_3acd2f39-a637-11e8-894d-0800273ca679_0
3bab9f953f14        k8s.gcr.io/k8s-dns-kube-dns-amd64          "/kube-dns --domain=…"   6 minutes ago       Up 6 minutes                            k8s_kubedns_kube-dns-86f4d74b45-kb2tz_kube-system_3a21f134-a637-11e8-894d-0800273ca679_0
9b8306dbaab7        k8s.gcr.io/kube-proxy-amd64                "/usr/local/bin/kube…"   6 minutes ago       Up 6 minutes                            k8s_kube-proxy_kube-proxy-dwhn6_kube-system_3a0fa9b2-a637-11e8-894d-0800273ca679_0
5446ddd71cf5        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_storage-provisioner_kube-system_3acd2f39-a637-11e8-894d-0800273ca679_0
17907c340c66        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kubernetes-dashboard-5498ccf677-hvt4f_kube-system_3abef591-a637-11e8-894d-0800273ca679_0
71ed3f405944        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kube-dns-86f4d74b45-kb2tz_kube-system_3a21f134-a637-11e8-894d-0800273ca679_0
daf1cac5a9a5        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kube-proxy-dwhn6_kube-system_3a0fa9b2-a637-11e8-894d-0800273ca679_0
9d00a680eac4        k8s.gcr.io/kube-scheduler-amd64            "kube-scheduler --ad…"   7 minutes ago       Up 7 minutes                            k8s_kube-scheduler_kube-scheduler-minikube_kube-system_31cf0ccbee286239d451edb6fb511513_0
4d545d0f4298        k8s.gcr.io/kube-apiserver-amd64            "kube-apiserver --ad…"   7 minutes ago       Up 7 minutes                            k8s_kube-apiserver_kube-apiserver-minikube_kube-system_2057c3a47cba59c001b9ca29375936fb_0
66589606f12d        k8s.gcr.io/kube-controller-manager-amd64   "kube-controller-man…"   8 minutes ago       Up 8 minutes                            k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_ee3fd35687a14a83a0373a2bd98be6c5_0
1054b57bf3bf        k8s.gcr.io/etcd-amd64                      "etcd --data-dir=/da…"   8 minutes ago       Up 8 minutes                            k8s_etcd_etcd-minikube_kube-system_a5f05205ed5e6b681272a52d0c8d887b_0
bb5a121078e8        k8s.gcr.io/kube-addon-manager              "/opt/kube-addons.sh"    9 minutes ago       Up 9 minutes                            k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_3afaf06535cc3b85be93c31632b765da_0
04e262a1f675        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_kube-apiserver-minikube_kube-system_2057c3a47cba59c001b9ca29375936fb_0
25a86a334555        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_kube-scheduler-minikube_kube-system_31cf0ccbee286239d451edb6fb511513_0
e1f0bd797091        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_kube-controller-manager-minikube_kube-system_ee3fd35687a14a83a0373a2bd98be6c5_0
0db163f8c68d        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_etcd-minikube_kube-system_a5f05205ed5e6b681272a52d0c8d887b_0
4badf1309a58        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_kube-addon-manager-minikube_kube-system_3afaf06535cc3b85be93c31632b765da_0

When you're done snooping around inside the node, log out of the session by typing Ctrl+D. This should take you back to a session on your local machine.

Interacting with the cluster

Kubernetes is managed via a REST API; however, you will find yourself interacting with the cluster mainly through a CLI tool called kubectl. We issue kubectl commands, and the tool generates the necessary Create, Read, Update, and Delete requests for us and executes them against the API. It's time to install the CLI tool; check out the docs here to install it on your OS.

Once you have the command line tool installed, it should be automatically configured to interface with the cluster that you just set up with Minikube. To verify, run kubectl get nodes to see all of the nodes in the cluster.

$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     master    6m        v1.10.0

We have one node in the cluster! Let's deploy our app using the Docker image that we created last time.

Writing Config Files

With the kubectl CLI tool you can define all of your Kubernetes objects directly, but I like to create config files that I can commit to a repository and use to manage changes as the cluster grows. For this deployment, I'll take you through creating three different k8s objects: we will explicitly create a Deployment object, which implicitly creates a Pod object, and we will create a Service object. For details on what these three objects are, check out the Kubernetes docs.

In a nutshell, a Pod is a wrapper around a Docker container, and a Service is a way to expose a Pod, or several Pods, on a specific port to the outside world. Pods are only accessible inside the Kubernetes cluster; the only way to access any services in a Pod is to expose the Pod with a Service. A Deployment is an object that manages Pods and ensures they are healthy and up. If you configure a Deployment to have 2 replicas, it will ensure 2 Pods are always up, and if one crashes, Kubernetes will spin up another Pod to match the Deployment definition.

deployment.yml

Head over to the API reference and grab the example config file: https://v1-10.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#deployment-v1-apps. We will modify the config file from the docs to our needs. Change the template to look like the following (I changed the image, app, and name properties in the YAML below):

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: drupal
    spec:
      containers:
      - name: drupal
        image: tomfriedhof/docker_blog_post

Now it's time to feed that config file into the Kubernetes API; we will use the CLI tool for this:

$ kubectl create -f deployment.yml

You can check the status of that deployment by asking k8s for all Deployment and Pod objects:

$ kubectl get deploy,po

Once everything is up and running you should see something like this:

 $ kubectl get deploy,po
NAME                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/deployment-example   3         3         3            3           3m

NAME                                    READY     STATUS    RESTARTS   AGE
po/deployment-example-fc5d69475-dfkx2   1/1       Running   0          3m
po/deployment-example-fc5d69475-t5w2j   1/1       Running   0          3m
po/deployment-example-fc5d69475-xw9m6   1/1       Running   0          3m
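
If you later decide you want more or fewer replicas, you don't have to edit and re-apply the config file; kubectl can scale the Deployment directly (shown here with an arbitrary count of 5):

$ kubectl scale deployment deployment-example --replicas=5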

service.yml

We have no way of accessing any of those Pods in the deployment yet; we need to expose them using a Kubernetes Service. To do this, grab the example file from the docs again and change it to the following: https://v1-10.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#service-v1-core

kind: Service
apiVersion: v1
metadata:
  name: service-example
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
  selector:
    app: drupal
  type: LoadBalancer

Create this service object using the CLI tool again:

$ kubectl create -f service.yml

You can now ask Kubernetes to show you all 3 objects that you created by typing the following:

$ kubectl get deploy,po,svc
NAME                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/deployment-example   3         3         3            3           7m

NAME                                    READY     STATUS    RESTARTS   AGE
po/deployment-example-fc5d69475-dfkx2   1/1       Running   0          7m
po/deployment-example-fc5d69475-t5w2j   1/1       Running   0          7m
po/deployment-example-fc5d69475-xw9m6   1/1       Running   0          7m

NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
svc/kubernetes        ClusterIP      10.96.0.1       <none>        443/TCP        1h
svc/service-example   LoadBalancer   10.96.176.233   <pending>     80:31337/TCP   13s

You can see under the services at the bottom that port 31337 was mapped to port 80 on the Pods. Now if we hit any node in the cluster, in our case just the one VM, on port 31337, we should see the Drupal app that we built from the Docker image we created in the last post. Since we are using Minikube, there is a command to open a browser on the specific port of the service; type minikube service followed by the service name:

$ minikube service service-example

This should open up a browser window and you should see the Installation screen for Drupal. You have successfully deployed the Docker image that we created to a production-like environment.

What is next?

We have just barely scratched the surface of what is possible with Kubernetes. I showed you the bare minimum to get a Docker image deployed on Kubernetes. The next step is to deploy your cluster to an actual cloud provider. For further reading on how to do that, definitely check out the kops project.

If you have any questions, feel free to leave a comment below. If you want to see a demo of everything that I wrote about on the ActiveLAMP YouTube channel, let us know in the comments as well.

Aug 15 2017

So you just finished building an awesome new website on Drupal, but now you've run into a new dilemma: how do you optimize the site for search engines? Search engine optimization, or SEO, can be overwhelming, but don't let that cause you to ignore the things you can do to help drive traffic to your website. There's nothing worse than spending countless hours developing a web application, only to find out that users aren't able to find your site. This can be extremely frustrating, as well as devastating if your company or business relies heavily on organic traffic.

Now, there are countless philosophies of SEO, many of which are well-educated assumptions about what Google is looking for. The reality is that no one knows exactly how Google's algorithm is calculated, and it doesn't help that the algorithm is constantly being updated. Luckily, there are a few best practices that are accepted across the board, most of which have been confirmed by Google as contributing factors to search engine ranking. This blog is going to focus on a few of those best practices and the modules we have found helpful in both our Drupal 7 and Drupal 8 projects.

So, without further ado, here is our list of Drupal modules you should consider using on your site to help improve your SEO:

### XML Sitemap Module

As the name suggests, XML Sitemap allows you to effortlessly generate a sitemap for your website. A sitemap allows Google and other search engines, like Bing and Yahoo, to easily find and crawl the pages on your site. Is a sitemap necessary? No. But if it helps the pages of your site become easily discoverable, why not reduce the risk of pages not being indexed? This is especially important if you have a large site with hundreds or even thousands of pages. Having a sitemap also provides search engines with valuable information, such as how often a page is updated and its level of significance compared to other pages on your site.

XML Sitemap lets you generate a sitemap with the click of a button, and best of all you can configure it to periodically regenerate the sitemap so it includes any new pages you've published on your Drupal site. Once your website has a sitemap, it is recommended to submit it in Google Search Console, and if you haven't claimed your website on Google Search Console yet, I would highly advise doing so, as it provides helpful insight such as indexing information, critical issues, and more.

### Metatag Module

The next Drupal module is one that can really help boost your search engine ranking and visibility. Metatag is a powerful module that gives you the ability to manage a large number of meta tags on your site. A meta tag is an HTML tag containing information that search engines use to determine the relevance of a page when calculating search ranking. The more information available to search engines such as Google, the better the chances that your pages will rank well. The Metatag module allows you to easily update some of the more popular tags, such as the meta description, meta content type, title tag, viewport, and more.

Adding and/or updating your meta tags is the first step of good SEO practice. I've come across many sites that pay little to no attention to their meta tags. Luckily, the Metatag module for Drupal can help you easily boost your SEO, and even if you don't have time to go through and update your meta tags manually (which is recommended), the module can also generate your tags automatically.

### Real-Time SEO for Drupal Module

The Real-Time SEO for Drupal module is a powerful tool on its own, but it is even better when paired with the Metatag module we just discussed. This module takes into account many SEO best practices and gives you a real-time analysis, ensuring that your content is well optimized for search engines. It will tell you if your content is too short, how readable your posts are, and it also provides a snapshot of how your page will appear in Google. It also flags missing or potentially weak tags, which is why I mentioned that this module and the Metatag module work extremely well together: Real-Time SEO for Drupal can tell you how to improve your meta tags, and with the Metatag module you can quickly update them and watch in real time how the changes affect your SEO.

The Real-Time SEO for Drupal module is a simple, yet incredibly useful tool in helping you see the SEO health of your pages. If you are just getting into SEO, this is a great place to start, and even if you’re a seasoned pro this is a nice tool to have to remind you of any meta tags or keyword optimization opportunities you may be missing.

### Google Analytics Module

The final module is the Google Analytics module. Google Analytics is by far the most widely used analytics platform. The invaluable information it provides, the numerous tools available, and the integrations it allows make it a requirement for anyone looking to improve the SEO of their Drupal website. This Drupal module is extremely convenient, as it does not require a developer to touch any of the site's code. After installing the module, all you have to do is enter the web property ID that is provided to you after you set up your account on Google Analytics.

From the Google Analytics module UI, you have a number of helpful options, such as which domains to track, which pages to exclude, which user roles to track, tracking clicks and downloads, and more. The Google Analytics module for Drupal is another great tool to add to your tool belt when trying to improve your SEO.

### Final Thoughts

This list of helpful SEO modules for your Drupal 7 or 8 site could easily have been much longer, but these are a few key modules to help you get started. SEO is something that should not be ignored; as I mentioned at the beginning of the blog, it's a shame to build a site only to find that no one is actually visiting it, and using these modules properly can definitely help prevent that. If you would like to learn about other great modules to help your SEO, please leave a comment below and I'll write a follow-up blog.

Aug 02 2017

When migrating from Drupal 7 to Drupal 8, it is important to remember to migrate the redirects as well. Without them, users will not find your content if, for example, a redirect was shared on social media. Using the Migrate Plus module, it is quite simple to write a migration for the redirects, and the module contains some good examples of how to get started writing your custom migrations.

Write your node migrations

I am going to assume that you have already written migrations for some content types and have the migration group defined. Once those migrations have been run, your database should contain a migrate_map_{name}_{type} table for each of them. This is where we will be able to find each imported node's new ID, which will be necessary for importing the redirects.

Write the yml file for redirect migrations

For example, let's say we have a module called blog_migrations. In that module we have a group called blog and migrations for the news and opinion content types. Inside the config/install directory, add a new yml file called migrate_plus.migration.blog_redirect.yml, where blog is the name of the group being migrated. This file gives the migration an id and label and defines the process to use.

id: blog_redirect
label: Path Redirect
migration_group: blog
migration_tags:
  - Drupal 7
source:
  plugin: blog_redirect
  key: blog
process:
  rid: rid
  uid: uid
  redirect_source/path: source
  redirect_source/query:
    plugin: d7_redirect_source_query
    source: source_options
  redirect_redirect/uri:
    plugin: d7_path_redirect
    source:
      - redirect
      - redirect_options
  language:
    plugin: default_value
    source: language
    default_value: und
  status_code: status_code
destination:
  plugin: entity:redirect

Write the migrate source

Create the file BlogRedirect.php in the module's src/Plugin/migrate/source folder.

<?php

namespace Drupal\blog_migrations\Plugin\migrate\source;

use Drupal\Core\Database\Database;
use Drupal\migrate\Row;
use Drupal\redirect\Plugin\migrate\source\d7\PathRedirect;

/**
 * Migrate source plugin for Drupal 7 path redirects.
 *
 * @MigrateSource(
 *   id = "blog_redirect"
 * )
 */
class BlogRedirect extends PathRedirect {

  /**
   * {@inheritdoc}
   */
  public function query() {
    // Only migrate redirects that do not point at user pages.
    $query = $this->select('redirect', 'p')->fields('p')
      ->condition('redirect', '%user%', 'NOT LIKE');

    return $query;
  }

  /**
   * {@inheritdoc}
   */
  public function prepareRow(Row $row) {
    // Default to a 301 redirect when the source row has no status code.
    $current_status_code = $row->getSourceProperty('status_code');
    $status_code = $current_status_code != 0 ? $current_status_code : 301;
    $row->setSourceProperty('status_code', $status_code);

    $current_redirect = $row->getSourceProperty('redirect');
    $explode_current_redirect = explode("/", $current_redirect);

    // The content types whose nodes were migrated in this group.
    $map_blog_array = array(
      'news',
      'opinion',
    );

    if ($explode_current_redirect[0] == 'node') {
      // Look up the content type of the target node in the Drupal 7 database.
      $resource_type = $this->getDatabase()
        ->select('node', 'n')
        ->fields('n', ['type'])
        ->condition('nid', $explode_current_redirect[1])
        ->execute()
        ->fetchField();

      if (in_array($resource_type, $map_blog_array)) {
        // Find the new node id in the migrate map table for that migration.
        $new_node_id = Database::getConnection('default', 'default')
          ->select('migrate_map_blog_' . $resource_type, 'm')
          ->fields('m', ['destid1'])
          ->condition('sourceid1', $explode_current_redirect[1])
          ->execute()
          ->fetchField();

        // Point the redirect at the migrated node.
        $new_redirect = 'node/' . $new_node_id;
        $row->setSourceProperty('redirect', $new_redirect);
      }
    }

    return parent::prepareRow($row);
  }

}

Run the migrations

Using the config_devel module, import the configuration into the active store so you can run the migration:

drush cdi1 /modules/custom/blog_migrations/config/install/migrate_plus.migration.blog_redirect.yml

Then run the actual migration:

drush mi blog_redirect

After running that you should now have migrated the two content types' redirects, pointing at the new node IDs they were given! Any questions? Let us know in the comments below.

Jun 15 2017

Tom Friedhof: There's a lot of hype around integrating Pattern Lab with your Drupal theme these days, particularly because Drupal's template engine is now Twig, which is one of the template engines Pattern Lab uses. The holy grail of having a living style guide and component library is now a lot more feasible! But what about Drupal 7 sites? Twig doesn't exist in Drupal 7. Today I'm going to show you something we're working on at ActiveLAMP to implement Pattern Lab templates in Drupal 7.

Hey guys, I'm Tom Friedhof, a solutions architect here at ActiveLAMP. Let me first start off by defining what I mean when I say a living style guide and component library, since this idea can mean different things to different people. A living style guide and component library is the HTML, CSS, and JavaScript that document the design of a user interface. The "living" part of that means that the style guide should be constantly in sync with the actual app that implements the interface as the design improves or changes.

How do you easily keep the real app and the style guide in constant sync?  That could be a lot of work, given that once the initial designs are done, the design iterations are typically done directly in the app being built, making the style guide obsolete and outdated.

That's where the promise of Pattern Lab integration with Drupal comes in. You can easily keep the style guide in sync if your app depends on the style guide for all of its HTML, CSS, and JavaScript. That's why there is so much hype around building "Pattern Lab" themes in Drupal 8 right now: Drupal 8's theme engine is one that Pattern Lab uses, and reusing the same Twig templates that your UX designer created in Pattern Lab within Drupal is now an option.

Well, we're still working on Drupal 7 sites, so how do we benefit from this approach in Drupal 7? To be honest, we're still hashing out our approach to doing this in Drupal 7. We have the process built out enough that we're using it on a new theme we're developing for a client, but we're still constantly iterating on the process and improving it as we run into things.

What I want to show you guys today is the direction that we're going, and I'm hoping to get your feedback in the comments so that we can continually improve and iterate on this system. First off, we decided not to use the Twig version of Pattern Lab. We spent half a day trying to get Twig working in Drupal 7 with the Twig for Drupal 7 module, and realized we'd be going down a pretty deep rabbit hole just to make Twig work in D7.

Rather than fight Drupal 7 and twig, we decided to use a much simpler template engine called Mustache.  Mustache is a language agnostic template engine, and there is a really nice PHP implementation of it. With that said, we installed the gulp version of Pattern Lab, which uses Mustache templates in JavaScript.  We now have the ability to share templates.

I’m going to jump into a demo here in a second. However, I’m not going to do a deep dive of how Pattern Lab works or how Drupal and Panels work.  I’ll dive deeper in future videos with those details if you’re interested.  Leave a comment if you want to see that stuff and we’ll put it on our list of content to share. I’m going to give you guys a 10,000 foot view of how things are shaping up with our Drupal 7 integration to Pattern Lab process.

All right, so here we are in our Drupal 7 install. This is pretty much a vanilla Drupal installation. If I jump over to the Drupal directory, you can see here within my sites/all/modules directory, here are all the modules that I need for the demo that I'm gonna use today. We like to use Panels and Panels Everywhere, so what I'm gonna be demoing today is with Panels and Panels Everywhere, but the stuff that I'm gonna show does apply to just regular template files if you don't want to use Panels and just want to stick with the core TPL system within Drupal. One of the other things that we have in here is a theme called Hills; this is where all the magic actually happens. One thing that you'll notice in this Hills theme is we have two directories called node_modules and vendor. We're actually pulling in dependencies from npm and from Composer (Packagist) into this theme. If we open up our package.json, which actually defines the npm dependencies, you can see that we're defining a dependency called hills-patternlab. This is basically the repo that holds our Pattern Lab instance; it's the living style guide that the UX designer uses to actually update the patterns, update CSS and make any changes that need to be made in the UI.

The composer.json file is requiring the Mustache PHP implementation. We're using this library for obvious reasons to render the Mustache templates that we're pulling in from Pattern Lab. This theme needs to be instantiated with an NPM install and a composer install to get these dependencies and once you've done that, then you're ready to start working on the theme.

One other thing I want to do before I actually start building this out in Drupal is show you the Pattern Lab instance. I am in our Hills directory; I can run npm start and this should pull up our Pattern Lab instance. Here is our Pattern Lab instance. I'm not going to go into the details of what Pattern Lab is, but essentially it's all the components that make up a website. For example, you can see a page header looks like that; if we wanted to see what the header looks like, it looks like this, and all of these templates are basically Mustache templates within our Pattern Lab. Let me open up node_modules so you can actually see these templates real quick. The Pattern Lab directory structure looks like this: within the source directory inside of patterns we can go into organisms and actually look at what a header looks like within Pattern Lab. This is including a couple other patterns from within Pattern Lab, so let's see what the navigation actually looks like by going in here. This is the HTML that makes up a navigation.

This template includes other patterns within Pattern Lab, let me drill down to the primary links pattern. Here's what our primary links look like, you can see that this is outputting variables, for example, href and name here and then it's including yet another pattern within Pattern Lab, let me open that one as well. Here you can see that it's outputting more variables, classes and name. These variables are actually defined within Pattern Lab's data directory. I'm not going to go into detail about how that works, but let me just show you what that ends up rendering. You can see here's our organism header, that primary links pattern is this here. This is basically rendering data from Pattern Lab's data directory. If I go into the data directory just real quick, within the primary-links.json file you can see this is the actual data that it's pulling in. If we wanted to say staff services, this is going to rebuild and we see staff services here. That's essentially how Pattern Lab works with data in a nutshell. What I'm gonna show you guys is how we actually integrate this with Drupal. Eventually what's going to happen is these Mustache templates are going to render variables from Drupal and not the data specified in this Pattern Lab data directory.

Let's jump over to Drupal. Here's our Drupal installation. First thing that I'm going to do is switch the theme to our Hills theme, our Hail to the Hills theme, so I'm gonna enable this and set this as the default. Now I'm gonna open up the homepage in a new tab and drag that over here. So now we can see here's what we get out of the box with this Hail to the Hills theme, there's really nothing in this theme yet. There is stuff in the theme, but I'll get to that in a second, but this is what you'll get once you enable it initially. We're using Panels Everywhere with this theme so what I'm going to do is I'm going to go configure Panels Everywhere. Panels Everywhere gives you a site template by default, so I'm gonna come over here and edit it and I'm gonna add a new variant. We'll just call this default and I'm gonna come over here and choose a layout within the PL Templates layout category and I'm going to hit full width one column, continue and then we'll just work through this UI. Then I'm going to give it basically the basic content that we need to render so that you can actually see something on the page when you visit a page within the site. We'll create the variant here, we'll update and save that so now let's look to see what our homepage looks like.

Our homepage is starting to look a little bit better, we're basically hitting the default homepage for Drupal, which just shows the title and a "no front page content has been created yet" message. You'll notice here in this layout tab, we had this category called PL Templates and it's pulling in full width and main content. Let me show you where these are defined, if I jump back into our theme, within our theme info hook or info file, panels allows you to specify a directory where you're going to define your layout plugins. The way you specify that is by using the string plugins panel layouts and then you just give it a path to your directory. Let me close node_modules so this is a little bit easier to see. If I come into this layouts directory, you can see that I have two layouts specified here, we used the full width layout so I'm gonna jump into that first.

This isn't a tutorial on how to create plugins, but essentially what we're doing is we're just creating a ctools layout plugin. We've called this the full width one column, we've said the theme function or implementation for this is panels full width and you can see that we have a panels full width template here. So when this layout is used, it's going to actually use this template. If we jump into that, all this template is doing is it's printing out whatever is in the content area. This has nothing to do with Pattern Lab yet, but this is how you essentially set up a default template with regions within Panels Everywhere. Let's jump back to Drupal and go into our content area. You remember we have the default template set up now, but now let's start to pull in some of the patterns from Pattern Lab. If I come over here, this pattern here called organisms header, let's pull that into Drupal first. I'm gonna come in here and add content, I have this PL components over here and we have a pattern called header. I'm gonna click on that and this header is asking for four pieces of data, so I'm gonna give it the data that it needs. I'm gonna browse for a file, let's look at that, that looks good, we'll upload this, go to next, we'll say logo, logo. Then we'll give it a path and we'll tell it what main menu to use, we'll just say use this as the main menu and with help menu we'll just tell it to use the user menu for now, and then finish. Let's drag this up to the top, hit update and save and let's go see what happened. Let's go back to our homepage and voila, we've got a header pulled from Pattern Lab in here. You'll notice that the menu is not the same menu that's coming from Pattern Lab and why is that? It's because it's pulling the actual primary links from Drupal.
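
For readers following along, the ctools layout plugin mentioned at the top of this step boils down to a small .inc file plus a template. The sketch below is only an approximation of that shape, not the actual theme code:

<?php

// layouts/panels_full_width/panels_full_width.inc -- approximate sketch only.
$plugin = array(
  'title' => t('Full Width: One Column'),
  'category' => t('PL Templates'),
  'theme' => 'panels_full_width',
  'regions' => array(
    'content' => t('Content'),
  ),
);

// The matching panels-full-width.tpl.php then simply does:
// print $content['content'];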

If we go into the menu system here, we chose the main menu to use, the main menu here and if we create another link here we can say another link, have this go to front. Then we come back to our homepage, you can see that this is actually pulling from Drupal. We have no sub-menus underneath that and that's why it's not showing anything underneath it. But you can see that we're actually using our own data from within Drupal.

How did we actually pull in this whole header section from Pattern Lab? Let's go back to where we actually pulled that in. I'm gonna go to structure, pages, back into our site template. We had this content type that we pulled in, this PL header content type. This is a ctools content type and you can define those with a content type plugin. Because this plugin only exists within this theme, we defined the ctools content type within this theme. The way we did that is within our info file, we're specifying where content types for ctools should live and we're saying those content types should live in the Pattern Lab directory, which is right here. This behavior isn't default behavior in ctools, so we did have to patch the ctools modules so that we could do this. You can check out that patch here and leave any comments if you have any comments or suggestions regarding that patch. It's a very small patch, but it basically allows us to define content types, ctools content types, within our theme and not have to define a module just for these content types.

Let's look inside of this Pattern Lab directory and see what we have. The way ctools plugins work is it'll traverse the directory that you've defined and look for any .inc files and read those in while it's processing plugins. Within the organisms header directory, I have a file called organisms_header.inc. The content type system within ctools will pick up this file and it'll read this variable that defines the actual plugin. You can specify other functions to be able to expose, for example, an edit form. Here you see we have a submit handler for that edit form. But here's where all the magic happens, here's a function that we've defined called preprocess. This is where we actually map the Drupal data into data that Pattern Lab understands and we pass this data to Mustache to actually render the pattern. Let me back up and show you what Pattern Lab is expecting to see within this content type. I'm going to open up this directory in a new PhpStorm window so that I don't have to continue to scroll up. Go into sites, all, themes, hills, node_modules, hills-patternlab, open a new window, yes. Here is the Hills Pattern Lab directory within that theme, that Hills theme. What I'm going to do is go into the pattern that we're actually pulling in and so it is here. This pattern is pulling in data from this data file and the hints that you can get from this data file is basically looking at what these patterns rely on. We're not producing or we're not outputting any data here, so what we need to do is we need to drill down to see where data actually is being output. If we go into navigation, we can see navigation still isn't outputting any data, it's still just including other atoms and molecules.

So let's jump into the primary links. Now primary links is starting to actually output data. We have that href variable there, we have a name variable there. But then you can also see that it's pulling in yet another include. But this is where we want to start. We're using data here in this primary links component, within Pattern Lab you can specify data in this data directory. We have this primary-links.json file and we basically specified an object with a primary links key. Now Pattern Lab's going to read in all these data files and essentially merge the objects so that you can reference them by whatever key is at the root of that object, they all get merged into the data json file. If we look here, primary links is being looped through and then the href and the name is being rendered out. If we look at this, primary links is an array with name and href. If we collapse these guys, you can see we have name, we have several links here. Staff services, work tools, news, administrative units, contact us, that all coincides with our pattern over here.

Within that pattern, coming back over here to primary links, you can see that it's including molecules drop down and that is here. That's also rendering or looping through the links and using the classes variable and the name variables. So if I come back into that data file and open one of these guys up, you can see here's the links array and there's the name and href that is being used. It looks like we're only specifying classes on this very last button here. If we come back here, you'll see that that class is actually specified there and that's what makes that look a little bit different.

Essentially what we're doing in Drupal is we're just mapping data to the data that Pattern Lab expects. Let's jump back over to Drupal and so now here's our Drupal content type. You can see here, essentially we're returning an array, nav bar brand. Here's our primary links, so that's what we just looked at, the primary links is essentially creating an array that looks like this, but in Drupal. You can see here, hills_menu_tree, this is essentially creating that array that Pattern Lab is expecting.
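
In code, that mapping is roughly the following shape. This is a simplified sketch: the function and key names are assumptions based on what is shown in the video, and the real plugin handles more fields:

<?php

// patternlab/organisms_header/organisms_header.inc -- approximate sketch only.
$plugin = array(
  'title' => t('Header (Pattern Lab)'),
  'category' => t('PL Components'),
  'edit form' => 'hills_organisms_header_edit_form',
  'render callback' => 'hills_organisms_header_render',
);

// Assumed preprocess callback: build the same structure that Pattern Lab's
// data/*.json files describe, but from values collected on the edit form.
function hills_organisms_header_preprocess($conf) {
  return array(
    'nav-bar-brand' => array(
      'href' => $conf['logo_path'],
      'logo' => file_create_url(file_load($conf['logo_fid'])->uri),
    ),
    // hills_menu_tree() is the theme's helper that turns a Drupal menu into
    // the primary-links array the pattern expects.
    'primary-links' => hills_menu_tree($conf['main_menu']),
  );
}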

I'll show you an easier example of what that looks like as we continue to build out this page. Let's add another pattern to this site layout. If we come into the default template and we add content, go to Pattern Lab components, I'm going to add in a footer calling card. So that footer calling card, if we come over here into molecules, then footer, we can see the footer calling card looks like this. If we come into the template for that, that's a molecule under footer, the calling card looks like this. This takes several variables, it takes a title, phone, email and then it loops through a social array and outputs the href and the network we defaulted in Pattern Lab. If we look at the data that we defaulted in Pattern Lab, we can come over here to footer calling card and you can see that we've got a calling card key and then we've just specified the data down there.

We're gonna render this in Drupal so we created a content type that essentially has an edit form for all of this information. Let's just fill this out, information technology and let's just go sure and yes. Let's keep that there, update and save. Now there's our footer calling card. All right, you guys get the idea there, we're able to create content types, we're able to create an edit field that passes in data or we're collecting data and then we pass that data into Mustache and render the Pattern Lab template with the data that we are pre-processing.

What if you're working with content that isn't going to be a ctools content type, for example nodes? Let's create some nodes and see what that actually looks like. I'm going to come in here into configuration and we're going to devel generate a few nodes. I can come down here, hit generate content and let's just create 10 basic pages, we'll generate that. Now let's go back to our homepage here, so there we have five nodes listed on the homepage now. How do we actually style this so this looks like something? The first thing I'm gonna do is put this into a content container. Let me go over to our Pattern Lab, I'm gonna go to layouts and look at our main content. Our main content goes into a container that looks like this and the content is output inside of that container. What I'm going to do is I'm going to actually create this homepage as a view so we can actually control the template that's being output here. I'm gonna come over here to structure, go down to views and let's add a new view, let's call this homepage list, we'll continue and edit that.

We're gonna make this a content pane, I'm actually going to get rid of this page here, we didn't need that I should've unchecked it. Within that content pane, we're going to render fields and we're not really gonna be using the views output so uncheck that and then we'll also throw in the body here and we're going to limit that to 600 characters, so that's what our view is going to look like that we're going to use on the homepage. Let's go ahead and save that. What I'm gonna do is I'm gonna create a new front page over in page manager. Within page manager, I'm gonna add a custom page, we'll call this front page and then we'll give this the path of front and we're gonna check this box that says make this your site homepage, we'll continue. Then I'm going to choose the layout called main content and what that's going to do is that's going to use the layout from Pattern Lab that uses main content and I'll show you that here in a second.

I'll hit continue, continue and then inside of here, we're going to output the view that we just created. So here's that view there, so we'll save that and we'll hit finish there. Update and save so now we have a front page that's going to render a view called homepage list using the layout main content. So let's take a look to see what happened here, let's go back to the homepage and there you go, you can see that we're now outputting that actual view within the page content. This home site install shouldn't output here and this is actually being output by panels, so what we're gonna do is we're gonna disable that title there. If we come back here into content and then ... Actually this is going to be in the site template. We'll edit that, go into content and within the page content section, we're going to override the title and make it nothing, update and save that. Now we're getting a lot closer to what our styles look like in Pattern Lab.

Now the next step that we want to do is actually make this view look like something in Pattern Lab. What we're going to do is we're going to make that look like a two column stack view. We have this data here that's set up in a two column stack, we're gonna make the data from views output this template when it renders. Let's jump back into views, so let's go into the front page that we just created and go into the content and so here's that view. I'm going to open this cog here and edit this view in a new tab, so views gives you the ability to specify a theme file. So what we're gonna do is we're actually going to specify this theme file in our theme. I'm gonna copy that and then jump into our theme over here. So here's our Drupal theme, going to my templates directory let's create a views directory so that all our views live in the same directory within our templates directory. Let's create a file called views-view-unformatted--homepage-list.tpl.php.

Now, let's just put in hello world so that you can see that this is actually working. When we rescan the templates, views is going to pick up that template file as you can see now that it's bolded. We'll save this and refresh the homepage and you can see now it is outputting hello world, which is in our template file. How do we actually use the template that is in Pattern Lab? Let's go back into views, so this is where the magic happens in this theme. We have a variable exposed called 'm', which is basically the Mustache connector to Pattern Lab. On that connector, we have a method called render, so this is where we're going to specify the actual template that we want to use within Mustache. There is a naming convention to this and we'll document what that naming convention is, but essentially what you need to do is specify what type of pattern it is, this is an organism so it's in organisms and then what the name of the template is, this is a two column stack. That's really it, that's all you have to do to render this template, so let's go ahead and save this and then look at our view here.

That didn't render anything, let's go back to our template and you can see that we're actually not printing anything out, so let's actually print out the results of that call and then let's see what happens. There you go, now we're actually printing out the template from Pattern Lab, but you can see that this is actually pulling the data from Pattern Lab, it's pulling out its default data from Pattern Lab. How do we actually make it put our own data? Just like the content types that I was showing you, we can send it a map of how our data should look. There's two ways we can do this, this method here, render, actually takes two arguments. One of them is an array and I'm gonna show you that first and this array is the actual map that the component is expecting. If we look to see what the component is expecting, let me jump back over to our Pattern Lab and then go into the data file that the two column stack is expecting, we can see that it's expecting an array with a two column stack as a key and then that as an array of objects with card as the key and then title and nutgraf.

What I'm going to do is just pull some of that data out of there. Let's go back to our Drupal theme and paste that in here. Obviously JSON syntax doesn't work in PHP, so we need to convert some of this; I'm going to make it an array so that it looks like something PHP can understand. Now I'm going to copy this array, but we also need the key it's expecting. I said primary links at first, but sorry, we're looking at the two column stack, so two column stack is the key it's expecting. Let's grab that; now this should do it, and that closes out the map there. Now we have this two column stack and we're actually passing it data, so you can pass it whatever data you want, but essentially this is the shape the component is looking for, so if you have your data, just go ahead and put it right here. Let's see what this actually looks like when we save that, come back over to Drupal, and hit refresh. You can see it's printing out five cards with the same data in there.
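For reference, the views template file at this stage ends up looking roughly like the following. This is only a sketch: the file name is illustrative, the $m connector and its render() method are the ones described in the video, and the card data is made up.

<?php
// Illustrative views template file (e.g. views-view-unformatted--homepage-list.tpl.php).
// $m is the Mustache connector exposed by the theme; render() takes the pattern
// path and an optional data map shaped like the component's Pattern Lab data file.
print $m->render('organisms/two-column-stack', array(
  'two-column-stack' => array(
    array('card' => array('title' => 'Card one', 'nutgraf' => 'Teaser text for card one.')),
    array('card' => array('title' => 'Card two', 'nutgraf' => 'Teaser text for card two.')),
  ),
));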

We have another way of mapping this data, and that is through a pre-process callback. The way that works is that there's a third parameter you can pass to this render function. Let me delete that array we just defined, and for the second parameter we're just going to pass it the variables that Drupal knows about. Inside of a Drupal theme, when you're working with your template, there's a variable exposed called variables that you can use to output whatever variables you want. What we're going to do is pass that to a callback that we define here, and we'll take it in as 'v' just so that we don't confuse it with variables here. What's happening behind the scenes is that variables is being passed into this callback, and now you can run PHP logic to pre-process your variables. Just to show you what variables looks like, let's dump it so you can see what's actually being passed into this callback. If we hit save and refresh, you can see here is what the view is actually outputting, and what we really want within this data is the results that the view is outputting. Here's all the data that the view is outputting, so this is what we want to map into what Pattern Lab is expecting.

If we come back into here, what we'll do is return an array in the shape Pattern Lab is expecting; that array expects to have this key, so we'll copy that, and then that key has a bunch of cards associated with it. We're going to do the pre-processing in a separate function just to keep this clean, so I'll define it as process card data. For the function that we're going to define now, we'll just copy this function signature here. This function is going to need to take the data that we're passing it, so we pass in 'v' here and take 'v' here, but what we really need from 'v' is just the results from the view, so maybe we just pass in the view results from 'v'. Then down here we can call that the results, since it's an array of results.

Now, essentially what we need to do is recreate this data format with the function we're writing inside of Drupal. What we'll do is loop through the results as result and create what we need. Actually, we need to declare that data array up here and then return the data array down here. Then we need to specify each element of the data array. Each element is going to be equal to another array, and in that array... let's see what that needs to look like: it needs to have a card key with another array containing title, nutgraf and href. We're going to leave href off for now, since we don't have the links yet. So let's go back here and start to set up what this looks like: it needs title, it's going to be something, then nutgraf and href. We also need this to be inside an array with card as the key, so let's create the card here and pull this inside of the card. Now this is starting to look a lot like the data that Pattern Lab is expecting.

Now let's actually map the data from Views. If we come back over here and look at what we have in each one of these result objects, we have the node title and we have the field body. Essentially we just need to write that into the template: node title, and then for the body we have field body zero rendered markup. For the href, we do have the NID, so I guess we can pass that here. All right, that should be all that we need, so if we save this and go back and refresh... that didn't work. Let's take a look at what we did wrong here. I'm going to output our variables again and make sure that we map this properly. View, result... so we've got 'v', view, result; 'v' is an array, not an object, so this will probably fix it. So we have 'v', then view, then result: 'v' is an array, view is an object, and result is an array of objects. We're looping through the array and then reading each object. Let's save this, see if that actually works, and get rid of this debug output.

There we go, there is our Views data within our template from Pattern Lab. The idea is that the Drupal theme developer just needs to specify one file that renders the template from Pattern Lab, either passing it the variables that Pattern Lab is expecting directly, or passing a callback function that our Mustache connector can call to map the variables into what Pattern Lab is expecting.
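Pieced together, the callback version of the template file looks something like this. Again, a rough sketch: the connector API follows the video, but the helper name, result property names, and href handling are approximations of what the transcript walks through rather than exact code.

<?php
// Illustrative views template file using the pre-process callback approach.
// Drupal's template $variables are handed to the connector, which passes them
// to the callback so we can map the view results into the component's shape.
print $m->render('organisms/two-column-stack', $variables, 'process_card_data');

function process_card_data($v) {
  $data = array();
  foreach ($v['view']->result as $result) {
    $data['two-column-stack'][] = array(
      'card' => array(
        // Property names follow the transcript; adjust to your view's field aliases.
        'title' => $result->node_title,
        'nutgraf' => $result->field_body[0]['rendered']['#markup'],
        'href' => '/node/' . $result->nid,
      ),
    );
  }
  return $data;
}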

So that’s the direction we’re going.  Hopefully that gives you the idea of what we’re trying to do. We don’t have any HTML, CSS, or JavaScript in our theme.  Any changes needed in those files get pushed upstream into the living style guide, and we pull the changes back down into Drupal.

There’s a lot going on under the hood in this theme to make all of this work. We’re thinking that this theme will end up as a base theme that you can extend, to take advantage of all this functionality.  However, that is still yet to be determined and we may change our minds on that approach.  If you have an opinion on that, please let us know in the comments.

There are definitely some trade-offs to using this living style guide approach, and those trade-offs exist regardless of the Drupal version you use. I plan to release a future video to talk about the benefits and disadvantages of the living style guide approach with Drupal. This approach definitely does not fit every Drupal theme. More about that later.

Also, we’re going to be releasing more videos as we iterate on this theme, so if you’re interested in following along with us, make sure you subscribe to our channel. Thanks for watching!

Mar 23 2017
Mar 23

Preface

We recently had the opportunity to work on a Symfony app for one of our higher ed clients, for whom we had previously built a Drupal distribution. Drupal 8 moving to Symfony has enabled us to expand our service offering, and we have found more opportunities to build apps directly with Symfony when a CMS is not needed. This post is not about Drupal, but we're cross-posting it to Drupal Planet to demonstrate the value of getting off the island. Enjoy!

Writing custom authentication schemes in Symfony used to be on the complicated side. But with the introduction of the Guard authentication component, it has gotten a lot easier.

One of our recent projects required us to interface with Shibboleth to authenticate users into the application. The application was originally written in Symfony 2 and was using this bundle to authenticate with Shibboleth sessions. However, since we were rewriting everything in Symfony 3, which the bundle is not compatible with, we had to look for a different solution. Fortunately for us, the built-in Guard authentication component turned out to be sufficient, allowing us to drop a bundle dependency and write only one class. Really neat!

How Shibboleth authentication works

One way Shibboleth provisions a request with an authenticated entity is by setting a "remote user" environment variable that the web server and/or the applications behind it can read.

There is obviously more to Shibboleth than that; it does a lot of work to perform the actual authentication. We defer all the heavy lifting to the mod_shib Apache2 module, and rely on the availability of the REMOTE_USER environment variable to identify the user.
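In plain PHP terms, once mod_shib has done its job, all the application has to look at is a server variable, along these lines (a simplified illustration; the exact variable name depends on how your Shibboleth attributes are mapped):

<?php
// Simplified illustration: mod_shib exposes the authenticated identity as a
// server variable (REMOTE_USER here; often an attribute such as HTTP_EPPN).
$remoteUser = isset($_SERVER['REMOTE_USER']) ? $_SERVER['REMOTE_USER'] : null;

if ($remoteUser) {
  // The request is pre-authenticated; treat $remoteUser as the username.
}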

That is pretty much all we really need to know; now we can start writing our custom Shibboleth authentication guard:




<?php

namespace AppBundle\Security\Http;

use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\RedirectResponse;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Routing\Generator\UrlGeneratorInterface;
use Symfony\Component\Security\Core\Authentication\Token\TokenInterface;
use Symfony\Component\Security\Core\Exception\AuthenticationException;
use Symfony\Component\Security\Core\User\UserInterface;
use Symfony\Component\Security\Core\User\UserProviderInterface;
use Symfony\Component\Security\Guard\AbstractGuardAuthenticator;
use Symfony\Component\Security\Http\Logout\LogoutSuccessHandlerInterface;

class ShibbolethAuthenticator extends AbstractGuardAuthenticator implements LogoutSuccessHandlerInterface
{
    
    private $idpUrl;

    
    private $remoteUserVar;

    
    private $urlGenerator;

    public function __construct(UrlGeneratorInterface $urlGenerator, $idpUrl, $remoteUserVar = null)
    {
        $this->idpUrl = $idpUrl;
        $this->remoteUserVar = $remoteUserVar ?: 'HTTP_EPPN';
        $this->urlGenerator = $urlGenerator;
    }

    protected function getRedirectUrl()
    {
        return $this->urlGenerator->generate('shib_login');
    }

    
    public function start(Request $request, AuthenticationException $authException = null)
    {
        $redirectTo = $this->getRedirectUrl();
        if (in_array('application/json', $request->getAcceptableContentTypes())) {
            return new JsonResponse(array(
                'status' => 'error',
                'message' => 'You are not authenticated.',
                'redirect' => $redirectTo,
            ), Response::HTTP_FORBIDDEN);
        } else {
            return new RedirectResponse($redirectTo);
        }
    }

    
    public function getCredentials(Request $request)
    {
        if (!$request->server->has($this->remoteUserVar)) {
            return;
        }

        $id = $request->server->get($this->remoteUserVar);

        if ($id) {
            return array('eppn' => $id);
        } else {
            return null;
        }
    }

    
    public function getUser($credentials, UserProviderInterface $userProvider)
    {
        return $userProvider->loadUserByUsername($credentials['eppn']);
    }

    
    public function checkCredentials($credentials, UserInterface $user)
    {
        return true;
    }

    
    public function onAuthenticationFailure(Request $request, AuthenticationException $exception)
    {
        $redirectTo = $this->getRedirectUrl();
        if (in_array('application/json', $request->getAcceptableContentTypes())) {
            return new JsonResponse(array(
                'status' => 'error',
                'message' => 'Authentication failed.',
                'redirect' => $redirectTo,
            ), Response::HTTP_FORBIDDEN);
        } else {
            return new RedirectResponse($redirectTo);
        }
    }

    
    public function onAuthenticationSuccess(Request $request, TokenInterface $token, $providerKey)
    {
        return null;
    }

    
    public function supportsRememberMe()
    {
        return false;
    }

    
    public function onLogoutSuccess(Request $request)
    {
        $redirectTo = $this->urlGenerator->generate('shib_logout', array(
            'return'  => $this->idpUrl . '/profile/Logout'
        ));
        return new RedirectResponse($redirectTo);
    }
}

Let's break it down:

  1. class ShibbolethAuthenticator extends AbstractGuardAuthenticator ... - We'll extend the built-in abstract to take care of the non-Shibboleth specific plumbing required.

  2. __construct(...) - As you would guess, we are passing in all the things we need for the authentication guard to work: the Shibboleth IdP URL, the remote user variable to check, and the URL generator service which we will need later.

  3. getRedirectUrl() - This is just a convenience method which returns the Shibboleth login URL.

  4. start(...) - This is where everything begins; this method is responsible for producing a response that will help the Security component drive the user to authenticate. Here, we are simply either 1.) redirecting the user to the Shibboleth login page; or 2.) producing a JSON response that tells consumers that the request is forbidden, if the client is expecting application/json content back. In that case, the payload will conveniently inform consumers where to go to start authenticating via the redirect property. Our front-end application knows how to handle this.

  5. getCredentials(...) - This method is responsible for extracting authentication credentials from the HTTP request, i.e. username and password, a JWT in the Authorization header, etc. Here, we are interested in the remote user environment variable that mod_shib might have set for us. It is important that we check that the environment variable is actually not empty, because mod_shib will still have it set but leave it empty for unauthenticated sessions.

  6. getUser(...) - Here we get the credentials that getCredentials(...) returned and construct a user object from it. The user provider will also be passed into this method; whatever it is that is configured for the firewall.

  7. checkCredentials(...) - Following the getUser(...) call, the security component will call this method to actually verify whether or not the authentication attempt is valid. For example, in form logins, this is where you would typically check the supplied password against the hashed credentials in the data store. However, we only need to return true unconditionally, since we are trusting Shibboleth to filter out invalid credentials and only let valid sessions get through to the application. In short, we are already expecting a pre-authenticated request.

  8. onAuthenticationFailure(...) - This method is called whenever our authenticator reports invalid credentials. This shouldn't really happen in the context of a pre-authenticated request as we 100% entrust the process to Shibboleth, but we'll fill this in with something reasonable anyway. Here we are simply replicating what start(...) does.

  9. onAuthenticationSuccess(...) - This method gets called when the credentials check out, which is all the time. We really don't have to do anything but let the request go through. Theoretically, this is where we could bootstrap the token with certain roles depending on other Shibboleth headers present in the Request object, but we really don't need to do that in our application.

  10. supportsRememberMe(...) - We don't care about supporting "remember me" functionality, so no, thank you!

  11. onLogoutSuccess(...) - This is technically not part of the Guard authentication component, but of the logout success handling. You can see that our ShibbolethAuthenticator class also implements LogoutSuccessHandlerInterface, which allows us to register it as a listener to the logout process. This method is responsible for clearing out Shibboleth authentication data after Symfony has cleared the user token from the system. To do this, we just redirect the user to the proper Shibboleth logout URL, seeding the return parameter with the logout page on the Shibboleth IdP instance.

Configuring the router: shib_login and shib_logout routes

We'll update app/config/routing.yml:



shib_login:
  path: /Shibboleth.sso/Login

shib_logout:
  path: /Shibboleth.sso/Logout

You may be asking yourself why we even bother creating named routes for these when we could just as easily hard-code the values in our guard authenticator.

Great question! The answer is that we want to be able to configure these to point to an internal login form for local development, where actually authenticating with Shibboleth has little value, if it's even possible. This allows us to override the shib_login path to /login within routing_dev.yml so that the application redirects us to the proper login URL in our dev environment.

We really can't point shib_logout to /logout, though, as that would result in an infinite redirection loop. What we do instead is override it in routing_dev.yml to go to a very simple controller action that replicates the external behavior of Shibboleth's logout URL:



...

  public function mockShibbolethLogoutAction(Request $request)
  {
      $return = $request->get('return');

      if (!$return) {
          return new Response("`return` query parameter is required.", Response::HTTP_BAD_REQUEST);
      }

      return $this->redirect($return);
  }
}

Configuring the firewall

This is the last piece of the puzzle; putting all these things together.

# app/config/services.yml
services:
  app.shibboleth_authenticator:
    class: AppBundle\Security\Http\ShibbolethAuthenticator
    arguments:
      - '@router'
      - '%shibboleth_idp_url%'
      - '%shibboleth_remote_user_var%'

---

# app/config/config_prod.yml (our assumption: the prod-environment config)
imports:
  - { resource: config.yml }
  - { resource: security.yml }

---

# app/config/config_dev.yml
imports:
  - { resource: config.yml }
  - { resource: security_dev.yml }

---

# app/config/security.yml
security:
  firewalls:
    main:
      stateless: true
      guard:
        authenticators:
          - app.shibboleth_authenticator

      logout:
        path: /logout
        success_handler: app.shibboleth_authenticator

---

# app/config/security_dev.yml
security:
  firewalls:
    main:
      stateless: false
      form_login:
        login_path: shib_login
        check_path: shib_login
        target_path_parameter: return

The star here is actually just what's in the security.yml file, specifically the guard section; that's how simple it is to support custom authentication via the Guard authentication component! It's just a matter of pointing it to the service and it will hook it up for us.

The logout configuration tells the application to use the /logout path to initiate the logout process, which will eventually call our service to clean up after ourselves.

You'll also notice that we have a security_dev.yml file here that config_dev.yml imports. This isn't how the Symfony 3 framework ships, but it allows us to override the firewall configuration specifically for dev environments. Here, we add the form_login authentication scheme to support logging in via an in-memory user provider (not shown). The authentication guard will redirect us to the in-app login form instead of the Shibboleth IdP during development.

Also note the stateless configuration difference between prod and dev. We want to keep the firewall in production environments stateless; this just means that our guard authenticator will get consulted on every request. This ensures that users will actually be logged out from the application whenever they are logged out of the Shibboleth IdP, e.g. when they quit the web browser. However, we need to configure the firewall to be stateful during development, otherwise the form_login authentication will not work as expected.

Conclusion

I hope I was able to illustrate how versatile the Guard authentication component in Symfony is. What used to require multiple classes to be written and wired together now only requires a single class to implement, and it's very trivial to configure. The Symfony community has really done a great job at improving the Developer Experience (DX).

Passing pre-authenticated requests via environment variables isn't unique to mod_shib; other authentication modules such as mod_auth_kerb, mod_auth_gssapi, and mod_auth_cas do the same. It's such a well-adopted scheme that Symfony has shipped a remote_user authentication listener since 2.6 that makes it very easy to integrate with them. Check it out if your needs are simpler, i.e. no custom authentication-starter/redirect logic.

Mar 06 2017
Mar 06

In the modern world of web and application development, using package managers to pull in dependencies has become a de facto standard. In fact, if you are developing enterprise software and you aren't leveraging package managers, I would challenge you to ask yourself why not.

Drupal was very early to adopt this mindset of pulling in dependencies almost a decade ago when Dmitri Gaskin created an extension for Drush (the Drupal Shell) that added the ability to pull contributed modules by listing them in a make file (I think Dmitri was 12 years old when he wrote the Drush extension, pretty amazing!). Since that time, the make extension has been added to Drush core.

Composer is the current standard for putting together PHP applications, which is why Drupal 8 has gone this direction, so why not use Composer to put together Drupal 7 applications?

First off, I want to clarify what I'm not talking about in this post. I am not advocating that we ditch Drush altogether; I still find value in other aspects of what Drush can do. I am specifically referring to the Make aspect of Drush. Is Drush Make still necessary?

This post is also not about Drupal Console vs. Drush; both CLI tools add tremendous value to the development workflow, and there isn't 100% overlap between them [yet]. I think we still need both tools.

This post is about how I came to see the benefit of switching from Drush Make to Composer. I recommend making this move for Drupal 7 and Drupal 8. This Drupal Composer workflow is not new; it has been around for a while. I just never saw a good reason to make the jump from Drush Make to this new process, until now. We have been asked in the comments on previous posts, "Why haven't you adopted the Composer process?" I now have a good reason to change our process and fully jump on board with Composer for building Drupal 7 applications. We appreciate all the comments we get on our blog; it sharpens everyone involved!

We blogged about the Composer workflow in a previous post on our Drupal 8 build process, but the main motivation there was to be proactive about where PHP application development is going (or already is). We didn't have a real use case for the switch to Composer until now. This post will review how I came to that revelation.

Dependency Managers

I want to make one more point before I make the case for Composer. There are many reasons to use package managers to pull in dependencies; I'll save the details for another blog post. The main reason developers use package managers is so that your project repository does not include libraries and modules that you do not maintain. That is why tools like Composer, npm, Yarn, Bower, and Bundler exist. Hook up your RSS reader to our blog and I'll explain in more detail in a future post, but for now I'll leave this link to the Composer site explaining why committing dependencies into your project repo is a bad idea.

Version Numbers

The #1 reason to make the switch to Composer is the ability to manage version numbers. You may be asking, "What's the big deal, Drush Make handles version numbers as well?" Let me give you a little context on why Composer's approach to version numbers is better.

The Back Story

Recently, in a strategy meeting with one of our enterprise clients, we were discussing how to approach launching hundreds of sites on one Drupal core utilizing multiple installation profiles on top of Acquia Site Factory. Our goal was to figure out how we could sanely manage updating potentially dozens of installation profiles without explicitly defining each version number of the profile being updated. This type of Drupal architecture is also a topic for a future blog post, but for now read Acquia's explanation of why architecting with profiles is a good idea.

As a developer, it is commonplace to lock dependencies down to very specific versions so that we know exactly what we are using and deploying. This is the reason composer.lock, Gemfile.lock, yarn.lock, and npm shrinkwrap exist. We have experienced the pain of unexpected defects in applications due to an obscure dependency changing deep in the dependency tree. Most dependency managers have a very explicit command for updating dependencies (composer update, bundle update, yarn upgrade, respectively), which in turn updates the lock file.

A release manager does not need to know explicitly which version of a dependency (installation profile, module, etc.) to release next; she simply wants the latest stable release.

Herein lies the problem with Drush Make. There are established practices that solve both the developer problem and the release manager problem; they do not exist in Drush Make, but they do exist in Composer and other application development ecosystems. It's a common pattern that has been around for a while: semantic versioning.

Semantic Versioning

If you haven't heard of semantic versioning (semver), go check it out now. Pretty much every package manager I have dealt with has adopted semver. Adopting semver gives the developer, or release manager, the choice of how to update dependencies within their app. There are very distinct numbers in semver for introducing breaking changes, new features, and bug fixes. How does this play into the use cases I mentioned above?

A developer has the ability to specify in the composer.json file specific versions, while leaving the version number flexible to pull in new bug fixes and feature improvements (patch and minor releases). Look at the example below:

{
  "name": "My Drupal Platform",
  ...
  "require": {
	...
    "drupal/drupal": "~7.53.0",
    "drupal/views": "^3.14.0"
  },
  ...
}

The tilde ~ and caret ^ symbols have special meanings when specifying version numbers. The tilde allows only the last digit given to move forward, so ~7.53.0 will pull in new patch releases (7.53.1, 7.53.2, and so on) but never 7.54.0. The caret allows any update that does not change the leftmost non-zero digit, so ^3.14.0 will pull in new minor and patch releases (3.15.0, 3.16.1, and so on) but never a new major version like 4.0.0.

The above example basically says, use the views module at version 3.14, and when version 3.15 comes out, update me to that version when I run composer update.
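If you want to sanity-check how a constraint behaves, the composer/semver library (the same constraint handling Composer uses) can evaluate versions against constraints directly. A quick illustration with made-up version numbers:

<?php

require 'vendor/autoload.php';

use Composer\Semver\Semver;

// ~7.53.0 only allows new patch releases.
var_dump(Semver::satisfies('7.53.2', '~7.53.0')); // bool(true)
var_dump(Semver::satisfies('7.54.0', '~7.53.0')); // bool(false)

// ^3.14.0 allows anything up to, but not including, the next major release.
var_dump(Semver::satisfies('3.15.0', '^3.14.0')); // bool(true)
var_dump(Semver::satisfies('4.0.0', '^3.14.0'));  // bool(false)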

Breaking changes should only be introduced when you update the first number, the major release. Of course, if you completely trust the developer writing the contributed code this system would be enough, but not all developers follow best practice, which is why lock files exist and why you have to explicitly run composer update.

With this system in place, a release manager now only needs to worry about running one command to get the latest stable release of all dependencies. This command could also be hidden behind a nice UI (a CI Server) so all she has to do is push one button to grab all the latest dependencies and push to a testing site for verification.

Understanding everyone's needs

In the past, I didn't have a good reason to move away from Drush Make, because it did the job, and Drush is so much more than Drush Make. The strategy session we had was eye-opening. Understanding the needs from an operations perspective, without jeopardizing the integrity of the application, led us to a problem that the wider development community (not just the PHP community) has already solved. It's very rewarding to solve problems like this, especially when you come to the conclusion that someone has already solved the problem! "We just had to find the path to the water!" (--A.W.)

What do you think about using Drush Make vs Composer for pulling together a Drupal Application? Leave us your thoughts in the comments.

Jul 30 2016
Jul 30

On a recent project, we had to create separate sitemaps for each of the domains that we have set up on the site. We came across some problems that we had to resolve because of the nature of our pURL setup.

##Goals##

  • We want all of the front pages from each subdomain to be added to the sitemap and we are able to set the rules for them on the XMLSitemap settings page.

  • We want to make sure that URLs belonging to the other subdomains no longer show up in the main domain's sitemap.

##Problems##

1) Only On The Primary Domain

The XML sitemap module only creates one sitemap based on the primary domain.

2) Prefixes not Distinguished

Our URLs for nodes are set up so that nodes can be prefixed with our subdomain (pURL modifier), and XMLSitemap doesn't see our prefixes as being different sites. At this point, all nodes are added to every single domain's sitemap.

3) URL Formats

Our URLs are not in the correct format when being added to the sitemap. Our URLs should look like http://subdomain.domain.org/*, however, because we are prefixing them, they show up as http://domain.org/subdomain/*. We want our URLs to look like they are from the right sub-domain and not all coming from the base domain.

##Solution##

We added the ability to create sitemaps for each of the 15 domains by using the XMLSitemap Domain module, which allows us to define a domain for each sitemap, generate the sitemap, and serve it on the correct domain.

We added xmlsitemap-dont-write-empty-element-in-xml-sitemap-file-2545050-3.patch to prevent empty elements from being added to the sitemap.

Then we implemented hook_xmlsitemap_element_alter() in our own custom module; it looks something like this:




<?php

function hook_xmlsitemap_element_alter(array &$element, array $link, $sitemap) {
  $domain = $sitemap->uri['options']['base_url'];
  $url_parts = explode('//', $domain);
  $parts = explode('.', $url_parts[1]);
  $subdomain = array_shift($parts);

  $current_parts = explode('/', $link['loc']);
  $current_prefix = array_shift($current_parts);

  $modifiers = _get_core_modifiers();

  
  if (in_array($subdomain, array_keys($modifiers))) {
    
    
    if ($current_prefix != $subdomain && $current_prefix != '') {
      
      $element = array();
      return $element;
    }
    else {
      
      $pattern = $current_prefix . '/';
      $element['loc'] = $domain . str_replace($pattern, '', $link['loc']);
    }
  }
  else {
    
    
    if (in_array($current_prefix, array_keys($modifiers))) {
      $element = array();
      return $element;
    }
  }
}


function _get_core_modifiers() {
  if (!$cache = cache_get('subdomains')) {
    $result = db_query("SELECT id, value FROM {purl} WHERE provider = 'og_purl_provider'")->fetchAllAssoc('value');
    cache_set('subdomains', $result, 'cache', time() + 86400);
    return $result;
  }
  else {
    return $cache->data;
  }
}
?>

If you have any questions, suggestions, feel free to drop a comment below!

Jul 14 2016
Jul 14

Back in December, Tom Friedhof shared how we set up our Drupal 8 development and build process utilizing Docker. It has been working well in the several months we have used it and worked within its framework. In that time span, however, we experienced a few issues here and there, which led me to come up with an alternative process that keeps the good things we like while resolving the issues we encountered.

First, I'll list some improvements that we'd like to see:

  1. Solve file-syncing issues

    One issue that I keep running into with our development process is that the file syncing stops working when the host machine powers off in the interim. Even though Vagrant's rsync-auto can still detect changes on the host file system and initiate an rsync to push files up into the containers via a mounted volume, the changes do not really appear within the containers themselves. I had a tough time debugging this issue, and the only resolution in sight was to do a vagrant reload -- a time-consuming process as it rebuilds every image and runs them again. Having to do this every morning when I turn on my laptop at work was no fun.

  2. Performant access to Drupal's root

    Previously, we had to mount Drupal's document root onto our host machine using sshfs to explore it, but that's not exactly performant. For example, performing a grep or ag search within file contents under Drupal 8's core takes ~10 seconds or more. Colleagues using PhpStorm report that mounting the Drupal root onto the host system brings the IDE to a crawl while it indexes the files.

  3. Leverage Docker Compose

    Docker Compose is a great tool for managing the life-cycle of Docker containers, especially if you are running multiple applications. I felt that it comes with useful features that we were missing out on because we were just using Vagrant's built-in Docker provider. Also, with the expectation that the Docker for Mac beta will become stable in the not-so-distant future, I'd like the switch to a native Docker development environment to be as smooth as possible. For me, introducing Docker Compose into the equation is the logical first step.

    dlite came to my attention quite recently; it could fulfill the role of Docker for Mac before the stable release, but I haven't gotten the chance to try it yet.

  4. Use Composer as the first-class package manager

    Our previous build primarily used Drush to build the Drupal 8 site and download dependencies, relegating the resolution of some Composer dependencies to Composer Manager. Drush worked really well for us in the past and there is no pressing reason why we should abandon it, but considering that Composer Manager is deprecated for Drupal 8.x and that there is already a Composer project for Drupal sites, I thought it would be a good idea to be more proactive and rethink the way we have been doing Drupal builds, adopting the de facto way of putting together a PHP application. At the moment, Composer is where it's at.

  5. Faster and more efficient builds

    Our previous build utilized a Jenkins server (also run as a container) to perform the necessary steps to deploy changes to Pantheon. Since we were mostly deploying from our local machines anyway, I always thought that running the build steps via docker run ... would probably suffice (and it doesn't incur the overhead of a running Jenkins instance). Ultimately, we decided to explore Platform.sh as our deployment target, so basing our build on Composer became almost imperative, as Drupal 8 support (via Drush) on Platform.sh is still in beta.

With these in mind, I'd like to share our new development environment & build process.

1. File & directory structure

Here is a high-level tree-view of the file structure of the project:

/<project_root>
├── Vagrantfile
├── Makefile
├── .platform/ 
│   └── routes.yaml
├── bin/ 
│   ├── drupal*
│   ├── drush*
│   └── sync-host*
├── docker-compose.yml 
├── environment 
├── src/ 
│   ├── .gitignore
│   ├── .platform.app.yaml 
│   ├── Dockerfile
│   ├── LICENSE
│   ├── bin/ 
│   │   ├── drupal-portal*
│   │   └── drush-portal*
│   ├── composer.json
│   ├── composer.lock
│   ├── custom/
│   ├── phpunit.xml.dist
│   ├── scripts/
│   ├── vendor/
│   └── web/ 
└── zsh/ 
    ├── zshrc
    ├── async.zsh
    └── pure.zsh

2. The Vagrantfile


Vagrant.configure("2") do |config|

  config.vm.box = "debian/jessie64"
  config.vm.network "private_network", ip: "192.168.100.47"

  config.vm.hostname = 'activelamp.dev'

  config.vm.provider :virtualbox do |vb|
    vb.name = "activelamp.com"
    vb.memory = 2048
  end

  config.ssh.forward_agent = true

  config.vm.provision "shell",
    inline: "apt-get install -y zsh && sudo chsh -s /usr/bin/zsh vagrant",
    run: "once"

  config.vm.provision "shell",
    inline: "[ -e /home/vagrant/.zshrc ] && echo '' || ln -s /vagrant/zsh/zshrc /home/vagrant/.zshrc",
    run: "once"

  config.vm.provision "shell",
    inline: "[ -e /usr/local/share/zsh/site-functions/prompt_pure_setup ] && echo '' || ln -s /vagrant/zsh/pure.zsh /usr/local/share/zsh/site-functions/prompt_pure_setup",
    run: "once"

  config.vm.provision "shell",
    inline: "[ -e /usr/local/share/zsh/site-functions/async ] && echo '' || ln -s /vagrant/zsh/async.zsh /usr/local/share/zsh/site-functions/async",
    run: "once"

  if ENV['GITHUB_OAUTH_TOKEN']
    config.vm.provision "shell",
      inline: "sudo sed -i '/^GITHUB_OAUTH_TOKEN=/d' /etc/environment  && sudo bash -c 'echo GITHUB_OAUTH_TOKEN=#{ENV['GITHUB_OAUTH_TOKEN']} >> /etc/environment'"
  end

  
  config.vm.provision :docker

  config.vm.provision :docker_compose, yml: "/vagrant/docker-compose.yml", run: "always", compose_version: "1.7.1"

  config.vm.synced_folder ".", "/vagrant", type: "nfs"
  config.vm.synced_folder "./src", "/mnt/code", type: "rsync", rsync__exclude: [".git/", "src/vendor"]
end

Compare this new manifest to the old one and you will notice that we reduced Vagrant's involvement in defining and managing Docker containers. We are simply using this virtual machine as the Docker host, using the vagrant-docker-compose plugin to provision it with the Docker Compose executable, have it (re)build the images during the provisioning stage, and (re)start the containers on vagrant up.

We are also setting up Vagrant to sync file changes on src/ to /mnt/code/ in the VM via rsync. This directory in the VM will be mounted into the container as you'll see later.

We are also setting up zsh as the login shell for the vagrant user for an improved experience when operating within the virtual machine.

3. The Drupal 8 Build

For now let's zoom in to where the main action happens: the Drupal 8 installation. Let's remove Docker from our thoughts for now and focus on how the Drupal 8 build works.

The src/ directory contains all files that constitute a Drupal 8 Composer project:


/src/
├── composer.json
├── composer.lock
├── phpunit.xml.dist
├── scripts/
│   └── composer/
├── vendor/ # Composer dependencies
│   └── ...
└── web/ # Web root
    ├── .htaccess
    ├── autoload.php
    ├── core/ # Drupal 8 Core
    ├── drush/
    ├── index.php
    ├── modules/
    ├── profiles/
    ├── robots.txt
    ├── sites/
    │   ├── default/
    │   │   ├── .env
    │   │   ├── config/ # Configuration export files
    │   │   │   ├── system.site.yml
    │   │   │   └── ...
    │   │   ├── default.services.yml
    │   │   ├── default.settings.php
    │   │   ├── files/
    │   │   │   └── ...
    │   │   ├── services.yml
    │   │   ├── settings.local.php.dist
    │   │   ├── settings.php
    │   │   └── settings.platform.php
    │   └── development.services.yml
    ├── themes/
    ├── update.php
    └── web.config

The first step of the build is simply executing composer install within src/. Doing so will download all dependencies defined in composer.lock and scaffold files and folders necessary for the Drupal installation to work. You can head over to the Drupal 8 Composer project repository and look through the code to see in depth how the scaffolding works.

3.1 Defining Composer dependencies from custom installation profiles & modules

Since we cannot use the Composer Manager module anymore, we need a different way of letting Composer know that we may have other dependencies defined in other areas in the project. For this let's look at composer.json:

{
    ...
    "require": {
        ...
        "wikimedia/composer-merge-plugin": "^1.3",
        "activelamp/sync_uuids": "dev-8.x-1.x"
    },
    "extra": {
        ...
        "merge-plugin": {
          "include": [
            "web/profiles/activelamp_com/composer.json",
            "web/profiles/activelamp_com/modules/custom/*/composer.json"
          ]
        }
    }
}

We are requiring the wikimedia/composer-merge-plugin and configuring it in the extra section to also read the installation profile's composer.json and the ones in custom modules within it.

We can define the contrib modules that we need for our site from within the installation profile.

src/web/profiles/activelamp_com/composer.json:

{
  "name": "activelamp/activelamp-com-profile",
  "require": {
    "drupal/admin_toolbar": "^8.1",
    "drupal/ds": "^8.2",
    "drupal/page_manager": "^8.1@alpha",
    "drupal/panels": "~8.0",
    "drupal/pathauto": "~8.0",
    "drupal/redirect": "~8.0",
    "drupal/coffee": "~8.0"
  }
}

As we create custom modules for the site, any Composer dependencies in them will be picked up every time we run composer update. This replicates what Composer Manager allowed us to do in Drupal 7. Note, however, that unlike Composer Manager, Composer does not care if a module is enabled or not -- it will always read the module's Composer dependencies and resolve them.

3.2 Drupal configuration

3.2.1 Settings file

Let's peek at what's inside src/web/sites/default/settings.php:




<?php

$settings['container_yamls'][] = __DIR__ . '/services.yml';

$config_directories[CONFIG_SYNC_DIRECTORY] = __DIR__ . '/config';


include __DIR__ . "/settings.platform.php";

$update_free_access = FALSE;
$drupal_hash_salt = '';

$local_settings = __DIR__ . '/settings.local.php';

if (file_exists($local_settings)) {
  require_once($local_settings);
}

$settings['install_profile'] = 'activelamp_com';
$settings['hash_salt'] = $drupal_hash_salt;

Next, let's look at settings.platform.php:



<?php

if (!getenv('PLATFORM_ENVIRONMENT')) {
    return;
}

$relationships = json_decode(base64_decode(getenv('PLATFORM_RELATIONSHIPS')), true);

$database_creds = $relationships['database'][0];

$databases['default']['default'] = [
    'database' => $database_creds['path'],
    'username' => $database_creds['username'],
    'password' => $database_creds['password'],
    'host' => $database_creds['host'],
    'port' => $database_creds['port'],
    'driver' => 'mysql',
    'prefix' => '',
    'collation' => 'utf8mb4_general_ci',
];

We return early from this file if PLATFORM_ENVIRONMENT is not set. Otherwise, we'll parse the PLATFORM_RELATIONSHIPS data and extract the database credentials from it.

For our development environment however, we'll do something different in settings.local.php.dist:



<?php

$databases['default']['default'] = array(
    'database' => getenv('MYSQL_DATABASE'),
    'username' => getenv('MYSQL_USER'),
    'password' => getenv('MYSQL_PASSWORD'),
    'host' => getenv('DRUPAL_MYSQL_HOST'),
    'driver' => 'mysql',
    'port' => 3306,
    'prefix' => '',
);

We are pulling the database values from the environment, as this is how we'll pass data in a Docker run-time. We also append .dist to the file-name because we don't actually want settings.local.php in version control (otherwise, it will mess up the configuration in non-development environments). We will simply rename this file as part of the development workflow. More on this later.

3.2.2 Staged configuration

src/web/sites/default/config/ contains YAML files that constitute the desired Drupal 8 configuration. These files will be used to seed a fresh Drupal 8 installation with configuration specific for the site. As we develop features, we will continually export the configuration entities and place them into this folder so that they are also versioned via Git.

Configuration entities in Drupal 8 are assigned a universally unique ID (a UUID). Because of this, configuration files are typically only meant to be imported into the same (or a clone of the) Drupal site they were exported from. The proper approach is usually getting hold of a database dump of the Drupal site and using that to seed the Drupal 8 installation you plan to import the configuration files into. To streamline the process during development, we wrote the drush command sync-uuids, which updates the UUIDs of the active configuration entities of a non-clone site (i.e. a freshly installed Drupal instance) to match those found in the staged configuration. We packaged it as a Composer package named activelamp/sync_uuids.
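Conceptually, syncing UUIDs boils down to reading each staged configuration file and making the active configuration's UUID match it. Here is a rough sketch of that idea using core's config APIs; this is not the actual sync_uuids implementation, just the gist:

<?php

// Sketch only: make active config UUIDs match the staged (sync) configuration
// so that a subsequent config-import treats them as the same entities.
$sync_storage = \Drupal::service('config.storage.sync');
$config_factory = \Drupal::configFactory();

foreach ($sync_storage->listAll() as $name) {
  $staged = $sync_storage->read($name);
  $active = $config_factory->getEditable($name);

  if (!empty($staged['uuid']) && !$active->isNew()) {
    $active->set('uuid', $staged['uuid'])->save();
  }
}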

The complete steps for the Drupal 8 build are the following:

$ cd src
$ composer install
$ [ -f web/sites/default/settings.local.php ] && : || cp web/sites/default/settings.local.php.dist web/sites/default/settings.local.php
$ drush site-install activelamp_com --account-pass=default-pass -y
$ drush pm-enable config sync_uuids -y
$ drush sync-uuids -y
$ drush config-import -y

These build steps will result in a fresh Drupal 8 installation based on the activelamp_com installation profile with the proper configuration entities from web/sites/default/config. This will be similar to any site built from the same code base, minus any of the actual content. Sometimes that is all you need.

Now let's look at the development workflow utilizing Docker. Let's start with the src/Dockerfile:



FROM php:7.0-apache

RUN apt-get update && apt-get install -y \
  vim \
  git \
  unzip \
  wget \
  curl \
  libmcrypt-dev \
  libgd2-dev \
  libgd2-xpm-dev \
  libcurl4-openssl-dev \
  mysql-client

ENV PHP_TIMEZONE America/Los_Angeles


RUN docker-php-ext-install -j$(nproc) iconv mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
 && docker-php-ext-install -j$(nproc) gd pdo_mysql curl mbstring opcache


RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/local/bin/composer
RUN echo 'export PATH="$PATH:/root/.composer/vendor/bin"' >> $HOME/.bashrc


RUN composer global require drush/drush:8.1.2 drupal/console:0.11.3
RUN $HOME/.composer/vendor/bin/drupal init
RUN echo source '$HOME/.console/console.rc' >> $HOME/.bashrc


RUN echo "date.timezone = \"$PHP_TIMEZONE\"" > /usr/local/etc/php/conf.d/timezone.ini
ARG github_oauth_token

RUN [ -n $github_oauth_token ] && composer config -g github-oauth.github.com $github_oauth_token || echo ''

RUN [ -e /etc/apache2/sites-enabled/000-default.conf ] && sed -i -e "s/\/var\/www\/html/\/var\/www\/web/" /etc/apache2/sites-enabled/000-default.conf || sed -i -e "s/\/var\/www\/html/\/var\/www\/web/" /etc/apache2/apache2.conf


COPY bin/drush-portal /usr/bin/drush-portal
COPY bin/drupal-portal /usr/bin/drupal-portal

COPY . /var/www/
WORKDIR /var/www/

RUN composer --working-dir=/var/www install

The majority of the Dockerfile should be self-explanatory. The important bits are the provisioning of a GitHub OAuth token & adding of the {drupal,drush}-portal executables which are essential for the bin/{drush,drupal} pass-through scripts.

Provisioning a GitHub OAuth token

Sometimes it is necessary to configure Composer to use an OAuth token to authenticate on GitHub's API when resolving dependencies. These tokens must remain private and should not be committed into version control. We declare that our Docker build will take github_oauth_token as a build argument. If present, it will configure Composer to authenticate using it to get around API rate limits. More on this later.

DrupalConsole and Drush pass-through scripts

Our previous build involved opening up an SSH port on the container running Drupal so that we could execute Drush commands remotely. We should already be able to run Drush commands inside the container without SSH access by utilizing docker run; however, the commands can get lengthy. In fact, they will be extra lengthy because we also need to execute them from within the Vagrant machine using vagrant ssh.

Here are a bunch of scripts that make it easier to execute drush and drupal commands from the host machine.

Here are the contents of bin/drupal and bin/drush:

#!/usr/bin/env bash
# bin/drupal
cmd="docker-compose -f /vagrant/docker-compose.yml run --no-deps --rm server drupal-portal $@"
vagrant ssh -c "$cmd"

#!/usr/bin/env bash
# bin/drush
cmd="docker-compose -f /vagrant/docker-compose.yml run --no-deps --rm server drush-portal $@"
vagrant ssh -c "$cmd"

This allows us to run bin/drush ... for Drush commands and bin/drupal ... for DrupalConsole commands, and the arguments will be passed over to the executables in the container.

Here are the contents of src/bin/drupal-portal and src/bin/drush-portal:

#!/usr/bin/env bash
# src/bin/drupal-portal
/root/.composer/vendor/bin/drupal --root=/var/www/web $@

#!/usr/bin/env bash
# src/bin/drush-portal
/root/.composer/vendor/bin/drush --root=/var/www/web $@

The above scripts are added to the container and are essential to making sure drush and drupal commands are applied to the correct directory.

In order for this to work, we actually have to remove Drush and DrupalConsole from the project's composer.json file. This is easily done via the composer remove command.

The docker-compose.yml file

To tie everything together, we have this Compose file:

version: '2'
services:
  server:
    build:
      context: ./src
      args:
        github_oauth_token: ${GITHUB_OAUTH_TOKEN}
    volumes:
      - /mnt/code:/var/www
      - composer-cache:/root/.composer/cache
    env_file: environment
    links:
      - mysql:mysql
    ports:
      - 80:80
  mysql:
    image: 'mysql:5.7.9'
    env_file: environment
    volumes:
      - database:/var/lib/mysql

volumes:
  database: {}
  composer-cache: {}

There are four things of note:

  1. github_oauth_token: ${GITHUB_OAUTH_TOKEN}

    This tells Docker Compose to use the environment variable GITHUB_OAUTH_TOKEN as the github_oauth_token build argument. This, if not empty, will effectively provision Composer with an OAuth token. If you go back to the Vagrantfile, you will see that this environment variable is set in the virtual machine (because docker-compose is run inside it) by appending it to the /etc/environment file. All it needs is for the environment variable to be present in the host environment (OS X) during the provisioning step.

    For example, it can be provisioned via: GITHUB_OAUTH_TOKEN=<your-token> vagrant provision

  2. composer-cache:/root/.composer/cache

    This tells Docker to mount a volume on /root/.composer/cache so that we can persist the contents of this directory between restarts. This ensures that composer install and composer update are fast and don't require re-downloading packages from the web every time we run them. This will drastically improve build speeds.

  3. database:/var/lib/mysql

    This will tell Docker to persist the MySQL data between builds as well. This is so that we don't end up with an empty database whenever we restart the containers.

  4. env_file: environment

    This lets us define all environment variables in a single file, for example:

    MYSQL_USER=activelamp
    MYSQL_ROOT_PASSWORD=root
    MYSQL_PASSWORD=some-secret-passphrase
    MYSQL_DATABASE=activelamp
    DRUPAL_MYSQL_HOST=mysql

    We just configure each service to read environment variables from the same file as they both need these values.

We employ rsync to sync files from the host machine to the VM since it offers by far the fastest file I/O compared to the built-in alternatives in Vagrant + VirtualBox. In the Vagrantfile we specified that we sync src/ to /mnt/code/ in the VM, and we configured Docker Compose to mount this directory into the server container. This means that any file changes we make on OS X will get synced up to /mnt/code, and ultimately into /var/www in the container. However, this only covers changes that originate from the host machine.

To sync changes that originate from the container -- files scaffolded by drupal generate:*, Composer dependencies, and Drupal 8 core itself -- we'll use the fact that our project root is also available at /vagrant as a mount in the VM. We can use rsync to sync files the other way: rsyncing from /mnt/code to /vagrant/src will bring file changes back up to the host machine.

Here is a script I wrote that does an rsync but will ask for confirmation before doing so to avoid overwriting potentially uncommitted work:

#!/usr/bin/env bash

echo "Dry-run..."

args=$@

# Truncated in the original post; the lines below are an approximate reconstruction of the described behavior.
diffs="$(vagrant ssh -- rsync --dry-run --itemize-changes $args | grep '^[><ch]')"
[ -z "$diffs" ] && echo "Nothing to sync." && exit 0

echo "$diffs"
read -p "Proceed with sync? [y/N] " answer
[ "$answer" = "y" ] && vagrant ssh -- rsync $args

We are keeping this generic and not baking in the paths, because we might want to sync arbitrary files to arbitrary destinations.

We can use this script like so:

$ bin/sync-host --recursive --progress --verbose --exclude=".git/" --delete-after /mnt/code/ /vagrant/src/

If the rsync will result in file changes on the host machine, it will bring up a summary of the changes and will ask if you want to proceed or not.

Makefile

We are using make as our task-runner just like in the previous build. This is really useful for encapsulating operations that are common in our workflow:



sync-host:
	bin/sync-host --recursive --progress --verbose --delete-after --exclude='.git/' /mnt/code/ /vagrant/src/

sync:
	vagrant rsync-auto

sync-once:
	vagrant rsync

docker-rebuild:
	vagrant ssh -- docker-compose -f /vagrant/docker-compose.yml build

docker-restart:
	vagrant ssh -- docker-compose -f /vagrant/docker-compose.yml up -d

composer-install:
	vagrant ssh -- docker-compose -f /vagrant/docker-compose.yml run --no-deps --rm server composer --working-dir=/var/www install

composer-update:
	vagrant ssh -- docker-compose -f /vagrant/docker-compose.yml run --no-deps --rm server composer --working-dir=/var/www update --no-interaction



lock-file:
	@vagrant ssh -- cat /mnt/code/composer.lock

install-drupal: composer-install
	vagrant ssh -- '[ -f /mnt/code/web/sites/default/settings.local.php ] && echo '' || cp /mnt/code/web/sites/default/settings.local.php.dist /mnt/code/web/sites/default/settings.local.php'
	-bin/drush si activelamp_com --account-pass=secret -y
	-bin/drush en config sync_uuids -y
	bin/drush sync-uuids -y
	[ $$(ls -l src/web/sites/default/config/*.yml | wc -l) -gt 0 ] && bin/drush cim -y || echo "Config is empty. Skipping import..."

init: install-drupal
	yes | bin/sync-host --recursive --progress --verbose --delete-after --exclude='.git/' /mnt/code/ /vagrant/src/

platform-ssh:
	ssh <project-id>@ssh.us.platform.sh

The Drupal 8 build steps are simply translated to use bin/drush and the actual paths within the virtual machine in the install-drupal task. After cloning the repository for the first time, a developer should just be able to execute make init, sit back with a cup of coffee and wait until the task is complete.

Try it out yourself!

I wrote the docker-drupal-8 Yeoman generator so that you can easily give this a spin. Feel free to use it to look around and see it in action, or even to start off your Drupal 8 sites in the future:

$ npm install -g yo generator-docker-drupal-8
$ mkdir myd8
$ cd myd8
$ yo docker-drupal-8

Just follow through the instructions, and once complete, run vagrant up && make docker-restart && make init to get it up & running.

If you have any questions, suggestions, anything, feel free to drop a comment below!

Jun 15 2016
Jun 15

The web development community can have a long list of requirements, languages, frameworks, constructs and tools that most companies or bosses want you to know.

This list doesn't cover everything you need to know, such as PHP, HTML, CSS, responsive web development principles, and Drupalisms. Here is a list of some of the important skills, concepts, and tools that we think you should know as a beginner Drupal developer.

####1. Version Control

Every developer should have some experience with version control and versioning. Version control is an essential part of the Drupal community: it allows Drupal projects to be easily managed, maintained, and contributed to in a uniform manner. Version control will also most likely be used in-house to manage each client project.

####2. Command Line Interface (CLI)

It isn't necessary to be a CLI ninja; however, being able to work comfortably with a CLI is very important. One of the advantages of using a CLI is productivity: you can quickly automate repetitive tasks, perform tasks without jumping from application to application, and use tools like Drush to do things that would normally take three or more mouse clicks to accomplish.

####3. Package Managers

Using package managers is important to the installation of Drupal. Whether it is installing Sass or Bootstrap from npm or Drush from Composer, it is important to know how package managers work and exactly what you are running before running commands on your computer.

####4. Contributing Back

An important part of the Drupal community is contributing back to projects and core. When you find an issue, such as something that just doesn't seem to work correctly, or you would like to add functionality to Drupal, you should think about giving back to the community. If you find an issue with an existing project or core, check to see if there is an existing ticket on that project. If there isn't, you can create one, and if you can debug and resolve the issue, you can contribute a patch to it. If you don't know exactly how to debug the issue, you can have an open conversation with other developers and maintainers to help resolve it. Contributing and interacting in the community moves Drupal forward.

####5. CSS Preprocessors

Within the last couple of years, there has been a movement toward CSS preprocessors, which add a programmatic feel to CSS2 and CSS3. Some developers are against preprocessors because they add a little more overhead to a project. Whether you use them yourself or not, you may work with a client or framework that uses one, so you should be familiar with how a preprocessor works.

####6. A Framework

Within the Drupal community, there is often talk of headless Drupal. We have seen some interesting ideas come from the adopters of headless Drupal. Headless Drupal setups usually use a framework for the front-end. It may be Angular, Angular 2, Backbone, Ember, or something different; however, most of these frameworks have two things in common: they are written in JavaScript and they almost always make use of templating.

####7. Templating

It is important to know the principles of templating so that you can easily pick up and learn new frameworks. Whether it is Mustache, Twig, Jade, or the templating syntax from within Angular, there are similarities between the syntax and the principles can be applied to each of the languages that will allow you to quickly step from one to the next with a smaller learning curve.

####8. Basic Debugging

Debugging a problem correctly can save you valuable time by getting you directly to the cause of an issue instead of looking over each line of code one by one. It is essential to know how to do basic debugging when working with Drupal. Sometimes the error messages can give you enough information, other times it is necessary to step into Devel or XDebug and step through the project to find the exact location where the code is not working correctly so that you can start to solve the problem.

####9. Unit Testing / Code Testing

Testing your own code is important. When it comes to code testing you have many options: from TDD and BDD approaches where you write unit tests to cover your classes, to linting to make sure you are writing "good", standardized code. Linting can be helpful for writing code that others can easily navigate and sets up some best practices for you to follow.

####10. A CMS

When starting with Drupal, it might be good to have familiarity with a CMS platform before jumping in. There are some advantages to knowing the constructs of other CMS platforms and being familiar with how to work within a platform. However, when working with Drupal it is important to think about the way Drupal works and not be stuck in the way other CMS platforms accomplish goals.

####Conclusion

As a web developer, it is important to know many concepts and technologies. Many companies will not require you to know everything, do everything and be a jack-of-all-trades. In technology, there are so many new tools, frameworks, and languages coming out daily that it is impossible to stay on top of them all. It is far better to get a good base understanding of core web concepts that can be applied to multiple languages, tools, and technologies and then specialize.

Did I miss something you feel is important? Is there something you would like to have seen on the list? Leave a comment below.

Jun 07 2016
Jun 07

Continuing from Evan's blog post on building pages with Paragraphs and writing custom blocks of content as fields, I will walk you through how to create a custom field-formatter in Drupal 8 by example.

A field-formatter is the last piece of code to go with the field-type and the field-widget that Evan wrote about in the previous blog post. While the field-type tells Drupal about what data comprises a field, the field-formatter is responsible for telling Drupal how to display the data stored in the field.

To recap, we defined a hashtag_search field type in the previous blog post whose instances will be composed of two items: the hashtag to search for, and the number of items to display. We want to convert this data into a list of the most recent n tweets with the specified hashtag.

A field-formatter is a Drupal plugin, just like its respective field-type and field-widget. They live in src/Plugin/Field/FieldFormatter/ within the module and are namespaced appropriately: Drupal\my_module\Plugin\Field\FieldFormatter.




namespace Drupal\my_module\Plugin\Field\FieldFormatter;

use Drupal\Core\Field\FieldItemListInterface;
use Drupal\Core\Field\FormatterBase;
use Drupal\Core\Form\FormStateInterface;

/**
 * Renders a hashtag_search field as a list of recent tweets.
 *
 * (The id and label values below are illustrative.)
 *
 * @FieldFormatter(
 *   id = "hashtag_search_formatter",
 *   label = @Translation("Hashtag search"),
 *   field_types = {
 *     "hashtag_search"
 *   }
 * )
 */
class HashtagFormatter extends FormatterBase
{

    public function viewElements(FieldItemListInterface $items, $langcode)
    {
        return array();
    }
}

We tell Drupal important details about our new field-formatter using a @FieldFormatter class annotation. We declare its unique id; a human-readable, translatable label; and a list of field_types that it supports.

The most important method in a field-formatter is the viewElements method. Its responsibility is to return a render array based on the field data passed in as $items (a Drupal\Core\Field\FieldItemListInterface).

Let's look at the code:




use Drupal\my_module\Twitter\TwitterClient;
use Drupal\my_module\Twitter\TweetFormatter;

...

    
    protected $twitter;

    
    protected $formatter;

    ...

    public function viewElements(FieldItemListInterface $items, $langcode)
    {
        $element = [];

        
        foreach ($items as $delta => $item) {

            try {

                
                $results = $this->twitter->search($item->hashtag_search, $item->count);

                
                
                
                $statuses = array_map(function ($s) {
                    $s['formatted_text'] = $this->formatter->toHtml($s['text'], $s['entities']);
                    return $s;
                }, $results['statuses']);

                
                if (!empty($statuses)) {
                    $element[$delta]['header'] = [
                        '#markup' => '<h3>#' . $item->hashtag_search . '</h3>'
                    ];
                }

                foreach ($statuses as $status) {
                    $element[$delta]['status'][] = [
                        '#theme' => 'my_module_status',
                        '#status' => $status
                    ];
                }

            } catch (\Exception $e) {
                $this->logger->error('[:exception]: %message', [
                    ':exception' => get_class($e),
                    '%message' => $e->getMessage(),
                ]);
                continue;
            }
        }

        $element['#attached']['library'][] = 'my_module/twitter_intents';

        return $element;
    }

    ...

See https://github.com/bezhermoso/tweet-to-html-php for how TweetFormatter works. Also, you can find the source-code for the basic Twitter HTTP client here: https://gist.github.com/bezhermoso/5a04e03cedbc77f6662c03d774f784c5

Custom theme renderer

As shown above, each individual tweet is rendered using the my_module_status theme hook. We'll define it in the my_module.module file:




function my_module_theme($existing, $type, $theme, $path) {
  $theme = [];
  $theme['my_module_status'] = array(
    'variables' => array(
      'status' => NULL
    ),
    'template' => 'twitter-status',
    'render element' => 'element',
    'path' => $path . '/templates'
  );

  return $theme;
}

With this, we are telling Drupal to use the template file modules/my_module/templates/twitter-status.html.twig for any render array using my_module_status as its theme.
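The template itself isn't shown in the original post. A minimal sketch of what twitter-status.html.twig might contain, assuming the status variable declared in hook_theme() above and the formatted_text key added in viewElements() (the surrounding markup is purely illustrative):

{# Minimal illustrative sketch of twitter-status.html.twig. #}
<div class="twitter-status">
  {# formatted_text was prepared by TweetFormatter in viewElements(). #}
  {{ status.formatted_text|raw }}
</div>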

Render caching

Drupal 8 does a good job caching content: typically any field formatter is only called once and the resulting collective render arrays are cached for subsequent page loads until the Drupal cache is cleared. We don't really want our Twitter block to be cached for that long. Since it is always great practice to keep caching enabled, we can define how caching is to be applied to our Twitter blocks. This is done by adding cache definitions in the render array before we return it:



      public function viewElements(...)
      {

        ...

        $element['#attached']['library'][] = 'my_module/twitter_intents';
        
        $element['#cache']['max-age'] = 60 * 5;

        return $element;
      }

Here we are telling Drupal to keep the render array in cache for 5 minutes. Drupal will still cache the rest of the page's elements however they want to be cached, but will call our field formatter again -- which pulls fresh data from Twitter -- if 5 minutes have passed since the last time it was called.
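The #cache property accepts more than max-age. If you need the output to vary per page or to be purged explicitly, a hedged sketch could add cache contexts and tags as well (the tag name here is made up for illustration):

// Illustrative only: vary the cached result per URL and attach a custom tag
// that could later be cleared with Cache::invalidateTags(['my_module_twitter']).
$element['#cache'] = [
  'max-age' => 60 * 5,
  'contexts' => ['url'],
  'tags' => ['my_module_twitter'],
];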


Jun 03 2016
Jun 03

On a recent project we had to create a section that is basically a Twitter search for a hashtag. It needed to be usable in different sections of the layout and work the same. Also, we were using the Paragraphs module and came up with a pretty nifty (we think) solution of creating a custom field that solved this particular problem for us. I will walk you through how to create a custom field/widget/formatter for Drupal 8. There are Drupal console commands for generating boilerplate code for this... which I will list before going through each of the methods for the components.

Field Type creation

The first thing to do is create a custom field. In a custom module (here as "my_module") either run drupal:generate:fieldtype or create a file called HashtagSearchItem.php in src/Plugin/Field/FieldType. The basic structure for the class will be:



namespace Drupal\my_module\Plugin\Field\FieldType;

use Drupal\Core\Field\FieldItemBase;
use Drupal\Core\Field\FieldStorageDefinitionInterface;
use Drupal\Core\Form\FormStateInterface;
use Drupal\Core\Language\LanguageInterface;
use Drupal\Core\TypedData\DataDefinition;

/**
 * Stores a hashtag to search for and the number of tweets to pull.
 *
 * (The label, default_widget, and default_formatter values are illustrative.)
 *
 * @FieldType(
 *   id = "hashtag_search",
 *   label = @Translation("Hashtag search"),
 *   default_widget = "hashtag_search_widget",
 *   default_formatter = "hashtag_search_formatter"
 * )
 */
class HashtagSearchItem extends FieldItemBase {

}

Next, implement a few methods that will tell Drupal how our field is structured. Provide default field settings for the field; in this case, the count of tweets to pull. This method returns an array of default settings keyed by setting name.



  
  public static function defaultFieldSettings() {
    return [
      'count' => 6
    ] + parent::defaultFieldSettings();
  }

Then provide the field item's properties. In this case there will be an input for hashtag and a count. Each property will be keyed by the property name and be a DataDefinition defining what the properties will hold.


  
  public static function propertyDefinitions(FieldStorageDefinitionInterface $field_definition) {
    $properties = [];
    $properties['hashtag_search'] = DataDefinition::create('string')
      ->setLabel(t('The hashtag to search for.'));
    $properties['count'] = DataDefinition::create('integer')
      ->setLabel(t('The count of twitter items to pull.'));
    return $properties;
  }

Then provide a schema for the field. These are the database columns backing the properties we created above.


  
  public static function schema(FieldStorageDefinitionInterface $field_definition) {
    return [
      'columns' => [
        'hashtag_search' => [
          'type' => 'varchar',
          'length' => 32,
        ],
        'count' => [
          'type' => 'int',
          'default' => 6
        ]
      ]
    ];
  }
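The widget in the next section calls $item->isEmpty(). The base classes fall back to a generic check across all of the item's properties, so a count that always carries a default value can keep the item from ever reading as empty; an explicit override keyed to the hashtag is a reasonable addition (a sketch, assuming an empty hashtag means an empty item):

  public function isEmpty() {
    // Treat the item as empty when no hashtag has been entered, regardless of
    // the (defaulted) count value.
    $value = $this->get('hashtag_search')->getValue();
    return $value === NULL || $value === '';
  }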

Field widget creation

Next create the widget for the field, which is the actual form element and its settings. Either run drupal:generate:fieldwidget or create a file in src/Plugin/Field/FieldWidget/ called HashtagSearchWidget.php. This is the class skeleton:



namespace Drupal\my_module\Plugin\Field\FieldWidget;

use Drupal\Core\Field\FieldItemListInterface;
use Drupal\Core\Field\WidgetBase;
use Drupal\Core\Form\FormStateInterface;
use Drupal\Core\Render\Element;

/**
 * Widget for entering a hashtag and the number of tweets to pull.
 *
 * (The id and label values below are illustrative.)
 *
 * @FieldWidget(
 *   id = "hashtag_search_widget",
 *   label = @Translation("Hashtag search"),
 *   field_types = {
 *     "hashtag_search"
 *   }
 * )
 */
class HashtagSearchWidget extends WidgetBase {

}

Then implement several methods. Provide a default count of tweets to pull for new fields and the settings form for the field item:


  
  public static function defaultSettings() {
    return [
      'default_count' => 6,
    ] + parent::defaultSettings();
  }

  
  public function settingsForm(array $form, FormStateInterface $form_state) {
    $elements = [];
    $elements['default_count'] = [
      '#type' => 'number',
      '#title' => $this->t('Default count'),
      '#default_value' => $this->getSetting('default_count'),
      '#empty_value' => '',
      '#min' => 1
    ];

    return $elements;
  }

  
  public function settingsSummary() {
    $summary = [];
    $summary[] = t('Default count: @count', array('@count' => $this->getSetting('default_count')));

    return $summary;
  }

Then create the actual form element. Add the hashtag textfield and count number field and wrap it in a fieldset for a better experience:


  
  public function formElement(FieldItemListInterface $items, $delta, array $element, array &$form, FormStateInterface $form_state) {
    $item = $items[$delta];

    $element['hashtag_search'] = [
      '#type' => 'textfield',
      '#title' => $this->t('Hashtag'),
      '#required' => FALSE,
      '#size' => 60,
      '#default_value' => (!$item->isEmpty()) ? $item->hashtag_search : NULL,
    ];

    $element['count'] = [
      '#type' => 'number',
      '#title' => $this->t('Pull count'),
      '#default_value' => (!$item->isEmpty() && $item->count) ? $item->count : $this->getSetting('default_count'),
      '#size' => 2
    ];

    $element += [
      '#type' => 'fieldset',
    ];

    return $element;
  }

In part 2, Bez will show you how to pull the tweets and create a field formatter for the display of the tweets. You can read that post here!


May 07 2016
May 07

Drupal 8 has greatly improved the editor experience out-of-the-box. It comes shipped with CKEditor for WYSIWYG editing. However, D8 ships with a custom build of CKEditor, and it may not have the plugins that you or your client would like to have. I will show you how to add new plugins to the CKEditor that comes with Drupal 8.

Adding plugins with a button

First, create a bare-bones custom module called editor_experience. Files will be added here that will tell Drupal that there is a new CKEditor plugin. Find a plugin to actually install... for the first example I will use the Bootstrap Buttons CKEditor plugin. Place the downloaded plugin inside the libraries directory at the root of the Drupal installation, or use a make file to place it there. Also make sure you have the Libraries module installed (drupal module:download libraries).

Create a file inside the editor_experience module at src/Plugin/CKEditorPlugin called BtButton.php. Add the namespace and the two use statements shown below.





namespace Drupal\editor_experience\Plugin\CKEditorPlugin;

use Drupal\ckeditor\CKEditorPluginBase;
use Drupal\editor\Entity\Editor;

/**
 * Defines the "btbutton" plugin (the label text is illustrative).
 *
 * @CKEditorPlugin(
 *   id = "btbutton",
 *   label = @Translation("Bootstrap Buttons")
 * )
 */
class BtButton extends CKEditorPluginBase {

  ...

}

The annotation @CKEditorPlugin tells Drupal there is a plugin for CKEditor to load. For the id, use the name of the plugin as defined in the plugin.js file that came with the btbutton download. Now we add several methods to our BtButton class.

First method will return false since it is not part of the internal CKEditor build.





public function isInternal() {
  return FALSE;
}

Next method will get the plugin's javascript file.





public function getFile() {
  return libraries_get_path('btbutton') . '/plugin.js';
}

Let Drupal know where your button is. Be sure that the key is set to the name of the plugin. In this case btbutton.





  public function getButtons() {
    return [
      'btbutton' => [
        'label' => t('Bootstrap Buttons'),
        'image' => libraries_get_path('btbutton') . '/icons/btbutton.png'
      ]
    ];
  }

Also implement getConfig() and return an empty array since this plugin has no configurations.
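That implementation is a one-liner; something along these lines satisfies the interface:

  public function getConfig(Editor $editor) {
    // This plugin exposes no configurable settings.
    return [];
  }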

Then go to admin/config/content/formats/manage/basic_html or whatever format you have that uses the CKEditor and pull the Bootstrap button icon down into the toolbar.

Now the button is available for use on the CKEditor!

Adding plugins without a button (CKEditor font)

Some plugins do not come with a button png that allows users to drag the tool into the configuration, so what then?

In order to get a plugin into Drupal that does not have a button, the implementation of getButtons() is a little different. For example to add the Font/Font size dropdowns use image_alternative like below:





 public function getButtons() {
   return [
     'Font' => [
       'label' => t('Font'),
       'image_alternative' => [
         '#type' => 'inline_template',
         '#template' => '{{ font }}',
         '#context' => [
           'font' => t('Font'),
         ],
       ],
     ],
     'FontSize' => [
       'label' => t('Font Size'),
       'image_alternative' => [
         '#type' => 'inline_template',
         '#template' => '{{ font }}',
         '#context' => [
           'font' => t('Font Size'),
         ],
       ],
     ],
   ];
}

Then pull in the dropdown the same way the Bootstrap button plugin was added! Have any questions? Comment below or tweet us @activelamp.


Mar 01 2016
Mar 01

The biggest thing that got me excited with Drupal 8 is the first-class use of services & dependency-injection throughout the entire system. From aspects like routing, templating, managing configuration, querying and persisting data, you name it -- everything is done with services. This is a great thing, because it grants developers a level of flexibility in extending Drupal that is far greater than what Drupal 7 was able to offer.

I'll walk you through a few strategies of extending existing functionality, leveraging the power of Symfony's DependencyInjection component.

Example 1: Inheritance

Since everything in Drupal 8 can be traced back to a method call on an object within the service container, using inheritance to modify core behavior is a valid strategy.

For example, say we want to add the ability to specify the hostname when generating absolute URLs using Drupal\Core\Url::fromRoute. A quick look at this method will tell you that all the work is actually done by the @url_generator service (a lot of Drupal 8's API is set up like this -- a static method that simply delegates work to a service from the container somehow).
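To make that relationship concrete, here is a hedged sketch of the two roughly equivalent calls (the route name and parameters are just examples):

use Drupal\Core\Url;

// The static helper builds a Url value object; the URL string is only
// generated -- by the url_generator service -- when toString() is called.
$url = Url::fromRoute('entity.node.canonical', ['node' => 1], ['absolute' => TRUE])->toString();

// Roughly the same thing, calling the service directly:
$url = \Drupal::service('url_generator')
  ->generateFromRoute('entity.node.canonical', ['node' => 1], ['absolute' => TRUE]);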

drupal/core/core.services.yml can tell us a lot about what makes up the @url_generator service:

services:
  ...
  url_generator:
    class: 'Drupal\Core\Render\MetadataBubblingUrlGenerator'
    arguments: ['@url_generator.non_bubbling', '@renderer']
    calls:
      - [setContext, ['@?router.request_context']]

  url_generator.non_bubbling:
    class: 'Drupal\Core\Routing\UrlGenerator'
    arguments: ['@router.route_provider', '@path_processor_manager', '@route_processor_manager', '@request_stack', '%filter_protocols%']
    public: false
    calls:
      - [setContext, ['@?router.request_context']]
  ...

A peek at Drupal\Core\Render\MetadataBubblingUrlGenerator will actually show that it just delegates the core work of constructing a URL to the @url_generator.non_bubbling service.

To add the desired ability to the URL generator, we will have to write a new class that handles the extra logic:



namespace Drupal\foo\Routing;

use Drupal\Core\Routing\UrlGenerator;

class HostOverridingUrlGenerator extends UrlGenerator
{
    public function generateFromRoute(
      $name,
      $parameters = array(),
      $options = array(),
      $collected_bubbleable_metadata = NULL
    ) {

        
        $hasHostOverride = array_key_exists('host', $options) && $options['host'];

        if ($hasHostOverride) {
            
            $originalHost = $this->context->getHost();
            $this->context->setHost((string) $options['host']);
            $options['absolute'] = true;
        }

        $result = parent::generateFromRoute($name, $parameters, $options, $collected_bubbleable_metadata);

        if ($hasHostOverride) {
            
            $this->context->setHost($originalHost);
        }

        return $result;
    }
}

We now have a new type that is capable of overriding the hosts in absolute URLs that are generated. The next step is to tell Drupal to use this in favor of the original. We can do that by manipulating the existing definition through a service provider.

Service Providers

Drupal will look for a service provider in your module directory and will hand it the container-builder for it to be manipulated. Telling Drupal to use our new class is not done by editing drupal/core/core.services.yml but by modifying the definition through the service provider:






namespace Drupal\foo;

use Drupal\Core\DependencyInjection\ServiceProviderInterface;
use Drupal\Core\DependencyInjection\ContainerBuilder;

class FooServiceProvider  implements ServiceProviderInterface
{
    public function register(ContainerBuilder $container)
    {
        $urlGenerator = $container->getDefinition('url_generator.non_bubbling');
        $urlGenerator->setClass(__NAMESPACE__ . '\Routing\HostOverridingUrlGenerator');
    }
}

Done! Once the foo module is installed, your service provider will now have the chance to modify the service container as it sees fit.
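With the override in place, a usage sketch might look like the following; note that 'host' is the custom option handled by HostOverridingUrlGenerator above, not a core option, and the values are illustrative:

use Drupal\Core\Url;

// Generate an absolute URL pointing at a different hostname.
$url = Url::fromRoute('entity.node.canonical', ['node' => 1], [
  'host' => 'campaign.example.org',
])->toString();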

Example 2: Decorators

In a nutshell, a decorator modifies the functionality of an existing object not by extending its type but by wrapping the object and putting logic around the existing functionality.

This distinction between extending the object's type versus wrapping it seems trivial, but in practice it can be very powerful. It means you can change an object's behavior at run-time, with the added benefit of not having to care what the object's type is. An update to Drupal core could change the type (a.k.a. the class) of a service at any point; as long as the substitute object still respects the agreed contract imposed by the interface, the decorator will still work.

Say another module declared a service named @twitter_feed and we want to cache the result of some expensive method call:



namespace Drupal\foo\Twitter;

use Drupal\some_twitter_module\Api\TwitterFeedInterface;
use Drupal\Core\Cache\CacheBackendInterface;

class CachingTwitterFeed implements TwitterFeedInterface
{
      protected $feed;

      protected $cache;

      public function __construct(TwitterFeedInterface $feed, CacheBackendInterface $cache)
      {
          $this->feed = $feed;
          $this->cache = $cache;
      }

      public function getLatestTweet($handle)
      {
          // Serve the cached copy if we have one.
          if ($cached = $this->cache->get($handle)) {
            return $cached->data;
          }

          // Otherwise ask the decorated service and cache the result for 5 hours.
          $tweet = $this->feed->getLatestTweet($handle);
          $this->cache->set($handle, $tweet, time() + 60 * 60 * 5);

          return $tweet;
      }

      public function setAuthenticationToken($token)
      {
          // Simply delegate to the decorated service.
          return $this->feed->setAuthenticationToken($token);
      }
}

To tell Drupal to decorate a service, you can do so in YAML notation:



services:
  cached.twitter_feed:
    class: 'Drupal\foo\Twitter\CachingTwitterFeed'
    decorates: 'twitter_feed'
    arguments: ['@cached.twitter_feed.inner', '@cache.twitter_feed']

With this in place, all references and services requiring @twitter_feed will get our instance that does caching instead.

The original @twitter_feed service will be renamed to @cached.twitter_feed.inner (the decorating service's id suffixed with .inner) by convention.

Decorators vs sub-classes

Decorators are perfect for when you need to add logic around existing logic. One beauty of decorators is that a decorator doesn't need to know the actual type of the object it wraps. It only needs to know what methods the object responds to, i.e. it only cares about the object's interface and not much else.

Another beautiful thing is that you can effectively modify the object's behavior at run-time:




$feed = new TwitterFeed();
$feed->setAuthenticationToken($token);

if ($isProduction) { 
  $feed = new CachingTwitterFeed($feed, $cache);
}

$feed->getLatestTweet(...);

or:



$feed = new TwitterFeed();
$cachedFeed = new CachingTwitterFeed($feed, $cache);

$feed->setAuthenticationToken($token);

$feed->getLatestTweet(...) 
$cachedFeed->getLatestTweet(...) 

$feed->setAuthenticationToken($newToken); 

Compare that to having the caching version as a sub-class: you would need to instantiate two objects and (re)authenticate both.

And lastly, one cool thing about decorators is you can layer them with greater flexibility:



$feed = new TwitterFeed();

$feed = new CachingTwitterFeed($feed, $cache); 
$feed = new LoggingTwitterFeed($feed); 
$feed = new CachingTwitterFeed(new LoggingTwitterFeed($feed), $cache); 

These are contrived examples but I hope you get the gist.

However there are cases where using decorators just wouldn't cut it (for example, if you need to access a protected property or method, which you can't do with decorators). I'd say that if you can accomplish the necessary modifications using only an object's public API, think about achieving it using decorator(s) instead and see if it's advantageous.

The HostOverridingUrlGenerator can actually be written as a decorator, as we can achieve the required operations using the object's public API only -- instead of using $this->context, we can use $this->inner->getContext(), etc.

In fact, the @url_generator service, an instance of Drupal\Core\Render\MetadataBubblingUrlGenerator, is a decorator in itself. The host override behavior can be modelled as:

new MetadataBubblingUrlGenerator(new HostOverridingUrlGenerator(new UrlGenerator(...)), ...)

One down-side of using decorators is you will end up with a bunch of boilerplate logic of simply passing parameters to the inner object without doing much else. It will also break if there are any changes to the interface, although this shouldn't happen until a next major version bump.

Composites

You might want to create a new service whose functionality (or part thereof) involves the application of multiple objects that it manages.

For example:




namespace Drupal\foo\Processor;

class CompositeProcessor implements ProcessorInterface
{
    
    protected $processors = array();

    public function process($value)
    {
        
        foreach ($this->processors as $processor) {
            $value = $processor->process($value);
        }

        return $value;
    }

    public function addProcessor(ProcessorInterface $processor)
    {
        $this->processors[] = $processor;
    }
}

Composite objects like this are quite common, and there are a bunch of them in Drupal 8 as well. Traditionally, an object that wants to be added to the collection must be declared as a tagged service. Tagged services are then gathered together during a compiler pass and added to the composite object's definition.
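For comparison, the traditional approach would be a small compiler pass along these lines (a sketch of the generic Symfony pattern, not something you need to write in Drupal 8 thanks to the tag shown below):

use Symfony\Component\DependencyInjection\Compiler\CompilerPassInterface;
use Symfony\Component\DependencyInjection\ContainerBuilder;
use Symfony\Component\DependencyInjection\Reference;

class ProcessorCompilerPass implements CompilerPassInterface
{
    public function process(ContainerBuilder $container)
    {
        $composite = $container->getDefinition('the_processor');

        // Wire every service tagged 'awesome_processor' into the composite.
        foreach ($container->findTaggedServiceIds('awesome_processor') as $id => $tags) {
            $composite->addMethodCall('addProcessor', [new Reference($id)]);
        }
    }
}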

In Drupal 8, you don't need to code the compiler pass logic anymore. You can just tag your composite service as a service_collector, like so:




services:
  the_processor:
    class: 'Drupal\foo\Processor\CompositeProcessor'
    tags:
      - { name: 'service_collector', tag: 'awesome_processor', call: 'addProcessor' }

  bar_processor:
    class: 'Drupal\foo\Processor\BarProcessor'
    arguments: [ '@bar' ]
    tags:
      - { name: 'awesome_processor' }

  foo_processor:
    class: 'Drupal\foo\Processor\FooProcessor'
    arguments: [ '@foo' ]
    tags:
      - { name: 'awesome_processor' }

With this configuration, the service container will make sure that @bar_processor and @foo_processor are injected into the @the_processor service whenever you ask for it. This also allows other modules to hook into your service by tagging their service with awesome_processor, which is great.

Conclusion

These are just a few OOP techniques that the addition of a dependency injection component has opened up to Drupal 8 development. These are things that PHP developers using Symfony2, Laravel, ZF2 (using its own DI component, Zend\Di), and many others have enjoyed in the recent years, and they are now ripe for the taking by the Drupal community.

For more info on Symfony's DependencyInjection component, head to http://symfony.com/doc/2.7/components/dependency_injection/index.html. I urge you to read up on manipulating the container-builder in code as there are a bunch of things you can do in service providers that you can't achieve by using the YAML notation.

If you have any questions, comments, criticisms, or some insights to share, feel free to leave a comment! Happy coding!


Jan 20 2016
Jan 20

This post is part 4 in the series ["Hashing out a docker workflow"]({% post_url 2015-06-04-hashing-out-docker-workflow %}). For background, check out my previous posts.

My previous posts talked about getting your local environment set up using the Drupal Docker image with Vagrant. It's now time to bake a Docker image with our custom application code within the container, so that we can deploy containers implementing the immutable server pattern. One of the main reasons we started venturing down the Docker path was to achieve deployable, fully baked containers that are ready to run in whatever environment you put them in, similar to what we've done in the past with Packer, as I've mentioned in a previous post.

Review

The instructions in this post are assuming you followed my [previous post]({% post_url 2015-09-22-local-docker-development-with-vagrant %}) to get a Drupal environment set up with the custom "myprofile" profile. In that post we brought up a Drupal environment by just referencing the already built Drupal image on DockerHub. We are going to use that same Docker image, and add our custom application to that.

All the code that I'm going to show below can be found in this repo on Github.

Putting the custom code into the container

We need to create our own image. Create a Dockerfile in our project that extends the Drupal image that we are pulling down.

Create a file called Dockerfile in the root of your project that looks like the following:

FROM drupal:7.41

ADD drupal/profiles/myprofile /var/www/html/profiles/myprofile

We are basically using everything from the Drupal image, and adding our installation profile to the profiles directory of the document root.

This is a very simplistic approach; typically there are more steps than just copying files over. In more complex scenarios, you will likely run some sort of build within the Dockerfile as well, such as Gulp, Composer, or Drush Make.

Setting up Jenkins

We now need to set up a Jenkins server that will check out our code and run docker build and docker push. Let's set up a local jenkins container on our Docker host to do this.

Open up the main Vagrantfile in the project root and add another container to the file like the following:






Vagrant.configure(2) do |config|

config.vm.define "jenkins" do |v|
v.vm.provider "docker" do |d|
d.vagrant_vagrantfile = "./host/Vagrantfile"
d.build_dir = "./Dockerfiles/jenkins"
d.create_args = ['--privileged']
d.remains_running = true
d.ports = ["8080:8080"]
d.name = "jenkins-container"
end
end

config.vm.define "drupal" do |v|
config.vm.provider "docker" do |docker|
docker.vagrant_vagrantfile = "host/Vagrantfile"
docker.image = "drupal"
docker.create_args = ['--volume="/srv/myprofile:/var/www/html/profiles/myprofile"']
docker.ports = ['80:80']
docker.name = 'drupal-container'
end
end
end

Two things to notice from the jenkins container definition: 1) the Dockerfile for this container is in the Dockerfiles/jenkins directory, and 2) we are passing the --privileged argument when the container is run so that our container has all the capabilities of the Docker host. We need special access to be able to run Docker within Docker.

Let's create the Dockerfile:

$ mkdir -p Dockerfiles/jenkins
$ cd !$
$ touch Dockerfile

Now open up that Dockerfile and install Docker onto this Jenkins container:

FROM jenkins:1.625.2

USER root


RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D


RUN echo "deb http://apt.dockerproject.org/repo debian-jessie main" > /etc/apt/sources.list.d/docker.list

VOLUME /var/lib/docker

RUN apt-get update && \
  apt-get -y install \
    docker-engine

ADD ./dockerjenkins.sh /usr/local/bin/dockerjenkins.sh
RUN chmod +x /usr/local/bin/dockerjenkins.sh

ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/dockerjenkins.sh" ]

We are using a little script that is found in The Docker Book as our entry point to start the docker daemon, as well as Jenkins. It also does some stuff on the filesystem to ensure cgroups are mounted correctly. If you want to read more about running Docker in Docker, go check out this article

Boot up the new container

Before we boot this container up, edit your host Vagrantfile and set up a port forward so that host port 8080 points to guest port 8080:






Vagrant.configure(2) do |config|
config.vm.box = "ubuntu/trusty64"
config.vm.hostname = "docker-host"
config.vm.provision "docker"
config.vm.network :forwarded_port, guest: 80, host: 4567
config.vm.network :forwarded_port, guest: 8080, host: 8080
config.vm.synced_folder '../drupal/profiles/myprofile', '/srv/myprofile', type: 'rsync'
end

Now bring up the new container:

$ vagrant up jenkins

or if you've already brought it up once before, you may just need to run reload:

$ vagrant reload jenkins

You should now be able to hit Jenkins at the URL http://localhost:8080

Jenkins Dashboard

Install the git plugins for Jenkins

Now that you have Jenkins up and running, we need to install the git plugins. Click on the "Manage Jenkins" link in the left navigation, then click "Manage Plugins" in the list given to you, and then click on the "Available" Tab. Filter the list with the phrase "git client" in the filter box. Check the two boxes to install plugins, then hit "Download now and install after restart".

Jenkins Plugin Install

On the following screen, check the box to Restart Jenkins when installation is complete.

Setup the Jenkins job

It's time to set up Jenkins. If you've never set up a Jenkins job, here is a quick crash course.

  1. Click the New Item link in the left navigation. Name your build job, and choose Freestyle project. Click Ok. New Build Job
  2. Configure the git repo. We are going to configure Jenkins to pull code directly from your repository and build the Docker image from that. Add the git Repo
  3. Add the build steps. Scroll down toward the bottom of the screen and click the arrow next to Add build step and choose Execute Shell. We are going to add three build steps as shown below. First we build the Docker image with docker build -t="tomfriedhof/docker_blog_post" . (notice the trailing dot) and give it a name with the -t parameter, then we login to DockerHub, and finally push the newly created image that was created to DockerHub. Jenkins Build Steps
  4. Hit Save, then on the next screen hit the button that says Build Now

If everything went as planned, you should have a new Docker image posted on DockerHub: https://hub.docker.com/r/tomfriedhof/docker_blog_post/

Wrapping it up

There you have it: we now have an automated build that will create and push Docker images to DockerHub. You can extend this Jenkins job to poll your GitHub repository so that the build runs automatically anytime something changes in the tracking repo.

As another option, if you don't want to go through all the trouble of setting up your own Jenkins server just to do what I just showed you, DockerHub can do this for you. Go checkout their article on how to setup automated builds with Docker.

Now that we have a baked container with our application code within it, the next step is to deploy the container. That is the next post in this series. Stay tuned!


Dec 02 2015
Dec 02

Now that the release of Drupal 8 is finally here, it is time to adapt our Drupal 7 build process to Drupal 8, while utilizing Docker. This post will take you through how we construct sites on Drupal 8 using dependency managers on top of Docker with Vagrant.

Keep a clean upstream repo

Over the past 3 or 4 years developing websites has changed dramatically with the increasing popularity of dependency managers such as Composer, Bundler, npm, Bower, etc... amongst other tools. Drupal even has its own system that can handle dependencies, called Drush, albeit it is more than just a dependency manager for Drupal.

With all of these tools at our disposal, it makes it very easy to include code from other projects in our application while not storing any of that code in the application code repository. This concept dramatically changes how you would typically maintain a Drupal site, since the typical way to manage a Drupal codebase is to have the entire Drupal Docroot, including all dependencies, in the application code repository. Having everything in the docroot is fine, but you gain so much more power using dependency managers. You also lighten up the actual application codebase when you utilize dependency managers, because your repo only contains code that you wrote. There are tons of advantages to building applications this way, but I have digressed, this post is about how we utilize these tools to build Drupal sites, not an exhaustive list of why this is a good idea. Leave a comment if you want to discuss the advantages / disadvantages of this approach.

Application Code Repository

We've got a lot going on in this repository. We won't dive too deep into the weeds looking at every single file, but I will give a high level overview of how things are put together.

Installation Automation (begin with the end in mind)

The simplicity in this process is that when a new developer needs to get a local development environment for this project, they only have to execute two commands:

$ vagrant up --no-parallel
$ make install

Within minutes a new development environment is constructed with Virtualbox and Docker on the developer's machine, so that they can immediately start contributing to the project. The first command boots up 3 Docker containers -- a webserver, mysql server, and jenkins server. The second command invokes Drush to build the document root within the webserver container and then installs Drupal.

We also utilize one more command, kept running in a separate terminal window, to keep files synced from our host machine to the Drupal 8 container.

$ vagrant rsync-auto drupal8

Breaking down the two installation commands

vagrant up --no-parallel

If you've read any of [my]({% post_url 2015-06-04-hashing-out-docker-workflow %}) [previous]({% post_url 2015-07-19-docker-with-vagrant %}) [posts]({% post_url 2015-09-22-local-docker-development-with-vagrant %}), I'm a fan of using Vagrant with Docker. I won't go into detail about how the environment is getting set up. You can read my previous posts on how we used Docker with Vagrant. For completeness, here is the Vagrantfile and Dockerfile that vagrant up reads to setup the environment.

Vagrantfile






require 'fileutils'

MYSQL_ROOT_PASSWORD="root"

unless File.exists?("keys")
Dir.mkdir("keys")
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
File.open("keys/id_rsa.pub", 'w') { |file| file.write(ssh_pub_key) }
end

unless File.exists?("Dockerfiles/jenkins/keys")
Dir.mkdir("Dockerfiles/jenkins/keys")
FileUtils.copy("#{Dir.home}/.ssh/id_rsa", "Dockerfiles/jenkins/keys/id_rsa")
end

Vagrant.configure("2") do |config|

config.vm.define "mysql" do |v|

    v.vm.provider "docker" do |d|
      d.vagrant_machine = "apa-dockerhost"
      d.vagrant_vagrantfile = "./host/Vagrantfile"
      d.image = "mysql:5.7.9"
      d.env = { :MYSQL_ROOT_PASSWORD => MYSQL_ROOT_PASSWORD }
      d.name = "mysql-container"
      d.remains_running = true
      d.ports = [
        "3306:3306"
      ]
    end

end

config.vm.define "jenkins" do |v|

    v.vm.synced_folder ".", "/srv", type: "rsync",
        rsync__exclude: get_ignored_files(),
        rsync__args: ["--verbose", "--archive", "--delete", "--copy-links"]

    v.vm.provider "docker" do |d|
      d.vagrant_machine = "apa-dockerhost"
      d.vagrant_vagrantfile = "./host/Vagrantfile"
      d.build_dir = "./Dockerfiles/jenkins"
      d.name = "jenkins-container"
      
      d.volumes = [
          "/home/rancher/.composer:/root/.composer",
          "/home/rancher/.drush:/root/.drush"
      ]
      d.remains_running = true
      d.ports = [
          "8080:8080"
      ]
    end

end

config.vm.define "drupal8" do |v|

    v.vm.synced_folder ".", "/srv/app", type: "rsync",
      rsync__exclude: get_ignored_files(),
      rsync__args: ["--verbose", "--archive", "--delete", "--copy-links"],
      rsync__chown: false

    v.vm.provider "docker" do |d|
      d.vagrant_machine = "apa-dockerhost"
      d.vagrant_vagrantfile = "./host/Vagrantfile"
      d.build_dir = "."
      d.name = "drupal8-container"
      d.remains_running = true
      
      d.volumes = [
        "/home/rancher/.composer:/root/.composer",
        "/home/rancher/.drush:/root/.drush"
      ]
      d.ports = [
        "80:80",
        "2222:22"
      ]
      d.link("mysql-container:mysql")
    end

end

end

def get_ignored_files()
ignore_file = ".rsyncignore"
ignore_array = []

if File.exists? ignore_file and File.readable? ignore_file
File.read(ignore_file).each_line do |line|
ignore_array << line.chomp
end
end

ignore_array
end

One of the cool things to point out that we are doing in this Vagrantfile is setting up a VOLUME for the composer and drush cache that should persist beyond the life of the container. When our application container is rebuilt we don't want to download 100MB of composer dependencies every time. By utilizing a Docker VOLUME, that folder is mounted to the actual Docker host.

Dockerfile (drupal8-container)

FROM ubuntu:trusty


ENV PROJECT_ROOT /srv/app
ENV DOCUMENT_ROOT /var/www/html
ENV DRUPAL_PROFILE=apa_profile


RUN apt-get update
RUN apt-get install -y \
	vim \
	git \
	apache2 \
	php-apc \
	php5-fpm \
	php5-cli \
	php5-mysql \
	php5-gd \
	php5-curl \
	libapache2-mod-php5 \
	curl \
	mysql-client \
	openssh-server \
	phpmyadmin \
	wget \
	unzip \
	supervisor
RUN apt-get clean


RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/local/bin/composer


RUN mkdir /root/.ssh && chmod 700 /root/.ssh && touch /root/.ssh/authorized_keys && chmod 600 /root/.ssh/authorized_keys
RUN echo 'root:root' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN mkdir /var/run/sshd && chmod 0755 /var/run/sshd
RUN mkdir -p /root/.ssh
COPY keys/id_rsa.pub /root/.ssh/authorized_keys
RUN chmod 600 /root/.ssh/authorized_keys
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd


RUN composer global require drush/drush:8.0.0-rc3
RUN ln -nsf /root/.composer/vendor/bin/drush /usr/local/bin/drush


RUN mv /root/.composer /tmp/


RUN sed -i 's/display_errors = Off/display_errors = On/' /etc/php5/apache2/php.ini
RUN sed -i 's/display_errors = Off/display_errors = On/' /etc/php5/cli/php.ini


RUN sed -i 's/AllowOverride None/AllowOverride All/' /etc/apache2/apache2.conf
RUN a2enmod rewrite


RUN echo '[program:apache2]\ncommand=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"\nautorestart=true\n\n' >> /etc/supervisor/supervisord.conf
RUN echo '[program:sshd]\ncommand=/usr/sbin/sshd -D\n\n' >> /etc/supervisor/supervisord.conf





WORKDIR $PROJECT_ROOT
EXPOSE 80 22
CMD exec supervisord -n

We have xdebug commented out in the Dockerfile, but it can easily be uncommented if you need to step through code. Simply uncomment the two RUN commands and run vagrant reload drupal8

make install

We utilize a Makefile in all of our projects whether it be Drupal, nodejs, or Laravel. This is so that we have a similar way to install applications, regardless of the underlying technology that is being executed. In this case make install is executing a drush command. Below is the contents of our Makefile for this project:

all: init install

init:
	vagrant up --no-parallel

install:
	bin/drush @dev install.sh

rebuild:
	bin/drush @dev rebuild.sh

clean:
	vagrant destroy drupal8
	vagrant destroy mysql

mnt:
	sshfs -C -p 2222 [email protected]:/var/www/html docroot

What this command does is ssh into the drupal8-container, utilizing drush aliases and drush shell aliases.

install.sh

The make install command executes a file, within the drupal8-container, that looks like this:

#!/usr/bin/env bash

echo "Moving the contents of composer cache into place..."
mv /tmp/.composer/* /root/.composer/

PROJECT_ROOT=$PROJECT_ROOT DOCUMENT_ROOT=$DOCUMENT_ROOT $PROJECT_ROOT/bin/rebuild.sh

echo "Installing Drupal..."
cd $DOCUMENT_ROOT && drush si $DRUPAL_PROFILE --account-pass=admin -y
chgrp -R www-data sites/default/files
rm -rf ~/.drush/files && cp -R sites/default/files ~/.drush/

echo "Importing config from sync directory"
drush cim -y

You can see on line 6 of install.sh file that it executes a rebuild.sh file to actually build the Drupal document root utilizing Drush Make. The reason for separating the build from the install is so that you can run make rebuild without completely reinstalling the Drupal database. After the document root is built, the drush site-install apa_profile command is run to actually install the site. Notice that we are utilizing Installation Profiles for Drupal.

We utilize installation profiles so that we can define modules available for the site, as well as specify default configuration to be installed with the site.

We work hard to achieve the ability to have Drupal install with all the necessary configuration in place out of the gate. We don't want to be passing around a database to get up and running with a new site.

We utilize the Devel Generate module to create the initial content for sites while developing.

rebuild.sh

The rebuild.sh file is responsible for building the Drupal docroot:

#!/usr/bin/env bash

if [ -d "$DOCUMENT_ROOT/sites/default/files" ]
then
    echo "Moving files to ~/.drush/..."
    mv $DOCUMENT_ROOT/sites/default/files /root/.drush/
fi

echo "Deleting Drupal and rebuilding..."
rm -rf $DOCUMENT_ROOT

echo "Downloading contributed modules..."
drush make -y $PROJECT_ROOT/drupal/make/dev.make $DOCUMENT_ROOT

echo "Symlink profile..."
ln -nsf $PROJECT_ROOT/drupal/profiles/apa_profile $DOCUMENT_ROOT/profiles/apa_profile

echo "Downloading Composer Dependencies..."
cd $DOCUMENT_ROOT && php $DOCUMENT_ROOT/modules/contrib/composer_manager/scripts/init.php && composer drupal-update

echo "Moving settings.php file to $DOCUMENT_ROOT/sites/default/..."
rm -f $DOCUMENT_ROOT/sites/default/settings*
cp $PROJECT_ROOT/drupal/config/settings.php $DOCUMENT_ROOT/sites/default/
cp $PROJECT_ROOT/drupal/config/settings.local.php $DOCUMENT_ROOT/sites/default/
ln -nsf $PROJECT_ROOT/drupal/config/sync $DOCUMENT_ROOT/sites/default/config
chown -R www-data $PROJECT_ROOT/drupal/config/sync

if [ -d "/root/.drush/files" ]
then
    cp -Rf /root/.drush/files $DOCUMENT_ROOT/sites/default/
    chmod -R g+w $DOCUMENT_ROOT/sites/default/files
    chgrp -R www-data sites/default/files
fi

This file essentially downloads Drupal using the dev.make drush make file. It then runs composer drupal-update to download any composer dependencies in any of the modules. We use the composer manager module to help with composer dependencies within the Drupal application.

Running the drush make dev.make includes two other Drush Make files, apa-cms.make (the application make file) and drupal-core.make. Only dev dependencies should go in dev.make. Application dependencies go into apa-cms.make. Any core patches that need to be applied go into drupal-core.make.

Our Jenkins server builds the prod.make file, instead of dev.make. Any production specific modules would go in prod.make file.

Our make files for this project look like this so far:

dev.make

core: "8.x"

api: 2

defaults:
  projects:
    subdir: "contrib"

includes:
  - "apa-cms.make"

projects:
  devel:
    version: "1.x-dev"

apa-cms.make

core: "8.x"

api: 2

defaults:
projects:
subdir: "contrib"

includes:

- drupal-core.make

projects:
address:
version: "1.0-beta2"

composer_manager:
version: "1.0-rc1"

config_update:
version: "1.x-dev"

ctools:
version: "3.0-alpha17"

draggableviews:
version: "1.x-dev"

ds:
version: "2.0"

features:
version: "3.0-alpha4"

field_collection:
version: "1.x-dev"

field_group:
version: "1.0-rc3"

juicebox:
version: "2.0-beta1"

layout_plugin:
version: "1.0-alpha19"

libraries:
version: "3.x-dev"

menu_link_attributes:
version: "1.0-beta1"

page_manager:
version: "1.0-alpha19"

pathauto:
type: "module"
download:
branch: "8.x-1.x"
type: "git"
url: "http://github.com/md-systems/pathauto.git"

panels:
version: "3.0-alpha19"

token:
version: "1.x-dev"

zurb_foundation:
version: "5.0-beta1"
type: "theme"

libraries:
juicebox:
download:
type: "file"
url: "https://www.dropbox.com/s/hrthl8t1r9cei5k/juicebox.zip?dl=1"

(once this project goes live, we will pin the version numbers)

drupal-core.make

core: "8.x"

api: 2

projects:
  drupal:
    version: 8.0.0
    patch:
      - https://www.drupal.org/files/issues/2611758-2.patch

prod.make

core: "8.x"

api: 2

includes:

- "apa-cms.make"

projects:
apa_profile:
type: "profile"
subdir: "."
download:
type: "copy"
url: "file://./drupal/profiles/apa_profile"

At the root of our project we also have a Gemfile, specifically to install the compass compiler along with various sass libraries. We install these tools on the host machine, and "watch" those directories from the host. vagrant rsync-auto watches any changed files and rsyncs them to the drupal8-container.

bundler

From the project root, installing these dependencies and running a compass watch is simple:

$ bundle
$ bundle exec compass watch path/to/theme

bower

We pull in any 3rd party front-end libraries such as Foundation, Font Awesome, etc... using Bower. From within the theme directory:

$ bower install

There are a few things we do not commit to the application repo, as a result of the above commands.

  • The CSS directory
  • Bower Components directory

Deploy process

As I stated earlier, we utilize Jenkins CI to build an artifact that we can deploy. Within the jenkins job that handles deploys, each of the above steps is executed, to create a document root that can be deployed. Projects that we build to work on Acquia or Pantheon actually have a build step to also push the updated artifact to their respective repositories at the host, to take advantage of the automation that Pantheon and Acquia provide.

Conclusion

Although this wasn't an exhaustive walk thru of how we structure and build sites using Drupal, it should give you a general idea of how we do it. If you have specific questions as to why we go through this entire build process just to setup Drupal, please leave a comment. I would love to continue the conversation.

Look out for a video on this topic in the next coming weeks. I covered a lot in this post, without going into much detail. The intent of this post was to give a 10,000 foot view of the process. The upcoming video on this process will get much closer to the Tarmac!

As an aside, one caveat that we did run into with setting up default configuration in our Installation Profile was with Configuration Management UUID's. You can only sync configuration between sites that are clones. We have overcome this limitation with a workaround in our installation profile. I'll leave that topic for my next blog post in a few weeks.

Nov 14 2015
Nov 14

Shoov.io is a nifty website testing tool created by Gizra. We at ActiveLAMP were first introduced to Shoov.io at DrupalCon LA. In fact, Shoov.io is built on, you guessed it, Drupal 7, and it is an open source visual regression toolkit.

Shoov.io uses webdrivercss, graphicsmagick, and a few other libraries to compare images. Once the images are compared you can visually see the changes in the Shoov.io online app. When installing Shoov you can choose to install it directly into your project directory/repository or you can use a separate directory/repository to house all of your tests and screenshots. Initially when testing Shoov we had it contained in a separate directory but with our most recent project, we opted to install Shoov directly into our project with the hopes to have it run on a commit or pull request basis using Travis CI and SauceLabs.


##Installation

To get Shoov installed, I will assume for this walkthrough that you want to install it directly into your project. Navigate into your project using the terminal.

Install the Yeoman Shoov generator globally (may have to sudo)

npm install -g mocha yo generator-shoov

Make sure you have Composer installed globally

curl -sS https://getcomposer.org/installer | php
sudo mv composer.phar /usr/local/bin/composer

Make sure you have Brew installed (MacOSX)

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Install GraphicsMagick (MacOSX)

brew install graphicsmagick

Next you will need to install dependencies if you don't have them already.

npm install -g yo
npm install -g file-type

Now we can build the test suite using the Yeoman generator*

yo shoov --base-url=http://activelamp.com

*running this command may give you more dependencies you need to install first.

This generator will scaffold all of the directories that you need to start writing tests for your project. It will also give you some that you may not need at the moment, such as behat. You will find the example test, named test.js, within the test/visual-monitor directory. We like to split our tests into multiple files, so we might rename our test to homepage.js. Here is what the homepage.js file looks like when you first open it.

'use strict'

var shoovWebdrivercss = require('shoov-webdrivercss')






var capsConfig = {
  chrome: {
    browser: 'Chrome',
    browser_version: '42.0',
    os: 'OS X',
    os_version: 'Yosemite',
    resolution: '1024x768',
  },
  ie11: {
    browser: 'IE',
    browser_version: '11.0',
    os: 'Windows',
    os_version: '7',
    resolution: '1024x768',
  },
  iphone5: {
    browser: 'Chrome',
    browser_version: '42.0',
    os: 'OS X',
    os_version: 'Yosemite',
    chromeOptions: {
      mobileEmulation: {
        deviceName: 'Apple iPhone 5',
      },
    },
  },
}

var selectedCaps = process.env.SELECTED_CAPS || undefined
var caps = selectedCaps ? capsConfig[selectedCaps] : undefined

var providerPrefix = process.env.PROVIDER_PREFIX
  ? process.env.PROVIDER_PREFIX + '-'
  : ''
var testName = selectedCaps
  ? providerPrefix + selectedCaps
  : providerPrefix + 'default'

var baseUrl = process.env.BASE_URL
  ? process.env.BASE_URL
  : 'http://activelamp.com'

var resultsCallback = process.env.DEBUG
  ? console.log
  : shoovWebdrivercss.processResults

describe('Visual monitor testing', function() {
  this.timeout(99999999)
  var client = {}

  before(function(done) {
    client = shoovWebdrivercss.before(done, caps)
  })

  after(function(done) {
    shoovWebdrivercss.after(done)
  })

  it('should show the home page', function(done) {
    client
      .url(baseUrl)
      .webdrivercss(
        testName + '.homepage',
        {
          name: '1',
          exclude: [],
          remove: [],
          hide: [],
          screenWidth: selectedCaps == 'chrome' ? [640, 960, 1200] : undefined,
        },
        resultsCallback
      )
      .call(done)
  })
})

##Modifications

We prefer not to repeat configuration in our projects, so we move the configuration setup to a file outside of the test folder and require it. We make this file by copying the config out of the file above (and removing it there), then adding module.exports for each of the variables. Our config file looks like this:

var shoovWebdrivercss = require('shoov-webdrivercss')






var capsConfig = {
  chrome: {
    browser: 'Chrome',
    browser_version: '42.0',
    os: 'OS X',
    os_version: 'Yosemite',
    resolution: '1024x768',
  },
  ie11: {
    browser: 'IE',
    browser_version: '11.0',
    os: 'Windows',
    os_version: '7',
    resolution: '1024x768',
  },
  iphone5: {
    browser: 'Chrome',
    browser_version: '42.0',
    os: 'OS X',
    os_version: 'Yosemite',
    chromeOptions: {
      mobileEmulation: {
        deviceName: 'Apple iPhone 5',
      },
    },
  },
}

var selectedCaps = process.env.SELECTED_CAPS || undefined
var caps = selectedCaps ? capsConfig[selectedCaps] : undefined

var providerPrefix = process.env.PROVIDER_PREFIX
  ? process.env.PROVIDER_PREFIX + '-'
  : ''
var testName = selectedCaps
  ? providerPrefix + selectedCaps
  : providerPrefix + 'default'

var baseUrl = process.env.BASE_URL
  ? process.env.BASE_URL
  : 'http://activelamp.com'

var resultsCallback = process.env.DEBUG
  ? console.log
  : shoovWebdrivercss.processResults

module.exports = {
  caps: caps,
  selectedCaps: selectedCaps,
  testName: testName,
  baseUrl: baseUrl,
  resultsCallback: resultsCallback,
}

Once we have this setup, we need to require it into our test and rewrite the variables from our test to make it work with the new configuration file. That file now looks like this.

'use strict'

var shoovWebdrivercss = require('shoov-webdrivercss')
var config = require('../configuration.js')

describe('Visual monitor testing', function() {
  this.timeout(99999999)
  var client = {}

  before(function(done) {
    client = shoovWebdrivercss.before(done, config.caps)
  })

  after(function(done) {
    shoovWebdrivercss.after(done)
  })

  it('should show the home page', function(done) {
    client
      .url(config.baseUrl)
      .webdrivercss(
        config.testName + '.homepage',
        {
          name: '1',
          exclude: [],
          remove: [],
          hide: [],
          screenWidth:
            config.selectedCaps == 'chrome' ? [640, 960, 1200] : undefined,
        },
        config.resultsCallback
      )
      .call(done)
  })
})

##Running the test

Now we can run our test. For initial testing, if you don't have a BrowserStack or SauceLabs account, you can test using PhantomJS.

Note: You must have a repository or the test will fail.

In another terminal window run:

phantomjs --webdriver=4444

Return to the original terminal window and run:

SELECTED_CAPS=chrome mocha

This will run the tests against the "chrome" capabilities defined in the configuration file, using the screenWidth values specified within each test.
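
The other capability sets defined in the configuration are selected the same way. For example, assuming you have a cloud testing account such as BrowserStack or SauceLabs wired up (PhantomJS can't satisfy the IE11 capabilities, and check the shoov-webdrivercss documentation for the exact credential variables it expects), the invocation would look like:

SELECTED_CAPS=ie11 PROVIDER_PREFIX=browserstack mocha

Note that PROVIDER_PREFIX only namespaces the baseline image names, as you can see from how testName is built in the configuration above.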

Once the test runs you should see that it has passed. Of course, our test passed because we didn't have anything to compare it to; this first run creates your initial baseline images. You will want to review these images in the webdrivercss directory and decide whether you need to fix your site, your tests, or both. You may have to remove, exclude, or hide elements in your tests. Removing an element will completely rip it out of the DOM for the test, which will shift your site around. Excluding will draw a black box over the content you don't want to show up; this is great for areas where you want to keep a consistent layout and the item is a fixed size. Hiding an element will hide it from view; it works similarly to remove, but behaves better with child elements that sit outside of the parent. Once you review the baseline images, you may want to take the time to commit and push the new images to GitHub (this commit will be the one that appears in the interface later).

##Comparing the Regressions

Once you modify your test or your site, you can test it against the existing baseline. Now that you probably have a regression, you can go to the Shoov interface. From within the interface, select Visual Regression. The commit from your project will appear in a list; click the commit to view the regressions, take action on any issues that exist, or save your new baseline. Only images with a regression will show up in the interface, and only tests with regressions will show up in the list.

##What's Next

You can view the standalone GitHub repository here.

This is just the tip of the iceberg for us with Visual Regression testing. We hope to share more about our testing process and how we are using Shoov for our projects. Don't forget to share or comment if you like this post.

Oct 17 2015
Oct 17

A little over a year ago the ActiveLAMP website underwent a major change -- we made the huge decision of moving away from using Drupal to manage its content in favor of building it as a static HTML site using Jekyll, hosted on Amazon S3. Not only did this dramatically simplify our development stack, it also trimmed our server requirements down to the bare minimum. Now we are just hosting everything on a file storage server like it's 1993.

A few months ago we identified the need to restructure our URL scheme as part of an ongoing SEO campaign. As easy as that sounds, it necessitates implementing 301 redirects from the old URL scheme to the newer, more SEO-friendly one.

I'm going to detail how I managed to (1) implement these redirects quite easily using an nginx service acting as a proxy, and (2) achieve parity between our local and production environments while keeping everything lightweight with the help of Docker.

Nginx vs Amazon S3 Redirects

S3 is a file storage service offered by AWS that not only allows you to store files but also allows you to host static websites in conjunction with Route 53. Although S3 gives you the ability to specify redirects, you'll need to use S3-specific configuration and routines. This alone wouldn't be ideal because not only would it tie us to S3 by the hips, but it is not a methodology that we could apply to any other environment (i.e. testing and dev environments on our local network and machines). For these reasons, I opted to use nginx as a very thin reverse proxy to accomplish the job.

Configuring Nginx

Rather than compiling the list of redirects manually, I wrote a tiny Jekyll plugin that can do it faster and more reliably. The plugin allows me to specify certain things within the main Jekyll configuration file and it will generate the proxy.conf file for me:



nginx:
    proxy_host: <your-s3-website-endpoint>
    proxy_port: 80
    from_format: "/:year/:month/:day/:title/"

    redirects:
        - { from: "^/splash(.*)", to: "/$1", type: redirect }

With this in place, I am able to generate the proxy.conf by simply issuing this command:

> jekyll nginx_config > proxy.conf

This command will produce a proxy.conf file which will look like this:




rewrite ^/2008/09/21/drupalcampla\-revision\-control\-presentation/?(\?.*)?$ /blog/drupal/drupalcampla-revision-control-presentation$1 permanent;




rewrite ^/blog/development/aysnchronous\-php\-with\-message\-queues/?(\?.*)?$ /blog/development/asynchronous-php-with-message-queues/$1 permanent;


location / {
	proxy_set_header Host <your-s3-website-endpoint>;
	proxy_set_header X-Real-IP $remote_addr;
	proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
	proxy_pass http://<your-s3-website-endpoint>:80;
}

You probably noticed that this is not a complete nginx configuration. However, all that needs to be done is to define a server directive which includes this file:



server {
    listen 80;
    include /etc/nginx/proxy.conf;
}

Here we have an nginx proxy listening on port 80 that knows how to redirect old URLs to the new ones and pass on any request to the S3 bucket on AWS.
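
Before pointing DNS at the proxy, a quick sanity check against a locally running instance (the hostname and path below are just examples) confirms that an old-style URL now answers with a 301 Moved Permanently and a Location header pointing at its new path:

> curl -I http://localhost/splash/some-old-page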

From this point, I was able to change the DNS records on our domain to point to the nginx proxy service instead of pointing directly at S3.

Check out the documentation for more ways of specifying redirects and more advanced usage.

Docker

Spinning up the proxy locally is a breeze with the help of Docker. Doing this with Vagrant and a provisioner would require a lot of boilerplate code. With Docker, everything (with the exception of the automatically generated config file) is under 10 lines!

The Dockerfile

Here I am using the official nginx image straight from DockerHub but added some minor modifications:



FROM nginx

# Remove the default server configuration that ships with the image.
RUN rm /etc/nginx/conf.d/default.conf

# Add our server directive and the generated redirect rules.
ADD server.conf /etc/nginx/conf.d/
ADD proxy.conf /etc/nginx/

The build/nginx directory will contain everything the nginx proxy will need: the server.conf that you saw from the previous section, and the proxy.conf file which was generated by the jekyll nginx_config command.

Automating it with Grunt

Since we are using generator-jekyllrb, a Yeoman generator for Jekyll sites which uses Grunt to run a gamut of various tasks, I just had to write a grunt proxy task which does all the needed work when invoked:



...

grunt.initConfig({
    ...
    local_ip: process.env.LOCAL_IP,
    shell: {
        nginx_config: {
            command: 'jekyll nginx_config --proxy_host=192.168.0.3 --proxy_port=8080 ' +
                '--config=_config.yml,_config.build.yml --source=app > build/nginx/proxy.conf'
        },
        docker_build: {
            command: 'docker build -t jekyll-proxy build/nginx'
        },
        docker_run: {
            command: 'docker run -d -p 80:80 jekyll-proxy'
        }
    },
    ...
});

...

grunt.registerTask('proxy', [
      'shell:nginx_config',
      'shell:docker_build',
      'shell:docker_run'
]);

This requires the grunt-shell plugin.

With this in place, running grunt proxy will prepare the configuration, build the image, and run the proxy on http://192.168.99.100 where 192.168.99.100 is the address to the Docker host VM on my machine.

Note that this is a very simplified version of the Grunt task config we actually use; it just serves to illustrate the short list of commands required to get the proxy configured and running.

I have set up a GitHub repository that replicates this set-up, plus the actual Grunt task configuration we use, which adds more logic around things like automatically rebuilding the Docker image, cleaning up stale Docker processes, configuring different build parameters for production, etc. You can find it here: bezhermoso/jekyll-nginx-proxy-with-docker-demo.

Oct 10 2015
Oct 10

Running headless Drupal with a separate JavaScript framework on the front-end can provide amazing user experiences and easy theming. However, working with content editors under this separation can prove to be tricky.

Problem

User story: As part of the content team, I need to be able to see a preview of the article on a separate front-end (Ember) application without ever saving the node.

As I started reading the user story I was tasked with, the wheels started turning as to how I would complete it. I had to get information that wasn't saved anywhere to an Ember front-end application with no POST abilities. I love challenges that take a lot of thought before jumping in. I talked with some of the other developers here and came up with a pretty nifty solution to the problem -- which, come to find out, was just a mis-communication on the story... and when I was done, the client was still excited at the possibilities it could open up. Keep in mind this was still in its early stages and there are still a lot of holes to fill.

Solution

The first thing I did was add a Preview link and attach a javascript file to the node (article type) edit page via a simple hook_form_FORM_ID_alter().



function mymodule_form_article_node_form_alter(&$form, &$form_state, $form_id) {
	$form['#attached']['js'][] = drupal_get_path('module', 'mymodule') . '/js/preview.js';
	// Quick-and-dirty preview link; the JavaScript below targets #preview-new.
	$form['actions']['preview_new'] = [
		'#weight' => '40',
		'#markup' => '<a id="preview-new" href="#">Preview on front-end</a>',
	];
}

Pretty simple. Since it was just the beginning stages I just threw it in an anchor tag to get the link there. So it wasn't pretty. Now to start gathering the node's data since it won't be saved anywhere.

Next step: Gather node info in js

So obviously the next step was to gather up the node's information from the node edit screen to prepare for shipping to the front-end application. Most time spent here was just trying to mimic the api that the Ember front-end was already expecting.


(function ($, Drupal) {
  Drupal.behaviors.openNewSite = {
    attach : function(context, settings) {
      viewInNewSite.init();
    }
  }
})(jQuery, Drupal);

var viewInNewSite = (function($) {

  
  // Gathers the media values from the node edit form (body elided).
  var getMedia = function() {...};

  
  // Builds the message object, mimicking the API format the Ember app expects (body elided).
  var getMessage = function() {...};

 
  var attachPreviewClick = function(link) {
    var $link = $(link);
    $link.on('click', function() {
      var newWindow = window.open('http://the-front-end.com/articles/preview/preview-article');
      var message = getMessage();
      
      // Give the new window a moment to finish loading before posting the message.
      setTimeout(function() {
        newWindow.postMessage(JSON.stringify(message), 'http://the-front-end.com');
      }, 1000);
    });
  };

  
  var init = function() {
    attachPreviewClick('#preview-new');
  };

  return {
    init: init
  }
})(jQuery);

Using window.open() to open what will be the route in the Ember application returns the newWindow for us to post a message to. (I used a setTimeout() here because the opening application took a while to get started and the message would get missed... this held me up for a while since I knew the message should be getting sent.) Then we use postMessage() on newWindow to ship off a message (our JSON object) to the opened window, even though it is on another domain. Insert security concerns here... but now we're ready to set up the front-end Ember application route.

To Ember!

The next step was to set up the Ember front-end application to listen for the message from the original window. Set up the basic route:


this.resource('articles', function() {
  this.route('preview', { path: '/preview/article-preview' })
  this.route('article', { path: '/:article_id' })
})

The application that I was previewing articles into already had a way to view articles by id as you see in the above code. So I didn't want to have to duplicate anything... I wanted to use the same template and model for articles that were already developed. Since that was taken care of for me, it was just time to create a model for this page and make sure that I use the correct template. So start by creating the ArticlesPreviewRoute:

App.ArticlesPreviewRoute = (function() {

  /**
   * Adds an event listener to the window for incoming messages.
   *
   * @return {Promise}
   */
  var getArticle = function() {
    return new Promise(function(resolve, reject) {
      window.addEventListener('message', function(event) {
        if (event.origin.indexOf('the-back-end.com') !== -1) {
          var data = JSON.parse(event.data);
          resolve(data.articles[0]);
        }
      }, false);
    });
  };

  return Ember.Route.extend({
    renderTemplate: function() {
      this.render('articles.article');
    },
    controllerName: 'articles.article',

    setupController: function(controller, model) {
      controller.set('model', model);
    },
    model: function (params) {
      return getArticle();
    }
  });
})();

The getArticle() function above returns a new Promise and adds an event listener that verifies that the message is coming from the correct origin. Clicking the link from Drupal now opens a new tab and loads up the article. There are still concerns to resolve, such as security measures and what happens when a user visits the path directly.

To cover the latter concern, a second promise either resolves with the data or rejects if a set amount of time passes without a message arriving from the origin window.

App.ArticlesPreviewRoute = (function() {

  var getArticle = function() {
    return new Promise(function(resolve, reject) {...});
  };

  // Rejects if the promise does not resolve within the given number of seconds.
  var previewWait = function(seconds, promise) {
    return new Promise(function(resolve, reject) {
      setTimeout(function() {
        reject({'Bad': 'No data Found'});
      }, seconds * 1000);
      promise.then(function(data) {
        resolve(data);
      }).catch(reject);
    });
  };

  return Ember.Route.extend({
    ...
    model: function (params) {
      var self = this;
      var article = previewWait(10, getArticle()).catch(function() {
        self.transitionTo('not-found');
      });
      return article;
    }
  });
})();

There you have it! A way to preview an article from a Drupal backend on an Ember front-end application without ever saving the article. A similar approach could be used with any of your favorite JavaScript frameworks. Plus, this could be taken even further into an "almost live" updating front-end that constantly checks the state of the fields on the Drupal backend. There have been thoughts of turning this into a Drupal module, with some extra bells and whistles for configuring the way the JSON object is structured to fit any front-end framework... Is there a need for this? Let us know in the comments below or tweet us! Now, go watch the video!

Sep 23 2015
Sep 23

This post is part 3 in the series ["Hashing out a docker workflow"]({% post_url 2015-06-04-hashing-out-docker-workflow %}). For background, checkout my previous posts.

Now that I've laid the groundwork for the approach that I want to take with local environment development with Docker, it's time to explore how to make the local environment "workable". In this post we will build on top of what we did in my last post, [Docker and Vagrant]({% post_url 2015-07-19-docker-with-vagrant %}), and create a working local copy that automatically updates the code inside the container running Drupal.

Requirements of a local dev environment

Before we get started, it is always a good idea to define what we expect to get out of our local development environment and define some requirements. You can define these requirements however you like, but since ActiveLAMP is an agile shop, I'll define our requirements as user stories.

User Stories

As a developer, I want my local development environment setup to be easy and automatic, so that I don't have to spend the entire day following a list of instructions. The fewer the commands, the better.

As a developer, my local development environment should run the exact same OS configuration as the stage and prod environments, so that we don't run into "works on my machine" scenarios.

As a developer, I want the ability to log into the local dev server / container, so that I can debug things if necessary.

As a developer, I want to work on files local to my host filesystem, so that the IDE I am working in is as fast as possible.

As a developer, I want the files that I change on my localhost to automatically sync to the guest filesystem that is running my development environment, so that I do not have to manually push or pull files to the local server.

Now that we know what done looks like, let's start fulfilling these user stories.

Things we get for free with Vagrant

We have all worked on projects that have a README file with a long list of steps just to setup a working local copy. To fulfill the first user story, we need to encapsulate all steps, as much as possible, into one command:

$ vagrant up

We got a good start on our one command setup in [my last blog post]({% post_url 2015-07-19-docker-with-vagrant %}). If you haven't read that post yet, go check it out now. We are going to be building on that in this post. My last post essentially resolves the first three stories in our user story list. This is the essence of using Vagrant, to aid in setting up virtual environments with very little effort, and dispose them when no longer needed with vagrant up and vagrant destroy, respectively.

Since we will be defining Docker images and/or using existing docker containers from DockerHub, user story #2 is fulfilled as well.

For user story #3, it's not as straightforward to log into your Docker host. Typically with Vagrant you would type vagrant ssh to get into the virtual machine, but since our host machine's Vagrantfile is in a subdirectory called /host, you have to change into that directory first.

$ cd host
$ vagrant ssh

Another way you can do this is by using the vagrant global-status command. You can execute that command from anywhere and it will provide a list of all known virtual machines with a short hash in the first column. To ssh into any of these machines just type:

$ vagrant ssh <short-hash>

Replace <short-hash> with the actual hash of the machine.

Connecting into a container

Most containers run a single process and may not have an SSH daemon running. You can use the docker attach command to connect to any running container, but beware: if you didn't start the container with STDIN and STDOUT attached, you won't get very far.

Another option you have for connecting is using docker exec to start an interactive process inside the container. For example, to connect to the drupal-container that we created in my last post, you can start an interactive shell using the following command:

$ sudo docker exec -t -i drupal-container /bin/bash

This will return an interactive shell on the drupal-container that you will be able to poke around on. Once you disconnect from that shell, the process will end inside the container.

Getting files from host to app container

Our next two user stories have to do with working on files native to the localhost. When developing our application, we don't want to bake the source code into a docker image. Code is always changing and we don't want to rebuild the image every time we make a change during the development process. For production, we do want to bake the source code into the image, to achieve the immutable server pattern. However in development, we need a way to share files between our host development machine and the container running the code.

We've probably tried every approach available to us when it comes to working on shared files with Vagrant. VirtualBox shared folders are just way too slow. NFS shared folders were a little bit faster, but still really slow. We've used sshfs to connect the remote filesystem directly to the localhost, which created a huge performance increase in terms of how the app responded, but was a pain in the neck in terms of how we used VCS, and it caused performance issues with the IDE. PHPStorm had to index files over a network connection, albeit a local network connection, but still noticeably slower when working on large codebases like Drupal.

The solution that we use to date is rsync, specifically vagrant-gatling-rsync. You can check out the vagrant-gatling-rsync plugin on GitHub, or just install it by typing:

$ vagrant plugin install vagrant-gatling-rsync

Syncing files from host to container

To achieve getting files from our localhost to the container we must first get our working files to the docker host. Using the host Vagrantfile that we built in my last blog post, this can be achieved by adding one line:

config.vm.synced_folder '../drupal/profiles/myprofile', '/srv/myprofile', type: 'rsync'

Your Vagrantfile within the /host directory should now look like this:






Vagrant.configure(2) do |config|
config.vm.box = "ubuntu/trusty64"
config.vm.hostname = "docker-host"
config.vm.provision "docker"
config.vm.network :forwarded_port, guest: 80, host: 4567
config.vm.synced_folder '../drupal/profiles/myprofile', '/srv/myprofile', type: 'rsync'
end

We are syncing a Drupal profile from within the drupal directory off of the project root to the /srv/myprofile directory on the Docker host.

Now it's time to add an argument to run when docker run is executed by Vagrant. To do this we can specify the create_args parameter in the container Vagrant file. Add the following line into the container Vagrantfile:

docker.create_args = ['--volume="/srv/myprofile:/var/www/html/profiles/myprofile"']

This file should now look like:






Vagrant.configure(2) do |config|
config.vm.provider "docker" do |docker|
docker.vagrant_vagrantfile = "host/Vagrantfile"
docker.image = "drupal"
docker.create_args = ['--volume="/srv/myprofile:/var/www/html/profiles/myprofile"']
docker.ports = ['80:80']
docker.name = 'drupal-container'
end
end

This parameter that we are passing maps the directory we are rsyncing to on the docker host to the profiles directory within the Drupal installation that was included in the Drupal docker image from DockerHub.

Create the installation profile

This blog post doesn't intend to go into how to create a Drupal install profile, but if you aren't using profiles for building Drupal sites, you should definitely have a look. If you have questions about why using Drupal profiles is a good idea, leave a comment.

Let's create our simple profile. Drupal requires two files to create a profile. From the project root, type the following:

$ mkdir -p drupal/profiles/myprofile
$ touch drupal/profiles/myprofile/{myprofile.info,myprofile.profile}

Now edit each file that you just created with the minimum information that you need.

myprofile.info

name = Custom Profile
description = My custom profile
core = 7.x

myprofile.profile



<?php

/**
 * Implements hook_install_tasks().
 */
function myprofile_install_tasks() {

}

Start everything up

We now have everything we need in place to just type vagrant up and also have a working copy. Go to the project root and run:

$ vagrant up

This will build your docker host as well as create your drupal container. As I mentioned in a previous post, starting up the container sometimes requires me to run vagrant up a second time. I'm still not sure what's going on there.

After everything is up and running, you will want to run the rsync-auto command for the docker host, so that any changes you make locally traverses down to the docker host and then to the container. The easiest way to do this is:

$ cd host
$ vagrant gatling-rsync-auto

Now visit the URL to your running container at http://localhost:4567 and you should see the new profile that you've added.

Custom Install Profile

Conclusion

We covered a lot of ground in this blog post. We were able to accomplish all of the stated requirements above with just a little tweaking of a couple Vagrantfiles. We now have files that are shared from the host machine all the way down to the container that our app is run on, utilizing features built into Vagrant and Docker. Any files we change in our installation profile on our host immediately syncs to the drupal-container on the docker host.

At ActiveLAMP, we use a much more robust approach to build out installation profiles, utilizing Drush Make, which is out of scope for this post. This blog post simply lays out the general idea of how to accomplish getting a working copy of your code downstream using Vagrant and Docker.

In my next post, I'll continue to build on what I did here today, and introduce automation to automatically bake a Docker image with our source code baked in, utilizing Jenkins. This will allow us to release any number of containers easily to scale our app out as needed.

Aug 14 2015
Aug 14

I recently had time to install and take a look at Drupal 8. I am going to share my first take on Drupal 8 and some of the hang-ups that I came across. I read a few other blog posts that mentioned not to rely too heavily on one source for D8 documentation; with the rapidly changing pace of D8, information becomes outdated rather quickly.

##Getting Started

My first instinct was to run over to drupal.org, grab a copy of the code base, and set it up on MAMP. Then I saw an advertisement for running Drupal 8 on Acquia Cloud Free and decided that would probably be a great starting point. Running through the setup for Acquia took only about eight minutes. This was great; having used Acquia and Pantheon before, this was an easy way to get started.

Next, I decided to pull down the code and get it running locally so that I could start testing out adding my own theme. Well... what took eight minutes on Acquia took considerably longer for me.

##Troubleshooting and Upgrading

The first roadblock that I ran into was that my MAMP was not running the required version of PHP (5.5.9 or higher), so I decided to upgrade to MAMP 3 to make life a little bit nicer. After setting up MAMP from scratch and making sure the other projects that I had installed with MAMP still worked correctly, I was able to continue with the site install.

The second roadblock that I came across was not having Drush 7+ installed. It doesn't come out and say in the command line that you need to upgrade Drush (it does in the docs on drupal.org and if you search the error it is one of the first results). It just spits out this error:

Fatal error: Class 'Drupal\Core\Session\AccountInterface' not found in .../docroot/core/includes/bootstrap.inc on line 64
Drush command terminated abnormally due to an unrecoverable error.

The next roadblock was that I was trying to clear cache with Drush and didn't bother to read the documentation on drupal.org that outlined that drush cc all no longer exists and is replaced by drush cr. Drush now uses the cache-rebuild command. However, this is not exactly clear given that if you run drush cc all you get the same exact error as the one above.
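
In short, the cache commands changed like this:

drush cc all   # Drupal 7 and older versions of Drush
drush cr       # Drupal 8 with Drush 7+ (cache-rebuild)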

Finally, everything was set up and working properly. I decided to look around for a good guide to jumpstart the theme setup and landed on this Lullabot article. For the most part, the article was straightforward. Some parts didn't work and I skipped them; others didn't work and I tried to figure out why. Here is the list of things that I couldn't figure out:

  • Drush wouldn't change the default theme (complained about bootstrap level even though I was in my docroot)
  • Stylesheets-remove didn't work inside of my theme.info.yml file
  • Specifying my CSS in my theme.libraries.yml file seemed to be problematic but got it working after some time. (probably user error)

##Conclusion

Drupal 8 looks clean and feels sleek and slimmed down. I'm really excited about the direction that Drupal is headed. Overall, the interface within Drupal hasn't changed too drastically (maybe some naming conventions, e.g. "Extend" instead of "Modules"). It looks like one of our current sites running Panopoly, which has a great look and feel compared to out-of-the-box D7. I really like the simplicity of separating out yml files for specific needs and setting up the theme.

I look forward to writing more blog posts about Drupal 8 and maybe some tutorials and insights. Let us know your thoughts and ideas in the comments section below.

Jul 19 2015
Jul 19

This post is part 2 in a series of Docker posts hashing out a new docker workflow for our team. To gain background of what I want to accomplish with docker, checkout my previous post [hashing out a docker workflow]({% post_url 2015-06-04-hashing-out-docker-workflow %}).

In this post, we will venture into setting up docker locally, in the same repeatable way from developer to developer, by using Vagrant. By the end of this post, we'll have Drupal running in a container, using Docker. This post is focused on hashing out a Docker workflow with Vagrant, less about Drupal itself. I want to give a shout out to the folks that maintain the Drupal Docker image in the Docker Registry. It's definitely worth checking out, and using that as a base to build FROM for your custom images.

Running Docker Locally

There are several ways to go about setting up Docker locally. I don't intend to walk you through how to install Docker, you can find step-by-step installation instructions based on your OS in the Docker documentation. However, I am leaning toward taking an unconventional approach to installing Docker locally, not mentioned in the Docker documentation. Let me tell you why.

Running a Docker host

For background, we are specifically an all Mac OS X shop at ActiveLAMP, so I'll be speaking from this context.

Unfortunately you can't run Docker natively on OS X, Docker needs to run in a Virtual Machine with an operating system such as Linux. If you read the Docker OS X installation docs, you see there are two options for running Docker on Mac OS X, Boot2Docker or Kitematic.

Running either of these two packages looks appealing to get Docker up locally very quickly (and you should use one of these packages if you're just trying out Docker), but thinking big picture and how we plan to use Docker in production, it seems that we should take a different approach locally. Let me tell you why I think you shouldn't use Boot2Docker or Kitematic locally, but first a rabbit trail.

Thinking ahead (the rabbit trail)

My opinion may change after gaining more real world experience with Docker in production, but the mindset that I'm coming from is that in production our Docker hosts will be managed via Chef.

Our team has extensive experience using Chef to manage infrastructure at scale. It doesn't seem quite right to completely abandon Chef yet, since Docker still needs a machine to run the Docker host. Chef is great for machine level provisioning.

My thought is that we would use Chef to manage the various Docker hosts that we deploy containers to and use the Dockerfile with Docker Compose to manage the actual app container configuration. Chef would be used in a much more limited capacity, only managing configuration on a system level not an application level. One thing to mention is that we have yet to dive into the Docker specific hosts such as AWS ECS, dotCloud, or Tutum. If we end up adopting a service like one of these, we may end up dropping Chef all together, but we're not ready to let go of those reigns yet.

One step at a time for us. The initial goal is to get application infrastructure into immutable containers managed by Docker. Not ready to make a decision on what is managing Docker or where we are hosting Docker, that comes next.

Manage your own Docker Host

The main reason I was turned off from using Boot2Docker or Kitematic is that it creates a Virtual Machine in Virtualbox or VMWare from a default box / image that you can't easily manage with configuration management. I want control of the host machine that Docker is run on, locally and in production. This is where Chef comes into play in conjunction with Vagrant.

Local Docker Host in Vagrant

As I mentioned in my last post, we are no stranger to Vagrant. Vagrant is great for managing virtual machines. If Boot2Docker or Kitematic are going to spin up a virtual machine behind the scenes in order to use Docker, then why not spin up a virtual machine with Vagrant? This way I can manage the configuration with a provisioner, such as Chef. This is the reason I've decided to go down the Vagrant with Docker route, instead of Boot2Docker or Kitematic.

The latest version of Vagrant ships with a Docker provider built-in, so that you can manage Docker containers via the Vagrantfile. The Vagrant Docker integration was a turn off to me initially because it didn't seem it was very Docker-esque. It seemed Vagrant was just abstracting established Docker workflows (specifically Docker Compose), but in a Vagrant syntax. However within the container Vagrantfile, I saw you can also build images from a Dockerfile, and launch those images into a container. It didn't feel so distant from Docker any more.

It seems that there might be a little overlap in areas between what Vagrant and Docker does, but at the end of the day it's a matter of figuring out the right combination of using the tools together. The boundary being that Vagrant should be used for "orchestration" and Docker for application infrastructure.

When all is setup we will have two Vagrantfiles to manage, one to define containers and one to define the host machine.

Setting up the Docker Host with Vagrant

The first thing to do is to define the Vagrantfile for your host machine. We will be referencing this Vagrantfile from the container Vagrantfile. The easiest way to do this is to just type the following in an empty directory (your project root):

$ vagrant init ubuntu/trusty64

You can configure that Vagrantfile however you like. Typically you would also use a tool like Chef solo, Puppet, or Ansible to provision the machine as well. For now, just to get Docker installed on the box we'll add to the Vagrantfile a provision statement. We will also give the Docker host a hostname and a port mapping too, since we know we'll be creating a Drupal container that should EXPOSE port 80. Open up your Vagrantfile and add the following:

config.vm.hostname = "docker-host"
config.vm.provision "docker"
config.vm.network :forwarded_port, guest: 80, host: 4567

This ensures that Docker is installed on the host when you run vagrant up, as well as maps port 4567 on your local machine to port 80 on the Docker host (guest machine). Your Vagrantfile should look something like this (with all the comments removed):






Vagrant.configure(2) do |config|
config.vm.box = "ubuntu/trusty64"
config.vm.hostname = "docker-host"
config.vm.provision "docker"
config.vm.network :forwarded_port, guest: 80, host: 4567
end

Note: This post is not intended to walk through the fundamentals of Vagrant, for further resources on how to configure the Vagrantfile check out the docs.

As I mentioned earlier, we are going to end up with two Vagrantfiles in our setup. I also mentioned the container Vagrantfile will reference the host Vagrantfile. This means the container Vagrantfile is the configuration file we want used when vagrant up is run. We need to move the host Vagrantfile to another directory within the project, out of the project root directory. Create a host directory and move the file there:

$ mkdir host
$ mv Vagrantfile !$

Bonus Tip: The !$ destination that I used when moving the file is a shell shortcut to use the last argument from the previous command.

Define the containers

Now that we have the host Vagrantfile defined, lets create the container Vagrantfile. Create a Vagrantfile in the project root directory with the following contents:






Vagrant.configure(2) do |config|
config.vm.provider "docker" do |docker|
docker.vagrant_vagrantfile = "host/Vagrantfile"
docker.image = "drupal"
docker.ports = ['80:80']
docker.name = 'drupal-container'
end
end

To summarize the configuration above, we are using the Vagrant Docker provider, we have specified the path to the Docker host Vagrant configuration that we setup earlier, and we defined a container using the Drupal image from the Docker Registry along with exposing some ports on the Docker host.

Start containers on vagrant up

Now it's time to start up the container. It should be as easy as going to your project root directory and typing vagrant up. It's almost that easy. For some reason after running vagrant up I get the following error:

A Docker command executed by Vagrant didn't complete successfully!
The command run along with the output from the command is shown
below.

Command: "docker" "ps" "-a" "-q" "--no-trunc"

Stderr: Get http:///var/run/docker.sock/v1.19/containers/json?all=1: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?

Stdout:

I've gotten around this by just running vagrant up again. If anyone has ideas about what is causing that error, please feel free to leave a comment.

Drupal in a Docker Container

You should now be able to navigate to http://localhost:4567 to see the installation screen of Drupal. Go ahead and install Drupal using an sqlite database (we didn't setup a mysql container) to see that everything is working. Pretty cool stuff!

Development Environment

There are other things I want to accomplish with our local Vagrant environment to make it easy to develop on, such as setting up synced folders and using the vagrant rsync-auto tool. I also want to customize our Drupal builds with Drush Make, to make developing on Drupal much more efficient when adding modules, updating core, etc. I'll leave those details for another post, as this one has become very long.

Conclusion

As you can see, you don't have to use Boot2Docker or Kitematic to run Docker locally. I would advise that if you just want to figure out how Docker works, then you should use one of these packages. Thinking longer term, your local Docker Host should be managed the same way your production Docker Host(s) are managed. Using Vagrant, instead of Boot2Docker or Kitematic, allows me to manage my local Docker Host similar to how I would manage production Docker Hosts using tools such as Chef, Puppet, or Ansible.

In my next post, I'll build on what we did here today, and get our Vagrant environment into a working local development environment.

Jul 03 2015
Jul 03

The Picture module is a backport of the Drupal 8 Responsive Image module. It allows you to select different images to be loaded for different devices and resolutions using media queries and Drupal's image styles. You can also use the Image Replace module to specify a different image to load at certain breakpoints.

The Picture module gives you the correct formatting for an HTML5 "picture" element and includes a polyfill library for backwards compatibility with browsers that don't support it natively. Unfortunately, at the time of writing that list includes IE, Edge, Safari, iOS Safari, and Opera Mini. For more information, see Can I Use.

The Picture module works great with Views, supports WYSIWYG editing (e.g. CKEditor), and can be implemented directly in code if necessary.

##Installation

Picture has two important dependencies:

  1. Breakpoints module
  2. Ctools module

You can install the Picture module using Drush.

You will also want the Media module and Image Replace module.
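
For reference, downloading and enabling everything with Drush would look roughly like this (the module machine names below are assumptions; double-check them on each project page):

drush dl picture breakpoints ctools media image_replace
drush en picture breakpoints ctools media image_replace -y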

##Setting Up

Once you have the modules installed and enabled you can start to configure your theme.

###1. Set up your breakpoints

You can add breakpoint settings directly in Drupal under Configuration > Media > Breakpoints. Or if you prefer you can add them to the .info file of your theme. We added ours to our theme like this.

breakpoints[xxlarge] = (min-width: 120.063em)
breakpoints[xlarge]  = (min-width: 90.063em) and (max-width: 120em)
breakpoints[large]   = (min-width: 64.063em) and (max-width: 90em)
breakpoints[medium]  = (min-width: 40.063em) and (max-width: 64em)
breakpoints[small]   = (min-width: 0em) and (max-width: 40em)

###2. Add Responsive Styles

From within the Drupal UI, you will want to go to Configuration > Media > Breakpoints.

  1. Click “Add Responsive Style”.
  2. Select which breakpoints you want to use for this specific style.
  3. Then you will choose an existing image style from the drop-down list that this style will clone.
  4. Finally you give it a base name for your new image styles. I would recommend naming it something logical and ending it with “_”.

You can now edit your image styles, you can do your normal scale crop for different sizes or you can setup image replace.

###3. Setting up Picture Mappings

Once you have your image styles, you have to create a mapping to associate them with. Go to Configuration > Media > Picture Mappings.

  1. Click Add to add a new picture mapping
  2. Select your breakpoint group.
  3. Give your mapping an Administrative title (something you can pick out of a select list)
  4. Then you will select "use image style" and select each of the corresponding image styles that you setup from the select list that appears.

Once you have that created, you can select that mapping and use it in different areas of your site. For example, you can use it in the display of a node via the Manage Display section, formatting the field as Picture. You can also use it in Views by selecting the Picture format and choosing your picture mapping.

###4. Setting up Image Replace

Click edit on your desired image style, and as an effect you can add the Replace Image setting. Replace can be combined with other styles as well, so if you need to scale and crop you can do that too.

Once you have your style set up, you need to specify a field whose image will replace the original at specific dimensions. We added a secondary image field to our content structure. We named the fields "horizontal" and "vertical", which made sense for us because we only use the vertical field at smaller widths, where the content stretches vertically. You can use whatever naming convention works best for you.

On the main image field that you added to your view or display, edit the field settings and you will find a section called Image Replace. Find the image style you set Replace Image on, and select the field whose image should replace the current one.

##Finished Product

Once that is done, you are all set. If you have any questions or comments, be sure to leave them in the comments section below.

Here is an example of how we used image replace.

May 01 2015
May 01

In the [previous blog post]({% post_url 2015-01-21-divide-and-conquer-part-1 %}) we shared how we implemented the first part of our problem in Drupal and how we decided that splitting our project into discrete parts was a good idea. I'll pick up where we left off and discuss why and how we used Symfony to build our web service instead of implementing it in Drupal.

RESTful services with Symfony

Symfony, being a web application framework, did not really provide built-in user-facing features that we were able to use immediately, but it gave us development tools and a development framework that expedited the implementation of various functionality for our web service. Even though much of the work was cut out for us, the framework took care of the most common problems and the usual plumbing that goes with building web applications. This enabled us to tackle the web service problem with a more focused approach.

Other than my familiarity and proficiency with the framework, the reason we chose Symfony over other web application frameworks is that there is already a well-established ecosystem of Symfony bundles (akin to modules in Drupal) centered around building a RESTful web service: FOSRestBundle provided us with a little framework for defining and implementing our RESTful endpoints, and does all the content-type negotiation and other REST-related plumbing for us. JMSSerializerBundle took care of the complexities of representing our objects as JSON for our clients to consume. We also wrote our own little bundle in which we use Swagger UI to provide beautiful documentation for our API. Any changes to our code-base that affect the API will automatically update the documentation, thanks to NelmioApiDocBundle, to which we contributed the support for generating Swagger-compliant API specifications.

We managed to encapsulate all the complexities behind our search engine within our API: not only do we index content sent over by Drupal, but we also index thousands of records that we pull from a partner on a daily basis. On top of that, the API also appends search results from another search API provided by one other partner should we run out of data to provide. Our consumers don't know this, and neither should Drupal -- we let it worry about content management and sending us the data, that's it.

In fact, Drupal never talks to Elasticsearch directly. It only talks to our API and authenticating itself should it need to write or delete anything. This also means we can deploy the API on another server without Drupal breaking because it can no longer talk to a firewalled search index. This way we keep everything discrete and secure.

In the end, we have a REST API with three endpoints:

  1. a secured endpoint which receives content which are then validated and indexed, which is used by Drupal,
  2. a secured endpoint which is used to delete content from the index, which is also used by Drupal, and finally;
  3. a public endpoint used for searching for content that matches the specifications provided via GET parameters, which is used in Drupal and by other consumers (see the sketch below).
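
To give a feel for that public endpoint, a search request ends up looking something like the hypothetical call below; the path and parameter names are purely illustrative, not our actual API:

curl 'https://api.example.com/content?q=some+keywords&lat=34.05&lon=-118.24&distance=25mi&page=1'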

Symfony and Drupal 8

Symfony is not just a web application framework, but is also a collection of stand-alone libraries that can be used by themselves. In fact, the next major iteration of Drupal will use Symfony components to modernize its implementations of routing & URL dispatch, request and response handling, templating, data persistence and other internals like organizing the Drupal API. This change will definitely enhance the experience of developing Drupal extensions as well as bring new paradigms to Drupal development, especially with the introduction of services and dependency-injection.

Gluing it together with AngularJS

Given that we have a functioning web service, we then used AngularJS to implement the rich search tools on our Drupal 7 site.

AngularJS is a front-end web framework which we use to create rich web applications straight in the browser, with no hard dependencies on specific back-ends. This actually helped us prototype the search tools and search functionality faster outside of Drupal 7. We made sure that everything we wrote in AngularJS is as self-contained as possible, so that we could just drop it into Drupal 7 and have it running with almost zero extra work. It was just a matter of putting our custom AngularJS directives and/or mini-applications into a panel, which in turn we put into Drupal pages. We have done this AngularJS-as-Drupal-panels approach before in other projects and it has been really effective and fun to do.

To complete the integration, it was just a matter of hooking into Drupal's internal mechanism in order to pass along authored content into our indexing API when they are approved, or deleting them when they are unpublished.

Headless Drupal comes to Drupal 8!

The popularity of front-end web frameworks has increased the demand for data to be available via APIs, as templating and other display-oriented tasks have rightfully entered the domain of client-side languages and left back-end systems. It's exciting to see that Drupal has taken the initiative and made the content it manages available through APIs out-of-the-box. This means it will be easier to build single-page applications on top of content managed in Drupal. It is something that we at ActiveLAMP are actually itching to try.

Also, now that Google has added support for crawling Javascript-driven sites for SEO, I think single-page applications will soon rise from being just "experimental" and become a real choice for content-driven websites.

Using Composer dependencies in Drupal modules

We used the Guzzle HTTP client library in one of our modules to communicate with the API in the background. We pulled the library into our Drupal installation by defining it as a project dependency via the Composer Manager module. It was as simple as putting a bare-minimum composer.json file in the root directory of one of our modules:

{
  "require" : {
    "guzzlehttp/guzzle" : "4.\*"
  }
}

...and running these Drush commands during build:

$ drush composer-rebuild
$ drush composer-manager install

The first command collects all defined dependency information in all composer.json files found in modules, and the second command finally downloads them into Drupal's library directory.

Composer is awesome. Learn more about it here.
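
To give a concrete (but purely hypothetical) picture of how a module can use that dependency, the sketch below shows the general shape of a Drupal 7 hook that pushes a just-published article to an indexing API with Guzzle; the module name, endpoint, field mapping and credential variables are all illustrative, not our production code:

<?php
/**
 * Implements hook_node_update().
 *
 * Pushes published articles to the (hypothetical) indexing API in the background.
 */
function mymodule_node_update($node) {
  if ($node->type != 'article' || $node->status != NODE_PUBLISHED) {
    return;
  }
  // Guzzle 4.x client, made available by Composer Manager.
  $client = new \GuzzleHttp\Client(array('base_url' => 'https://api.example.com'));
  $client->post('/content', array(
    'auth' => array(variable_get('mymodule_api_user', ''), variable_get('mymodule_api_key', '')),
    'json' => array(
      'id' => $node->nid,
      'title' => $node->title,
      'body' => $node->body[LANGUAGE_NONE][0]['value'],
    ),
  ));
}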

How a design decision saved us from a potentially costly mistake

One hard lesson we learned is that it's not ideal to use Elasticsearch as the primary and sole data persistence layer in our API.

During the early stages of developing our web service, we treated Elasticsearch as the sole data store by removing Doctrine2 from our Symfony application and doing away with MySQL completely from our REST API stack. However, we still employed the Repository Pattern and wrote classes to store and retrieve from Elasticsearch using the elasticsearch-php library. These classes also hide away the details of how objects are transformed into their JSON representation, and vice-versa. We used the jms-serializer library for the data transformations; it's an excellent package that takes care of the complexities behind data serialization from PHP objects to JSON or XML. (We use the same library for delivering objects through our search API, which could be a topic for a future blog post.)

This setup worked just fine, until we had to explicitly define date-time fields in our documents. Since we used UNIX timestamps for our date-time fields in the beginning, Elasticsearch mistakenly inferred them to be float fields. The explicit schema conflicted with the inferred schema and we were forced to flush out all existing documents before the update can be applied. This prompted us to use a real data store which we treat as the Single Version of Truth and relegate Elasticsearch as just an index lest we lose real data in the future, which would be a disaster.
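
For context, the explicit mapping we needed is along these lines (index, type and field names here are illustrative, and this assumes the Elasticsearch 1.x mapping API):

curl -XPUT 'http://localhost:9200/content/_mapping/article' -d '{
  "article": {
    "properties": {
      "published_at": { "type": "date" }
    }
  }
}'

An explicit date mapping like this conflicts with the float mapping Elasticsearch had already inferred from the UNIX timestamps, and an existing index cannot simply be re-mapped in place -- hence being forced to flush everything first.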

Making this change was easy and almost painless, thanks to the level of abstraction that the Repository Pattern provides. We just implemented new repository classes, with the help of Doctrine, which talk to MySQL, and dropped them in the places where we used their Elasticsearch counterparts. We then hooked into Doctrine's event system to get our data automatically indexed as it is written to and removed from the database:



<?php

use ActiveLAMP\AppBundle\Entity\Indexable;
use ActiveLAMP\AppBundle\Model\ElasticsearchRepository;
use Doctrine\Common\EventSubscriber;
use Doctrine\ORM\Event\LifecycleEventArgs;
use Doctrine\ORM\Events;
use Elasticsearch\Common\Exceptions\Missing404Exception;

class IndexEntities implements EventSubscriber
{

    protected $elastic;

    public function __construct(ElasticsearchRepository $repository)
    {
        $this->elastic = $repository;
    }

    public function getSubscribedEvents()
    {
        return array(
            Events::postPersist,
            Events::postUpdate,
            Events::preRemove,
        );
    }

    public function postPersist(LifecycleEventArgs $args)
    {
        $entity = $args->getEntity();

        if (!$entity instanceof Indexable) {
            return;
        }

        $this->elastic->save($entity);
    }

    public function postUpdate(LifecycleEventArgs $args)
    {
        $entity = $args->getEntity();

        if (!$entity instanceof Indexable) {
            return;
        }

        $this->elastic->save($entity);
    }

    public function preRemove(LifecycleEventArgs $args)
    {
        $entity = $args->getEntity();

        if (!$entity instanceof Indexable) {
            return;
        }

        try {
            $this->elastic->delete($entity);
        } catch (Missing404Exception $e) {
            // The document was never indexed or is already gone; nothing to do.
        }
    }
}

Thanks, Repository Pattern!

Overall, I really enjoyed building out the app using the tools we decided to use and I personally like how we put the many parts together. We observed some tenets behind service-oriented architectures by splitting the project into multiple discrete problems and solving each with different technologies. We handed Drupal the problems that Drupal knows best, and used more suitable solutions for the rest.

Another benefit we reaped is that developers within ActiveLAMP can focus on their own domain of expertise: our Drupal guys take care of Drupal work that non-Drupal guys like me aren't the best fit for, while I can knock out Symfony work, which is right up my alley. I think we at ActiveLAMP have seen the value of solving big problems through divide-and-conquer and of diversifying the technologies we use.

Jan 22 2015
Jan 22

It isn't just about Drupal here at ActiveLAMP -- when the right project comes along that diverges from the usual demands of content management, we get to use other cool technologies to satisfy more exotic requirements. Last year we had a project that presented us with the opportunity to broaden our arsenal beyond the Drupal toolbox. Basically, we had to build a website which handles a growing amount of vetted content coming in from the site's community and 2 external sources, and the whole catalog is available through the use of a rich search tool and also through a RESTful web service which other of our client's partners can use to search for content to display on their respective websites.

Drupal 7 -- more than just a CMS

We love Drupal and we recognize its power in managing content of varying types and complexity. We at ActiveLAMP have solved a lot of problems with it in the past, and have seen how potent it can be. We were able to map out many of the project's requirements to Drupal functionality and we grew confident that it is the right tool for the project.

We implemented the majority of the site's content-management, user-management, and access-control functionality with Drupal, from content creation and revision to display and printing, relying heavily on built-in functionality to tie things together. Did I mention that the site, its content, and its theme components are all bilingual? The wide array of i18n modules took care of that.

One huge reason we love Drupal is its thriving community, which strives to make it better and more powerful every day. We leveraged open-source modules that the community has produced over the years to satisfy requirements that Drupal does not cover out of the box.

For starters, we based our project on the Panopoly distribution of Drupal, which bundles a wide selection of modules that gave us great flexibility in structuring our pages and saved us precious time in site-building and theming. We also leveraged modules to solve more specialized problems: for example, the Workbench suite took care of the review-publish-reject workflow that was essential to maintaining the site's integrity and quality, and the ZURB Foundation starter theme served as the base for our site pages.

What vanilla Drupal and the community modules could not provide, we wrote ourselves, thanks to Drupal's uber-powerful "plug-and-play" architecture, which let us write custom modules that tell Drupal exactly what we need it to do. The amount of work that can be accomplished through the hook system is phenomenal, and it elevates Drupal from being just a content management system to a content management framework. Whatever your problem, there most probably is a Drupal module for it.
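
Just to illustrate the flavor of that hook system (a sketch only; the module and form names here are hypothetical, not our actual code), a Drupal 7 module can reach into core behavior with a few lines:

/**
 * Implements hook_form_alter().
 *
 * Hypothetical sketch: tweak the core search form without touching core code.
 */
function mymodule_form_alter(&$form, &$form_state, $form_id) {
  if ($form_id == 'search_block_form') {
    // Add placeholder text to the search box.
    $form['search_block_form']['#attributes']['placeholder'] = t('Search the catalog...');
  }
}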

Flexible indexing and searching with Elasticsearch

A large aspect of our project is that the content we handle should be searchable through a tool on the site. The search criteria demand not only full-text search, but also filtering by date range, categorizations ("taxonomies" in Drupal), and, most importantly, geo-location queries and sorting by distance (e.g., within n miles of a given location). It was readily apparent that SQL LIKE expressions or full-text search queries against MySQL's MyISAM engine just wouldn't cut it.

We needed a full-fledged full-text search engine that also supports geo-spatial operations. And surprise! -- there is a Drupal module for that (a confession: not really a surprise). The Apache Solr Search modules readily gave us the ability to index all our content straight from Drupal into Apache Solr, an open-source search platform built on top of the famous Apache Lucene engine.

Despite the comfort that the module provided, I evaluated other options which eventually led us to Elasticsearch, which we ended up using over Solr.

Elasticsearch advertises itself as:

“a powerful open source search and analytics engine that makes data easy to explore”

...and we really found this to be true. Since it is basically a wrapper around Lucene that exposes its features through a RESTful API, it is readily available to any app, no matter which language it is written in. Given the wide proliferation of REST APIs in web development, it puts a familiar face on a not-so-common technology. As long as you speak HTTP, the lingua franca of the Web, you are in business.

Writing and indexing documents into Elasticsearch is straightforward: represent your content as a JSON object and POST it to the appropriate endpoint. If you wish to retrieve a document on its own, simply issue a GET request with the unique ID that Elasticsearch assigned to it and returned during indexing. Updating it is also just a PUT request away. It's all RESTful and nice.
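
From PHP, the same round trip is just as simple. A minimal sketch with the official elasticsearch-php client (assuming a 1.x-era constructor; the index, type, and field names below are made up for illustration):

use Elasticsearch\Client;

// Point the client at a local Elasticsearch node.
$client = new Client(array('hosts' => array('localhost:9200')));

// Index (create or overwrite) a document.
$client->index(array(
    'index' => 'volsearch',
    'type'  => 'toolkit_opportunity',
    'id'    => 42,
    'body'  => array(
        'title' => 'Community food drive',
        'body'  => 'Help fight hunger in your neighborhood.',
        'location' => array(
            'coordinates' => array('lat' => 34.493311, 'lon' => -117.30288),
        ),
    ),
));

// Fetch it back by the ID it was indexed under.
$document = $client->get(array(
    'index' => 'volsearch',
    'type'  => 'toolkit_opportunity',
    'id'    => 42,
));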

Making searches is also done through API calls, too. Here is an example of a query which contains a Lucene-like text search (grouping conditions with parentheses and ANDs and ORs), a negation filter, a basic geo-location filtering, and with results sorted by distance from a given location:


POST /volsearch/toolkit_opportunity/_search HTTP/1.1
Host: localhost:9200

{
  "from":0,
  "size":10,
  "query":{
    "filtered":{
      "filter":{
        "bool":{
          "must":[
            {
              "geo_distance":{
                "distance":"100mi",
                "location.coordinates":{
                  "lat":34.493311,
                  "lon":-117.30288
                }
              }
            }
          ],
          "must_not":[
            {
              "term":{
                "partner":"Mentor Up"
              }
            }
          ]
        }
      },
      "query":{
        "query_string":{
          "fields":[
            "title",
            "body"
          ],
          "query":"hunger AND (financial OR finance)",
          "use_dis_max":true
        }
      }
    }
  },
  "sort":[
    {
      "_geo_distance":{
        "location.coordinates":[
          34.493311,
          -117.30288
        ],
        "order":"asc",
        "unit":"mi",
        "distance_type":"plane"
      }
    }
  ]
}

Queries are written in Elasticsearch's own DSL (domain-specific language), which takes the form of JSON objects. The fact that queries are represented as trees of search specifications in the form of dictionaries (or "associative arrays" in PHP parlance) makes them much easier to understand, traverse, and manipulate as needed, without the third-party query builders that Lucene's raw query syntax practically requires. It is this syntactic sugar that helped convince us to use Elasticsearch.
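
To illustrate that point (a sketch, not our production code), the same kind of filtered query can be composed as a plain PHP array and tweaked conditionally before it is sent:

// Hypothetical sketch: compose a filtered query as a nested PHP array.
$query = array(
    'from' => 0,
    'size' => 10,
    'query' => array(
        'filtered' => array(
            'query' => array(
                'query_string' => array(
                    'fields' => array('title', 'body'),
                    'query'  => 'hunger AND (financial OR finance)',
                ),
            ),
            'filter' => array('bool' => array('must' => array(), 'must_not' => array())),
        ),
    ),
);

// Because it is just an array, adding a geo filter is a simple append.
$query['query']['filtered']['filter']['bool']['must'][] = array(
    'geo_distance' => array(
        'distance' => '100mi',
        'location.coordinates' => array('lat' => 34.493311, 'lon' => -117.30288),
    ),
);

// Hand it off to the client from the earlier sketch (names are made up).
$results = $client->search(array(
    'index' => 'volsearch',
    'type'  => 'toolkit_opportunity',
    'body'  => $query,
));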

What makes Elasticsearch flexible is that it is, to some degree, schema-less, which made it quite quick for us to get started and get things done. We just hand it documents with no pre-defined schema, and it does its job of guessing the field types by inferring from the data we provide. We can introduce new text fields and filter against them on the go. If you decide to start using richer queries like geo-spatial searches and date ranges, though, you should explicitly declare fields with richer types such as dates and geo-points so Elasticsearch knows how to index the data accordingly.
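
For example (a sketch with assumed index, type, and field names; the exact mapping body varies by Elasticsearch version), declaring a geo_point and a date field up front might look like this with the PHP client:

// Hypothetical sketch: declare richer field types before indexing documents.
$client->indices()->putMapping(array(
    'index' => 'volsearch',
    'type'  => 'toolkit_opportunity',
    'body'  => array(
        'toolkit_opportunity' => array(
            'properties' => array(
                'location' => array(
                    'properties' => array(
                        'coordinates' => array('type' => 'geo_point'),
                    ),
                ),
                'published_at' => array('type' => 'date'),
            ),
        ),
    ),
));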

To be clear, Apache Solr also exposes Lucene through a web service. However, we think Elasticsearch's API design is more modern and much easier to use. Elasticsearch also provides a suite of features that lend themselves to easier scaling, and visualizing the data is really nifty with Kibana.

The Search API

Because Elasticsearch has no built-in access control, we cannot just expose it to third parties who wish to consume our data: anyone who can reach the Elasticsearch server can also write to it and delete content from it. We needed a layer that firewalls our search index away from the public. Not only that, it would also have to enforce the simplified query DSL of our own design that API consumers would use.

This is another aspect for which we looked beyond Drupal. Building web services isn't exactly within Drupal's purview, although it can be accomplished with the help of third-party modules. However, our major concern was the operational cost of involving Drupal in the web service at all: we felt that the overhead of Drupal's bootstrap process is just too much for responding to API requests. It would be akin to swatting a fruit fly with a sledgehammer. We decided instead to implement all search functionality, and the search API itself, in a separate application written with Symfony.
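
To give a sense of the shape of that layer (the class name, route, service id, and query parameters here are all hypothetical, not our actual API), a Symfony controller sitting between consumers and Elasticsearch could look roughly like this:

use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\Request;

class SearchController extends Controller
{
    /**
     * Accepts a deliberately limited set of query parameters, translates them
     * into an Elasticsearch query behind the scenes, and never exposes the
     * index itself to the outside world.
     */
    public function searchAction(Request $request)
    {
        $keywords = $request->query->get('q', '');
        $distance = $request->query->get('within', '100mi');

        // "search_repository" is an assumed service wrapping the ES client.
        $repository = $this->get('search_repository');
        $results = $repository->search($keywords, $distance);

        return new JsonResponse(array('results' => $results));
    }
}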

More details on how we introduced Symfony into the equation and how we integrated everything together will be the subject of my next blog post. For now we would just like to say that we are happy with our decision to split the project's scope into smaller, discrete sub-problems, because it allowed us to target each of them with a more focused solution and to expand our horizons.
