Sep 13 2018

We are building a 12 factor app with Drupal. This is part two in our series on building a 12 factor app. Today I’m talking about Factor Two: Dependency Management.

What’s up internet? Tom Friedhof here, Solutions Architect at ActiveLAMP. If you didn’t catch my last video on factor one, go check it out. I’ll walk you through how to set up a local development environment on Docker. Today, I’m picking up where we left off and continuing to build on top of the app that we started in factor one. Factor two states: Explicitly Declare and Isolate Dependencies. What are your dependencies in a Drupal app? Well, if you’re building a Drupal site, one of your dependencies is Drupal core. If you’re using any contributed modules, those are also dependencies. Basically, you can think of a dependency as any code that you’re not maintaining yourself.

You don’t want these dependencies in your working repository; keep them out. So how do you bring those dependencies into your app? That’s what a dependency manager is for. Today, I’m going to show you how to use Composer to pull down Drupal core and any contributed modules that we want to use.

One of the other benefits you get from keeping your dependencies out of your working repository is that it makes upgrading them a piece of cake. If you want to update Drupal core, it’s a matter of changing a version number in the composer.json file and rerunning your build. Composer will bring down the dependency you asked for and put it where it’s supposed to go. Only the custom code unique to your application should be in your repository. Let’s jump back into a demo with the Drupal app that we started in the last video and start defining some dependencies with Composer.

All right, so here we are back in our 12-factor-demo app that we started in the last video. As a reminder, if you want to go grab this off of GitHub, you can find it in the activelamp repository as 12-factor-demo. Now, I have added one commit since the last video, and if we go take a look at it, essentially all I did was update to the latest Docker Sync and remove some settings that we no longer need due to the upgrade. All right, so let’s go ahead and boot this app up. If we head over to our terminal and type make start, that’ll actually start the stack up. Just to remind you what that command actually did: we have a Makefile over here with the different commands we can execute using make, and make start essentially executes this bundle exec docker-sync-start command.
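
For context, the Makefile targets look roughly like this; the docker-sync-stack commands are my assumption, since the exact recipe isn’t shown on screen:

start:
	bundle exec docker-sync-stack start   # boots docker-sync and docker-compose together

clean:
	bundle exec docker-sync-stack clean   # tears down containers, volumes, and sync data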

What that did over here is start up our Docker stack with Docker Sync. You can see we have three containers that started up here: the DB container, the PHP container, and the Nginx container. There was also a sync container that started as well, called drupal-sync. All of that was covered in the first video; if you’re not familiar with what I’m talking about, definitely go check out that first video, because that’s where we covered all this information. And just real quick, if I go into this Docker Compose file, you’ll see that’s where those containers are defined as well.

All right, so in this video, we’re going to introduce the Composer dependency manager for PHP. If we hop over to a browser and type in Composer, you’ll see the first link in Google is this Dependency Manager for PHP. This is how we’re going to pull in our dependencies for Drupal. As I mentioned earlier, when you’re building a Drupal site, Drupal core is actually a dependency, so we’re going to pull in Drupal core with Composer. Now, in the previous video, the way we were pulling in Drupal core was using this Drupal image from Docker Hub, which included Drupal core. We’re going to continue to use this image, but we’re not going to use any of the code in it.

Now, you could just switch this over to a PHP image, but for simplicity’s sake, I’m going to keep this particular image and just pull in our own code, so that we’re not tackling any Docker configuration in this video. We just want to mess with the Composer dependency manager for PHP. There’s a Composer template you can use to do this. If you just Google for Composer Template Drupal, you’ll see that there are some docs here for creating a Drupal Composer project. That’s exactly what we’re going to do in our code base. So let’s grab this command, and we’ll head back into our terminal here. Let’s actually get to the correct directory.

All right. Right now, the way this directory structure is set up on our local file system is that in the source directory we have this web directory, and inside that web directory we have the profiles directory, which just has our custom code in it. We need to pull our custom code out so that we can use the Composer template here, and our custom code lives in this activelamp directory. So for now, let’s grab that activelamp directory and move it out of the source directory. We’ll just say mv web/profiles/activelamp and move that up one directory, and then we’re going to get rid of the source directory completely.

All right, so now we can paste that Composer command that we got from the Composer template docs. Instead of creating this project in a directory called [inaudible 00:07:02], we’re going to create it in a directory called source. Basically, what this is doing is checking out the drupal-composer/drupal-project template and putting it into this source directory. Now, with the Composer template, you get quite a bit of functionality. That’s probably a topic for another video, and we really won’t get into everything this template gives you, but the README for the Composer Template for Drupal projects on GitHub is definitely worth checking out. Let’s see how far this has gotten.
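
For reference, the create-project command from the template’s README looked roughly like this at the time; the exact version constraint and flags may differ slightly from what I pasted:

composer create-project drupal-composer/drupal-project:8.x-dev source --stability dev --no-interaction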

All right, so now we have Drupal core in our source directory, so if we head back to our code editor, we should see Drupal core there. Like I was saying earlier, dependencies are the files you don’t manage yourself, and you don’t want them in your repository. So let’s see what actually is in our repository. If we type in git status, these are all the files that would get committed to our repository. Now, some of these files come from what’s called the Drupal scaffold. If I go back over to this Composer Template for Drupal projects, the Composer template uses something called the Drupal scaffold, which basically creates files like index.php or update.php for you when you run a composer update.

If I click into this project here, it will tell us which files are provided by the scaffold, so we can add those files to our .gitignore as well, so that we’re not adding them to our project repository. Let’s go into the .gitignore file, head down to the bottom, add a comment that says ignore scaffold files, and paste the list in there. A few of the pasted lines aren’t file names, so we’ll remove those.
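
The entries end up along these lines; this is a sketch based on the scaffold’s file list, not a verbatim copy of our .gitignore:

# Ignore scaffold files
web/.htaccess
web/index.php
web/update.php
web/robots.txt
web/web.config
web/sites/default/default.settings.php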

Now, let’s go ahead and save this, then look at the terminal again and see if it looks a little better. Okay, there are fewer files being added now, and these are files we actually do want to add. Some of these are empty directories, like the modules directory and the themes directory, if we look in there. Those empty directories need to be in place so that Composer has a directory to place modules in when we ask for a module dependency from drupal.org. This activelamp directory right here we need to move into the profiles directory, so let’s do that real quick.

This is going to go inside the source/web/profiles directory. All right. Now this looks like it’s ready to be committed. We’re not going to commit it just yet; we’re going to finish setting up our stack first. Now that we have the entire Drupal root inside this source directory, we need to update our Docker Sync config files. Let’s go in there, because before, Docker Sync was only syncing web/profiles into the container, and now we want to sync all of source into the container. I’m just going to delete that, so basically this whole directory is going to get synced into the Docker container.

Let me go to our docker-compose dev file; that sync container is mounted in our PHP container here. We actually don’t want it at html/profiles anymore, we just want it at /var/www. In the docker-compose file, let’s see: we are sharing a volume with the Nginx container, /var/www/html. We actually need this to say web now, because the Composer template creates a web directory, not an html directory. Let’s go ahead and change that real quick. I’m not going to change it in the docker-compose.yml file; I’m going to change it in the docker-compose.dev.yml file, because this is really for dev environments. When we make these containers into shipping containers that we can push to production, we’ll spend a little more time on the docker-compose.yml file. For now, I’m just going to create the volume in docker-compose.dev.yml so that we at least have the volume set up in our dev environment.
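
To give you an idea, the relevant pieces of docker-compose.dev.yml end up looking something like this; the service and volume names are assumptions based on what’s described in the video:

version: '2'
services:
  php:
    volumes:
      - drupal-sync:/var/www:nocopy
  nginx:
    volumes:
      - drupal-sync:/var/www:nocopy
volumes:
  drupal-sync:
    external: true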

Let’s change this to Nginx and back this up, like so. All right, so now our Nginx container gets the same mount the PHP container has to that drupal-sync container created by Docker Sync. So, the Composer template sets up the Drupal doc root in a web directory, and we previously had our Nginx server configured to look at the html directory. If we go in here, we can see we’re putting a configuration file on the Nginx server that comes from our config directory. In there, we have a root specified as /var/www/html. We need to change this to web so that it actually picks up the correct directory that we’re syncing in.
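
The Nginx change itself is a one-line edit to the root directive in the site config:

# was: root /var/www/html;
root /var/www/web;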

That should be all we need to do to get the stack working again. I’m going to go back to the terminal, stop this by hitting Ctrl+C, and then run make clean. This will basically clean out the containers and the volumes and let us start from a clean slate. Then I can type make start, and this will boot up the containers again from scratch, create the docker-sync container, and sync the files again. Once that’s done, we should be able to hit the browser and see the latest version of Drupal that we pulled down with Composer.

All right. Now that our stack is back up and fully booted, let’s hit the browser and see if we can get Drupal to load. Just for review, the Nginx server is listening on port 7080, so let’s hit that and see what happens. I’ll open up a new tab here, go to localhost:7080, and there we go. There is our Drupal install. We’re running the 8.5.6 version of Drupal. And again, we are in that activelamp profile, so it’s picking up the profile that we put into that directory, and now we can install this as we did before.

One thing that the Drupal Composer template does for you is put your configuration sync directory outside of your files directory, so you need to actually create this config sync directory. For now, I’m going to create it manually. When you’ve got this on your web server, you want to make sure the web server has access to do this itself, but I’m just going to do it manually right now. I’m going to go into this source directory and create a config directory. And for now, I’m just going to make it world readable, but really, this should only be readable by your web server.
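
Off camera, that amounts to something like the following; the exact path and permissions are my assumption, since they aren’t shown in full:

mkdir -p source/config/sync
chmod -R 777 source/config   # wide open for a local demo only; lock this down on a real server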

Now if I go back to the install screen, I should be able to hit try again, and there we go. Now I can actually install this. Our credentials are Drupal, and this is on a server called [inaudible 00:16:35].

All right, so there’s our freshly installed site. Just to show that it did install the profile, I’ll look for the 12 factor module that we created in video one, and it is, in fact, here. Here’s where Composer really becomes a huge benefit when you’re using the Composer template for Drupal: when you want to add new modules, all you have to do is run a composer require statement, and it will go out to drupal.org, download that dependency, record it in your composer.json file, and place it in the correct spot in your Drupal install. Let’s go ahead and give that a shot. Back in our terminal, in the source directory of the code base, I can run a composer require. Let’s pull down the Drupal JSON API module.
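
From the source directory, that’s a single command; drupal/jsonapi is the package name for the JSON API module on drupal.org:

composer require drupal/jsonapi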

What this did is go out to drupal.org, grab the latest version of the JSON API module, and put it in our Drupal site. Now, if we go back to our code base and go inside of our web directory, into the modules directory, into contrib, we can see there’s the JSON API module. So let’s verify that we actually see this in our code base. Back in the terminal, just to show you: when we add new files, the sync is automatically watching, and so that synced over to our code base.

Let’s get into the site again and hit the refresh button here. If I type in JSON API, that should show up, and there, in fact, it is. Another benefit that we get with Composer is that if we need to make version changes, it’s really simple, because it’s all handled in a manifest file called composer.json. Let me go back to the code base and show you what that looks like. In our source directory, we have a composer.json file that lists all of our dependencies. Most of these dependencies, actually pretty much all of them, came from the Composer template that we were looking at earlier.

If we want to specify a particular version of Drupal, we can come over here and update this Drupal core version and put in whatever version we want. If, for example, we wanted the latest version of 8.4, we could come in here and type in 8.4. If we wanted to require a specific version of 8.5, say 8.5.5, we can basically take away this tilde and then run a composer update.
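
In other words, the require line in composer.json goes from a tilde constraint to an exact version, and then we update core; a sketch (the original constraint in the template may differ slightly):

# In composer.json, change:
#   "drupal/core": "~8.5"
# to:
#   "drupal/core": "8.5.5"
composer update drupal/core --with-dependencies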

You can see here that it took Drupal 8.5.6 and moved it down to 8.5.5, and the scaffolding that I was showing a little earlier downloaded the 8.5.5 versions of those files and put them in the correct spot. Now if we go back into the Drupal site and come under the reports, the database is already updated for 8.5.6, so this isn’t a really good example. Let’s go ahead and repopulate that database so I can finish demonstrating that this did in fact work.

Now again, if you’re moving forward in versions, this is likely not going to be an issue, but because there were likely schema changes between 8.5.5 and 8.5.6, the database installed at 8.5.6 doesn’t match the downgraded code, and that’s likely why this is happening. Let’s try something here.

Let me just delete the database so that we can reinstall. I’m going to stop the stack and then do a docker volume ls and grep for 12. That doesn’t work; let’s grep for factor. There we go. I’m going to delete the data here: delete the container that’s using that data, and then delete the data itself. Now if we start this again, it’ll recreate that MySQL database, and we should be able to install 8.5.5 from scratch. Let’s see what happens.
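
The terminal sequence is roughly this; the container and volume names are placeholders for whatever the grep turns up:

docker volume ls | grep factor    # find the MySQL data volume
docker rm <db-container>          # remove the container using the volume first
docker volume rm <volume-name>    # then remove the volume itself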

There we go. There’s 8.5.5. As you can see, using a dependency manager makes managing your dependencies much easier than doing it by hand; without one, you would have to move those files in and out of place manually. Also, one of the benefits we get is that none of these dependencies, like the JSON API module and Drupal core, are actually committed to your repository. If we look at this repository, we can see the only things that will get committed are these files; we don’t see Drupal core, and we don’t see the JSON API module in there. It’s just the files that we really need to manage. Let me go ahead and add those files and push that up to our repo on GitHub.

As you can see, we’ve got a lean code base that we’re managing in our repository, and we’ve got an easy way to pull in updated dependencies with Composer. Not only that, we’ve even specified our infrastructure as a dependency of our app; your code needs something to run on, right? By executing a few commands, we’re able to build our app and have it served by Docker containers that we’ll eventually ship to production. That’s all I’ve got for you today. If you liked this video, make sure to give us a thumbs up and hit that subscribe button. See you next time.

Tom Friedhof, Senior Software Engineer

Tom has been designing and developing for the web since 2002 and got involved with Drupal in 2006. Previously he worked as a systems administrator for a large mortgage bank, managing servers and workstations, which is where he discovered his passion for automation and scripting. In his free time he enjoys camping with his wife and three kids.


Aug 27 2018

This post is part 5 in the series “Hashing out a docker workflow”. I have resurrected this series from over a year ago; if you want to check out the previous posts, you can find the first post here. Although the beginning of this blog series pre-dates Docker Machine, Docker for Mac, and Docker for Windows, the Docker concepts still apply; we’re just not using Vagrant any more. Instead, check out the Docker Toolbox. There isn’t a need for Vagrant any longer.

We are going to take the Drupal image that I created in my last post, “Creating a deployable Docker image with Jenkins”, and deploy it. You can find the image we created up on Docker Hub; that is where we pushed it last time. You have several options for deploying Docker images to production, whether manually, with a service like AWS ECS, OpenShift, etc. Today, I’m going to walk you through a deployment process using Kubernetes, also known simply as k8s.

Why use Kubernetes?

There is an abundance of options out there for deploying Docker containers to the cloud easily. Most of them provide a nice UI with a form wizard that takes you through deploying your containers. So why use k8s? The biggest advantage, in my opinion, is that Kubernetes is agnostic of the cloud that you are deploying on. This means if/when you decide you no longer want to host your application on AWS, or whatever cloud you happen to be on, and instead want to move to Google Cloud or Azure, you can pick up your entire cluster configuration and move it very easily to another provider.

Obviously there is the trade-off of needing to learn yet another technology (Kubernetes) to get your app deployed, but you also won’t have vendor lock-in when it is time to move your application to a different cloud. Some of the other benefits worth mentioning are the large community, all the add-ons, and the ability to keep all of your cluster/deployment configuration in code. I don’t want to turn this post into the benefits of Kubernetes over others, so let’s jump into some hands-on and start setting things up.

Set up a local cluster.

Instead of spinning up servers in a cloud provider and paying for them while we explore k8s, we are going to set up a cluster locally and configure Kubernetes without paying a dime out of pocket. Setting up a local cluster is super simple with a tool called Minikube. Head over to the Kubernetes website and get that installed. Once you have Minikube installed, boot it up by typing minikube start. You should see something similar to what is shown below:

$ minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
 160.27 MB / 160.27 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

This command set up a virtual machine on your computer, likely using VirtualBox. If you want to double-check, pop open the VirtualBox UI to see the new VM created there. This virtual machine has all the necessary components loaded onto it to run a Kubernetes cluster. In k8s speak, each virtual machine is called a node. If you want to log in to the node to explore a bit, type minikube ssh. Below I have ssh’d into the machine and run docker ps. You’ll notice that this VM has quite a few Docker containers running to make up the cluster.

$ minikube ssh
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ docker ps
CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS               NAMES
aa766ccc69e2        k8s.gcr.io/k8s-dns-sidecar-amd64           "/sidecar --v=2 --lo…"   5 minutes ago       Up 5 minutes                            k8s_sidecar_kube-dns-86f4d74b45-kb2tz_kube-system_3a21f134-a637-11e8-894d-0800273ca679_0
6dc978b31b0d        k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64     "/dnsmasq-nanny -v=2…"   5 minutes ago       Up 5 minutes                            k8s_dnsmasq_kube-dns-86f4d74b45-kb2tz_kube-system_3a21f134-a637-11e8-894d-0800273ca679_0
0c08805e8068        k8s.gcr.io/kubernetes-dashboard-amd64      "/dashboard --insecu…"   5 minutes ago       Up 5 minutes                            k8s_kubernetes-dashboard_kubernetes-dashboard-5498ccf677-hvt4f_kube-system_3abef591-a637-11e8-894d-0800273ca679_0
f5d725b1c96a        gcr.io/k8s-minikube/storage-provisioner    "/storage-provisioner"   6 minutes ago       Up 6 minutes                            k8s_storage-provisioner_storage-provisioner_kube-system_3acd2f39-a637-11e8-894d-0800273ca679_0
3bab9f953f14        k8s.gcr.io/k8s-dns-kube-dns-amd64          "/kube-dns --domain=…"   6 minutes ago       Up 6 minutes                            k8s_kubedns_kube-dns-86f4d74b45-kb2tz_kube-system_3a21f134-a637-11e8-894d-0800273ca679_0
9b8306dbaab7        k8s.gcr.io/kube-proxy-amd64                "/usr/local/bin/kube…"   6 minutes ago       Up 6 minutes                            k8s_kube-proxy_kube-proxy-dwhn6_kube-system_3a0fa9b2-a637-11e8-894d-0800273ca679_0
5446ddd71cf5        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_storage-provisioner_kube-system_3acd2f39-a637-11e8-894d-0800273ca679_0
17907c340c66        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kubernetes-dashboard-5498ccf677-hvt4f_kube-system_3abef591-a637-11e8-894d-0800273ca679_0
71ed3f405944        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kube-dns-86f4d74b45-kb2tz_kube-system_3a21f134-a637-11e8-894d-0800273ca679_0
daf1cac5a9a5        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kube-proxy-dwhn6_kube-system_3a0fa9b2-a637-11e8-894d-0800273ca679_0
9d00a680eac4        k8s.gcr.io/kube-scheduler-amd64            "kube-scheduler --ad…"   7 minutes ago       Up 7 minutes                            k8s_kube-scheduler_kube-scheduler-minikube_kube-system_31cf0ccbee286239d451edb6fb511513_0
4d545d0f4298        k8s.gcr.io/kube-apiserver-amd64            "kube-apiserver --ad…"   7 minutes ago       Up 7 minutes                            k8s_kube-apiserver_kube-apiserver-minikube_kube-system_2057c3a47cba59c001b9ca29375936fb_0
66589606f12d        k8s.gcr.io/kube-controller-manager-amd64   "kube-controller-man…"   8 minutes ago       Up 8 minutes                            k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_ee3fd35687a14a83a0373a2bd98be6c5_0
1054b57bf3bf        k8s.gcr.io/etcd-amd64                      "etcd --data-dir=/da…"   8 minutes ago       Up 8 minutes                            k8s_etcd_etcd-minikube_kube-system_a5f05205ed5e6b681272a52d0c8d887b_0
bb5a121078e8        k8s.gcr.io/kube-addon-manager              "/opt/kube-addons.sh"    9 minutes ago       Up 9 minutes                            k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_3afaf06535cc3b85be93c31632b765da_0
04e262a1f675        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_kube-apiserver-minikube_kube-system_2057c3a47cba59c001b9ca29375936fb_0
25a86a334555        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_kube-scheduler-minikube_kube-system_31cf0ccbee286239d451edb6fb511513_0
e1f0bd797091        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_kube-controller-manager-minikube_kube-system_ee3fd35687a14a83a0373a2bd98be6c5_0
0db163f8c68d        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_etcd-minikube_kube-system_a5f05205ed5e6b681272a52d0c8d887b_0
4badf1309a58        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_kube-addon-manager-minikube_kube-system_3afaf06535cc3b85be93c31632b765da_0

When you’re done snooping around inside the node, log out of the session by typing Ctrl+D. This should take you back to a session on your local machine.

Interacting with the cluster

Kubernetes is managed via a REST API; however, you will find yourself interacting with the cluster mainly through a CLI tool called kubectl. We issue kubectl commands, and the tool generates the necessary Create, Read, Update, and Delete requests for us and executes them against the API. It’s time to install the CLI tool; check out the docs here to install it on your OS.

Once you have the command line tool installed, it should be automatically configured to interface with the cluster you just set up with Minikube. To verify, run a command to see all of the nodes in the cluster: kubectl get nodes.

$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     master    6m        v1.10.0

We have one node in the cluster! Let’s deploy our app using the Docker image that we created last time.

Writing Config Files

With the kubectl CLI tool, you can define all of your Kubernetes objects directly, but I like to create config files that I can commit to a repository and use to manage changes as the cluster expands. For this deployment, I’ll take you through creating 3 different k8s objects. We will explicitly create a Deployment object, which will implicitly create a Pod object, and we will create a Service object. For details on what these 3 objects are, check out the Kubernetes docs.

In a nutshell, a Pod is a wrapper around a Docker container, and a Service is a way to expose a Pod, or several Pods, on a specific port to the outside world. Pods are only accessible inside the Kubernetes cluster; the only way to reach anything running in a Pod is to expose it with a Service. A Deployment is an object that manages Pods and ensures they are healthy and up. If you configure a Deployment to have 2 replicas, the Deployment will ensure 2 Pods are always up, and if one crashes, Kubernetes will spin up another Pod to match the Deployment definition.
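
You can see that self-healing behavior for yourself once the Deployment below is running; delete any Pod and watch a replacement appear (pick a real name from kubectl get po):

$ kubectl delete pod <pod-name>
$ kubectl get po    # a replacement Pod is spun up to maintain the replica count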

deployment.yml

Head over to the API reference and grab the example config file: https://v1-10.docs.kubernetes.io/docs/reference/generated/kubernetes-api.... We will modify the config file from the docs to suit our needs. Change the template to look like the yml below (I changed the image, app, and name properties):

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: deployment-example
spec:
  # 3 Pods should exist at all times.
  replicas: 3
  template:
    metadata:
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: drupal
    spec:
      containers:
      - name: drupal
        # Run this image
        image: tomfriedhof/docker_blog_post

Now it’s time to feed that config file to the Kubernetes API; we will use the CLI tool for this:

$ kubectl create -f deployment.yml

You can check the status of that deployment by asking k8s for all Pod and Deployment objects:

$ kubectl get deploy,po

Once everything is up and running you should see something like this:

$ kubectl get deploy,po
NAME                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/deployment-example   3         3         3            3           3m

NAME                                    READY     STATUS    RESTARTS   AGE
po/deployment-example-fc5d69475-dfkx2   1/1       Running   0          3m
po/deployment-example-fc5d69475-t5w2j   1/1       Running   0          3m
po/deployment-example-fc5d69475-xw9m6   1/1       Running   0          3m

service.yml

We have no way of accessing any of those Pods in the deployment yet. We need to expose the Pods using a Kubernetes Service. To do this, grab the example file from the docs again and change it to the following: https://v1-10.docs.kubernetes.io/docs/reference/generated/kubernetes-api...

kind: Service
apiVersion: v1
metadata:
  # Unique key of the Service instance
  name: service-example
spec:
  ports:
    # Accept traffic sent to port 80
    - name: http
      port: 80
      targetPort: 80
  selector:
    # Loadbalance traffic across Pods matching
    # this label selector
    app: drupal
  # Create an HA proxy in the cloud provider
  # with an External IP address - *Only supported
  # by some cloud providers*
  type: LoadBalancer

Create this service object using the CLI tool again:

$ kubectl create -f service.yml

You can now ask Kubernetes to show you all 3 objects that you created by typing the following:

$ kubectl get deploy,po,svc
NAME                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/deployment-example   3         3         3            3           7m

NAME                                    READY     STATUS    RESTARTS   AGE
po/deployment-example-fc5d69475-dfkx2   1/1       Running   0          7m
po/deployment-example-fc5d69475-t5w2j   1/1       Running   0          7m
po/deployment-example-fc5d69475-xw9m6   1/1       Running   0          7m

NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
svc/kubernetes        ClusterIP      10.96.0.1       <none>        443/TCP        1h
svc/service-example   LoadBalancer   10.96.176.233   <pending>     80:31337/TCP   13s

You can see under the services at the bottom that port 31337 was mapped to port 80 on the Pods. Now if we hit any node in the cluster (in our case it’s just the one VM) on port 31337, we should see the Drupal app that we built from the Docker image created in the last post. Since we are using Minikube, there is a command to open a browser on the specific port of the service; type minikube service <name-of-the-service>:

$ minikube service service-example

This should open up a browser window, and you should see the installation screen for Drupal. You have successfully deployed the Docker image that we created to a production-like environment.
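
When you’re done experimenting, the same config files can tear everything back down, and Minikube can stop the VM:

$ kubectl delete -f service.yml
$ kubectl delete -f deployment.yml
$ minikube stop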

What is next?

We have just barely scratched the surface of what is possible with Kubernetes. I showed you the bare minimum to get a Docker image deployed on Kubernetes. The next step is to deploy your cluster to an actual cloud provider. For further reading on how to do that, definitely check out the kops project.

If you have any questions, feel free to leave a comment below. If you want to see a demo of everything that I wrote about on the ActiveLAMP YouTube channel, let us know in the comments as well.

Aug 15 2017

So you just finished building an awesome new website on Drupal, but now you’ve run into a new dilemma: how do you optimize the site for search engines? Search engine optimization, or SEO, can be overwhelming, but don’t let that cause you to ignore the things you can do to help drive traffic to your website. There’s nothing worse than spending countless hours developing a web application, only to find out that users aren’t able to find your site. That can be extremely frustrating, as well as devastating, if your company or business relies heavily on organic traffic.

Now, there are countless philosophies of SEO, many of which are well-educated assumptions about what Google is looking for. The reality is that no one knows exactly how Google’s algorithm is calculated, and it doesn’t help that the algorithm is constantly being updated. Luckily, there are a few best practices that are accepted across the board, most of which Google has confirmed as contributing factors to search engine ranking. This blog is going to focus on a few of those best practices and the modules we have found helpful in both our Drupal 7 and Drupal 8 projects.

So, without further ado, here is our list of Drupal modules you should consider using on your site to help improve your SEO:

XML Sitemap Module

As the name suggests, XML Sitemap allows you to effortlessly generate a sitemap for your website. A sitemap allows Google and other search engines, like Bing and Yahoo, to easily find and crawl pages on your site. Is a sitemap necessary? No. But if it helps the pages of your site become easily discoverable, why not reduce the risk of leaving pages unindexed? This is especially important if you have a large site with hundreds or even thousands of pages. Having a sitemap also provides search engines with valuable information, such as how often a page is updated and its significance relative to other pages on your site.

XML Sitemap allows you to generate a sitemap with a click of a button, and best of all, you can configure it to periodically regenerate the sitemap, picking up any new pages you’ve published on your Drupal site. Once your website has a sitemap, it is recommended to submit it in Google Search Console, and if you haven’t claimed your website on Google Search Console yet, I would highly advise doing so, as it provides helpful insights such as indexing information, critical issues, and more.

Metatag Module

The next Drupal module is one that can really help boost your search engine ranking and visibility. Metatag is a powerful module that gives you the ability to update a wide range of meta tags on your site. A meta tag is an HTML tag that contains valuable information search engines use to determine the relevance of a page when calculating search ranking. The more information available to search engines such as Google, the better the chances that your pages will rank well. The Metatag module lets you easily update some of the more popular tags, such as the meta description, meta content type, title tag, viewport, and more.

Adding and/or updating your meta tags is the first step of good SEO practice. I’ve come across many sites that pay little to no attention to their meta tags. Luckily, the Metatag module for Drupal can help you easily boost your SEO, and even if you don’t have time to go through and update your meta tags manually (which is recommended), the module also has a feature to generate your tags automatically.

Real-Time SEO for Drupal Module

The Real-Time SEO for Drupal module is a powerful tool on its own, but it is even better when paired with the Metatag module we just finished discussing. This module takes into account many SEO best practices and gives you a real-time analysis, ensuring that your content is well optimized for search engines. It will inform you if your content is too short, tell you how readable your posts are, and provide a snapshot of how your page will appear in Google. It also reports missing or potentially weak tags, which is why I mentioned that this module and the Metatag module work extremely well together: Real-Time SEO for Drupal lets you know how to improve your meta tags, and with the Metatag module you can quickly update them and watch in real time how the changes affect your SEO.

The Real-Time SEO for Drupal module is a simple, yet incredibly useful tool in helping you see the SEO health of your pages. If you are just getting into SEO, this is a great place to start, and even if you’re a seasoned pro this is a nice tool to have to remind you of any meta tags or keyword optimization opportunities you may be missing.

Google Analytics Module

The final module is the Google Analytics module. Google Analytics is by far the most widely used analytics platform. The invaluable information it provides, the numerous tools available, and the integrations it allows make it a requirement for anyone looking to improve the SEO of their Drupal website. This Drupal module is extremely convenient, as it does not require a developer to mess with any of the site’s code. After installing the module, all you have to do is enter the web property ID that is provided to you after you set up your account on Google Analytics.
From the Google Analytics module UI, you have a number of helpful options, such as which domains to track, which pages to exclude, adjusting page roles, tracking clicks and downloads, and more. The Google Analytics module for Drupal is another great tool to add to your tool belt when working to improve your SEO.

Final Thoughts

This list of helpful SEO modules for your Drupal 7 or 8 site could easily have been much longer, but these are a few key modules to help you get started. SEO is something that should not be ignored; as I mentioned at the beginning of the blog, it’s a shame to build a site only to find that no one is actually visiting it, and using these modules properly can definitely help prevent that. If you would like to learn about other great modules to help your SEO, please leave a comment below and I’ll write a follow-up blog.

Aug 02 2017

When migrating from Drupal 7 to Drupal 8, it is important to remember to migrate the redirects as well. Without them, users will not find your content if, for example, an old redirect was shared on social media. Using the Migrate Plus module, it is quite simple to write a migration for the redirects, and Migrate Plus contains some good examples of how to get started writing your custom migrations.

Write your node migrations

I am going to assume that you have already written migrations for some content types and have the migration group written. Once those migrations have been written and run, your database should contain a migrate_map_{group}_{type} table for each of them. This is where we will find each imported node’s new id, which is necessary for importing the redirects.

Write the yml file for redirect migrations

For example, let’s say we have a module called blog_migrations. In that module we have a group for blog and migrations for a news and an opinion content type. Inside the config/install directory, add a new yml file called migrate_plus.migration.blog_redirect.yml, where blog is the name of the group being migrated. This file gives the migration an id and a label, and defines the process to use.

id: blog_redirect
label: Path Redirect
migration_group: blog
migration_tags:
  - Drupal 7
source:
  # This is the id of the source we will add. That will live
  # in `/src/Plugin/migrate/source`.
  plugin: blog_redirect
  key: blog
process:
  rid: rid
  uid: uid
  redirect_source/path: source
  redirect_source/query:
    # `RedirectSourceQuery.php` is the process plugin to use.
    plugin: d7_redirect_source_query
    source: source_options
  redirect_redirect/uri:
    # `PathRedirect.php` is the process plugin to use.
    plugin: d7_path_redirect
    source:
      - redirect
      - redirect_options
  language:
    plugin: default_value
    source: language
    default_value: und
  status_code: status_code
destination:
  plugin: entity:redirect

Write the migrate source

Create the file BlogRedirect.php in the module’s src/Plugin/migrate/source folder.

<?php

namespace Drupal\blog_migrations\Plugin\migrate\source;

use Drupal\Core\Database\Database;
use Drupal\migrate\Row;
use Drupal\redirect\Plugin\migrate\source\d7\PathRedirect;

/**
 * Drupal 7 path redirect source from database.
 *
 * @MigrateSource(
 *  id = "blog_redirect"
 * )
 */
class BlogRedirect extends PathRedirect {

  /**
   * {@inheritdoc}
   */
  public function query() {
    // Select path redirects.
    $query = $this->select('redirect', 'p')->fields('p')
      ->condition('redirect', '%user%', 'NOT LIKE');

    return $query;
  }

  /**
   * {@inheritdoc}
   */
  public function prepareRow(Row $row) {
    // Get the current status code and set it.
    $current_status_code = $row->getSourceProperty('status_code');
    $status_code = $current_status_code != 0 ? $current_status_code : 301;
    $row->setSourceProperty('status_code', $status_code);

    $current_redirect = $row->getSourceProperty('redirect');
    $explode_current_redirect = explode("/", $current_redirect);

    $map_blog_array = array(
      'news',
      'opinion'
    );
    // Determine if the path is redirected to a /node/{id} path.
    if ($explode_current_redirect[0] == 'node') {
      // Determine the content type for the node.
      $resource_type = $this->getDatabase()
        ->select('node', 'n')
        ->fields('n', ['type'])
        ->condition('nid', $explode_current_redirect[1])
        ->execute()
        ->fetchField();

      // Check that the type is in the node types we want to migrate for.
      if (in_array($resource_type, $map_blog_array)) {
        // Gather the information about where the node is now.
        $new_node_id = Database::getConnection('default', 'default')
          ->select('migrate_map_blog_' . $resource_type, 'm')
          ->fields('m', ['destid1'])
          ->condition('sourceid1', $explode_current_redirect[1])
          ->execute()
          ->fetchField();

        // Set the new redirect.
        $new_redirect = 'node/' . $new_node_id;
        $row->setSourceProperty('redirect', $new_redirect);
      }
    }
  }
}

Run the migrations

Using the config_devel module, import the configuration into the active store so that you can run the migration:

drush cdi1 /modules/custom/blog_migrations/config/install/migrate_plus.migration.blog_redirect.yml

Then run the actual migration:

drush mi blog_redirect

After running that, you should now have migrated both content types’ redirects, pointing at the new node ids they were given! Any questions? Let us know in the comments below.

Jun 15 2017

Tom Friedhof: There’s a lot of hype around integrating Pattern Lab with your Drupal theme these days, particularly because Drupal 8’s template engine is now Twig, which is one of the template engines Pattern Lab uses. The holy grail of having a living style guide and component library is now a lot more feasible! But what about Drupal 7 sites? Twig doesn’t exist in Drupal 7. Today I’m going to show you something we’re working on at ActiveLAMP to implement Pattern Lab templates in Drupal 7.

Hey guys, I’m Tom Friedhof, a solutions architect here at ActiveLAMP. Let me first start off by defining what I mean when I say living style guide and component library, since this idea can mean different things to different people. A living style guide and component library is the HTML, CSS, and JavaScript that document the design of a user interface. The “living” part means that the style guide should be constantly in sync with the actual app that implements the interface as the design improves or changes.

How do you easily keep the real app and the style guide in constant sync? That could be a lot of work, given that once the initial designs are done, design iterations typically happen directly in the app being built, making the style guide obsolete and outdated.

That’s where the promise of Pattern Lab integration with Drupal comes in. You can easily keep the style guide in sync if your app depends on the style guide for all of its HTML, CSS, and JavaScript. That’s why there is so much hype around building “Pattern Lab” themes in Drupal 8 right now: Drupal 8’s template engine is one that Pattern Lab uses, and reusing the same Twig templates that your UX designer created in Pattern Lab within Drupal is now an option.

Well, we’re still working on Drupal 7 sites, so how do we benefit from this approach in Drupal 7? To be honest, we’re still hashing out our approach. We have the process built out enough that we’re using it on a new theme we’re developing for a client, but we’re still constantly iterating on the process and improving it as we run into things.

What I want to show you guys today is the direction we’re going, and I’m hoping to get your feedback in the comments so that we can continually improve and iterate on this system. First off, we decided not to use the Twig version of Pattern Lab. We spent half a day trying to get Twig working in Drupal 7 with the Twig for Drupal 7 module and realized we’d be going down a pretty deep rabbit hole just to make Twig work in D7.

Rather than fight Drupal 7 and Twig, we decided to use a much simpler template engine called Mustache. Mustache is a language-agnostic template engine, and there is a really nice PHP implementation of it. With that said, we installed the gulp version of Pattern Lab, which uses Mustache templates in JavaScript. We now have the ability to share templates.
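
To give you a feel for the Mustache PHP implementation (the mustache/mustache package on Packagist), here is a minimal, self-contained example; it isn’t code from our theme:

<?php

require 'vendor/autoload.php';

$mustache = new Mustache_Engine();

// Render a template string against an array of data.
echo $mustache->render('Hello, {{ name }}!', array('name' => 'Drupal'));
// Prints: Hello, Drupal!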

I’m going to jump into a demo here in a second; however, I’m not going to do a deep dive into how Pattern Lab works or how Drupal and Panels work. I’ll dive deeper into those details in future videos if you’re interested; leave a comment if you want to see that stuff and we’ll put it on our list of content to share. Today I’m going to give you guys a 10,000-foot view of how things are shaping up with our Drupal 7 integration with Pattern Lab.

All right, so here we are in our Drupal 7 install. This is pretty much a vanilla Drupal installation. If I jump over to the Drupal directory, you can see here within my sites/all/modules directory all the modules that I need for today’s demo. We like to use Panels and Panels Everywhere, so what I’m going to be demoing today uses them, but the stuff I’m going to show does apply to regular template files if you don’t want to use Panels and want to stick with the core TPL system within Drupal.

One of the other things that we have in here is a theme called Hills; this is where all the magic actually happens. One thing you’ll notice in this Hills theme is that we have two directories called node_modules and vendor. We’re actually pulling in dependencies from NPM and from Composer (Packagist) into this theme. If we open up our package.json, which defines the NPM dependencies, you can see that we’re defining a dependency called hills-patternlab. This is basically the repo that holds our Pattern Lab instance; it’s the living style guide that the UX designer uses to update the patterns, update CSS, and make any changes that need to happen in the UI.

The composer.json file requires the Mustache PHP implementation. We’re using this library, for obvious reasons, to render the Mustache templates that we’re pulling in from Pattern Lab. This theme needs to be initialized with an npm install and a composer install to get these dependencies, and once you’ve done that, you’re ready to start working on the theme.
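
Bootstrapping the theme is just those two installs; the theme path here is an assumption:

$ cd sites/all/themes/hills
$ npm install && composer install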

One other thing I want to do before I actually start building this out in Drupal is show you the Pattern Lab instance. From our Hills directory, I can run npm start, and this should pull up our Pattern Lab instance. Here it is. I’m not going to go into the details of what Pattern Lab is, but essentially it’s all the components that make up a website. For example, you can see what a page header looks like, and what the header organism looks like, and all of these templates are basically Mustache templates within our Pattern Lab. Let me open up node_modules so you can actually see these templates real quick. The Pattern Lab directory structure looks like this: within the source directory, inside the patterns directory, we can go into organisms and look at what a header looks like within Pattern Lab. This is including a couple of other patterns from within Pattern Lab, so let’s see what the navigation actually looks like by going in here. This is the HTML that makes up a navigation.

This template includes other patterns within Pattern Lab; let me drill down to the primary links pattern. Here’s what our primary links look like. You can see that this outputs variables, for example href and name here, and then it includes yet another pattern within Pattern Lab; let me open that one as well. Here you can see that it outputs more variables: classes and name. These variables are actually defined within Pattern Lab’s data directory. I’m not going to go into detail about how that works, but let me just show you what it ends up rendering. You can see here’s our header organism; that primary links pattern is this here. This is basically rendering data from Pattern Lab’s data directory. If I go into the data directory real quick, within the primary-links.json file you can see the actual data it’s pulling in. If we change a label to say staff services, this rebuilds and we see staff services here. That’s essentially how Pattern Lab works with data, in a nutshell. What I’m going to show you guys is how we actually integrate this with Drupal: eventually, these Mustache templates are going to render variables from Drupal, not the data specified in this Pattern Lab data directory.
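
The shape of that data file is roughly the following; the keys (name, href, links, classes) come from what the templates reference, while the values here are illustrative:

{
  "primary-links": [
    {
      "name": "Staff Services",
      "href": "#",
      "links": [
        { "name": "A sub item", "href": "#", "classes": "" }
      ]
    }
  ]
}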

Let’s jump over to Drupal. Here’s our Drupal installation. The first thing I’m going to do is switch the theme to our Hills theme, our Hail to the Hills theme, so I’m going to enable this and set it as the default. Now I’m going to open up the homepage in a new tab and drag that over here. So now we can see what we get out of the box with this Hail to the Hills theme; there’s really nothing in it yet. (There is stuff in the theme, which I’ll get to in a second, but this is what you get once you enable it initially.) We’re using Panels Everywhere with this theme, so I’m going to go configure Panels Everywhere. Panels Everywhere gives you a site template by default, so I’m going to come over here, edit it, and add a new variant. We’ll just call this default, and I’ll come over here and choose a layout within the PL Templates layout category; I’m going to pick full width one column, hit continue, and work through this UI. Then I’m going to give it the basic content it needs to render, so that you can actually see something on the page when you visit a page within the site. We’ll create the variant here, update and save, and now let’s see what our homepage looks like.

Our homepage is starting to look a little better; we’re basically hitting the default homepage for Drupal, which just shows the title and a “no front page content has been created yet” message. You noticed that in this layout tab we had a category called PL Templates, and it’s pulling in full width and main content. Let me show you where these are defined. If I jump back into our theme, within our theme’s info file, Panels allows you to specify a directory where you define your layout plugins. The way you specify that is with the string plugins[panels][layouts], giving it a path to your directory. Let me close node_modules so this is a little easier to see. If I come into this layouts directory, you can see that I have two layouts specified here. We used the full width layout, so I’m going to jump into that first.
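
In the .info file, that declaration is a single line; a sketch, with the rest of the file elided:

; hills.info (excerpt)
name = Hail to the Hills
core = 7.x
plugins[panels][layouts] = layouts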

This isn’t a tutorial on how to create plugins, but essentially what we’re doing is creating a ctools layout plugin. We’ve called it full width one column and said the theme implementation for it is panels_full_width, and you can see that we have a panels full width template here. So when this layout is used, it will actually use this template. If we jump into it, all this template does is print out whatever is in the content area. This has nothing to do with Pattern Lab yet; this is how you set up a default template with regions in Panels Everywhere. Let’s jump back to Drupal and go into our content area. Remember, we have the default template set up now, but now let’s start pulling in some of the patterns from Pattern Lab. This pattern here, called organisms header, let’s pull that into Drupal first. I’m going to come in here and add content; I have this PL Components category over here, and we have a pattern called header. I’m going to click on that, and this header asks for four pieces of data, so I’m going to give it the data it needs. I’m going to browse for a file; that looks good, we’ll upload this, go to next, we’ll say logo, logo. Then we’ll give it a path and tell it which main menu to use; we’ll just say use this as the main menu, and for the help menu we’ll tell it to use the user menu for now, and then finish. Let’s drag this up to the top, hit update and save, and go see what happened. Back on our homepage: voila, we’ve got a header pulled from Pattern Lab. You’ll notice that the menu is not the same menu that’s coming from Pattern Lab. Why is that? Because it’s pulling the actual primary links from Drupal.
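
For reference, a ctools layout plugin like this boils down to a $plugin array plus a template file; a minimal sketch, with names matching what’s described in the video:

<?php
// layouts/panels_full_width/panels_full_width.inc (sketch)

$plugin = array(
  'title' => t('Full width one column'),
  'category' => t('PL Templates'),
  // Renders layouts/panels_full_width/panels-full-width.tpl.php,
  // which simply prints $content['content'].
  'theme' => 'panels_full_width',
  'regions' => array(
    'content' => t('Content'),
  ),
);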

If we go into the menu system here, we chose the main menu to use, the main menu here, and if we create another link here, we can say “another link” and have it go to <front>. Then when we come back to our homepage, you can see that this is actually pulling from Drupal. We have no sub-menus underneath it, which is why nothing shows underneath, but you can see that we’re actually using our own data from within Drupal.

How did we actually pull in this whole header section from Pattern Lab? Let’s go back to where we pulled that in. I’m going to go to structure, pages, and back into our site template. We had this content type that we pulled in, this PL header content type. This is a ctools content type, and you can define those with a content type plugin. Because this plugin only exists within this theme, we defined the ctools content type within the theme. The way we did that: within our info file, we specify where content types for ctools should live, and we’re saying they live in the patternlab directory, which is right here. This behavior isn’t default behavior in ctools, so we did have to patch the ctools module to do this. You can check out that patch here and leave any comments or suggestions you have. It’s a very small patch, but it basically allows us to define ctools content types within our theme, rather than having to create a module just for these content types.

Let’s look inside this patternlab directory and see what we have. The way ctools plugins work is that ctools traverses the directory you’ve defined and looks for any .inc files, reading them in while processing plugins. Within the organisms header directory, I have a file called organisms_header.inc. The content type system within ctools will pick up this file and read the variable that defines the actual plugin. You can specify other functions as well, for example to expose an edit form; here you see we have a submit handler for that edit form. But here’s where all the magic happens: a function we’ve defined called preprocess. This is where we actually map the Drupal data into data that Pattern Lab understands, and we pass this data to Mustache to actually render the pattern. Let me back up and show you what Pattern Lab is expecting to see within this content type. I’m going to open a new PhpStorm window so that I don’t have to keep scrolling up: sites/all/themes/hills/node_modules/hills-patternlab, open a new window, yes. Here is the Hills Pattern Lab directory within that Hills theme. What I’m going to do is go into the pattern that we’re actually pulling in, which is here. This pattern is pulling in data from this data file, and the hints you can get from the data file come from looking at what these patterns rely on. We’re not outputting any data here, so we need to drill down to see where data actually is being output. If we go into navigation, we can see navigation still isn’t outputting any data; it’s still just including other atoms and molecules.
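
Stripped down, the content type plugin looks something like this; the function names, pattern path, and data keys are assumptions, and the real file also defines the edit form and its submit handler:

<?php
// patternlab/organisms_header/organisms_header.inc (sketch)

$plugin = array(
  'title' => t('Header'),
  'category' => t('PL Components'),
  'edit form' => 'hills_organisms_header_edit_form',
  'render callback' => 'hills_organisms_header_render',
);

function hills_organisms_header_render($subtype, $conf, $panel_args, $context) {
  // Point Mustache at the Pattern Lab templates shipped with the theme.
  $patterns = drupal_get_path('theme', 'hills')
    . '/node_modules/hills-patternlab/source/_patterns';
  $mustache = new Mustache_Engine(array(
    'loader' => new Mustache_Loader_FilesystemLoader($patterns, array(
      'extension' => '.mustache',
    )),
  ));

  // The preprocess step maps Drupal values (logo, menus, etc.) onto the
  // keys the header pattern expects; Mustache then renders the shared template.
  $block = new stdClass();
  $block->content = $mustache->render('organisms/header/header', hills_organisms_header_preprocess($conf));
  return $block;
}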

So let’s jump into the primary links. Now primary links is starting to actually output data. We have that href variable there, we have a name variable there. But then you can also see that it’s pulling in yet another include. This is where we want to start, though. We’re using data here in this primary links component; within Pattern Lab you can specify data in this data directory. We have this primary links JSON file, and we basically specified an object with a primary links key. Now Pattern Lab is going to read in all these data files and essentially merge the objects so that you can reference them by whatever key is at the root of each object; they all get merged into the data json file. If we look here, primary links is being looped through and then the href and the name are being rendered out. If we look at this, primary links is an array with name and href. If we collapse these guys, you can see we have name, we have several links here. Staff services, work tools, news, administrative units, contact us, that all coincides with our pattern over here.
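
As a sketch, that data file might look something like this (the key name and href values are assumptions based on what's shown on screen):

{
  "primary-links": [
    { "name": "Staff Services", "href": "#" },
    { "name": "Work Tools", "href": "#" },
    { "name": "News", "href": "#" },
    { "name": "Administrative Units", "href": "#" },
    { "name": "Contact Us", "href": "#" }
  ]
}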

Within that pattern, coming back over here to primary links, you can see that it’s including the molecules dropdown, and that is here. That’s also looping through the links and using the classes variable and the name variable. So if I come back into that data file and open one of these guys up, you can see here’s the links array and there’s the name and href that is being used. It looks like we’re only specifying classes on this very last button here. If we come back here, you’ll see that that class is actually specified there, and that’s what makes it look a little bit different.

Essentially what we’re doing in Drupal is just mapping data to the data that Pattern Lab expects. Let’s jump back over to Drupal, and here’s our Drupal content type. You can see here, essentially we’re returning an array: nav bar brand, and here are our primary links. That’s what we just looked at; the primary links key is essentially creating an array that looks like the one in Pattern Lab, but built in Drupal. You can see here, hills_menu_tree, this is essentially creating the array that Pattern Lab is expecting.
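
A sketch of what that preprocess mapping could look like (hills_menu_tree() is the helper mentioned above; the other names and keys are illustrative):

<?php
// Map Drupal data into the structure the header pattern expects.
function hills_organisms_header_preprocess($conf) {
  return array(
    'navbar_brand' => array(
      'logo' => file_create_url($conf['logo']),
      'href' => $conf['path'],
    ),
    'primary_links' => hills_menu_tree($conf['main_menu']),
    'help_menu' => hills_menu_tree($conf['help_menu']),
  );
}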

I’ll show you an easier example of what that looks like as we continue to build out this page. Let’s add another pattern to this site layout. If we come into the default template and we add content, go to Pattern Lab components, I’m going to add in a footer calling card. If we come over here into molecules, then footer, we can see what that footer calling card looks like. If we come into the template for that (it’s a molecule under footer), this takes several variables: a title, phone, email, and then it loops through a social array and outputs the href and the network. If we look at the data that we defaulted in Pattern Lab, we can come over here to footer calling card and you can see that we’ve got a calling card key with the data specified under it.

We’re gonna render this in Drupal, so we created a content type that essentially has an edit form for all of this information. Let’s just fill this out: information technology, and let’s just go sure and yes. Let’s keep that there, update and save. Now there’s our footer calling card. All right, you guys get the idea there: we’re able to create content types with an edit form that collects data, and then we pass that data into Mustache and render the Pattern Lab template with the data that we are pre-processing.
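
Putting that flow into code, a render callback for this content type might look roughly like this. hills_mustache() is a hypothetical accessor for the Mustache connector, and the calling-card key mirrors the pattern's data file; both are assumptions.

<?php
// Hand the collected edit-form values to Mustache and render the pattern.
function hills_footer_calling_card_render($subtype, $conf, $args, $context) {
  $block = new stdClass();
  $block->content = hills_mustache()->render('molecules-footer-calling-card', array(
    'calling-card' => array(
      'title' => $conf['title'],
      'phone' => $conf['phone'],
      'email' => $conf['email'],
      'social' => $conf['social'],
    ),
  ));
  return $block;
}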

What if you’re working with content that isn’t going to be a ctools content type, for example nodes? Let’s create some nodes and see what that actually looks like. I’m going to come in here into configuration and we’re going to devel generate a few nodes. I can come down here, hit generate content and let’s just create 10 basic pages, we’ll generate that. Now let’s go back to our homepage here, so there we have five nodes listed on the homepage now. How do we actually style this so it looks like something? The first thing I’m gonna do is put this into a content container. Let me go over to our Pattern Lab, I’m gonna go to layouts and look at our main content. Our main content goes into a container that looks like this and the content is output inside of that container. What I’m going to do is actually create this homepage as a view so we can control the template that’s being output here. I’m gonna come over here to structure, go down to views and let’s add a new view, let’s call this homepage list, we’ll continue and edit that.

We’re gonna make this a content pane. I’m actually going to get rid of this page here, we didn’t need that; I should’ve unchecked it. Within that content pane, we’re going to render fields, and since we’re not really gonna be using the views output, uncheck that. Then we’ll also throw in the body here and limit it to 600 characters, so that’s what our view is going to look like that we’re going to use on the homepage. Let’s go ahead and save that. What I’m gonna do is create a new front page over in page manager. Within page manager, I’m gonna add a custom page, we’ll call this front page, give it the path of front, and check this box that says make this your site homepage, we’ll continue. Then I’m going to choose the layout called main content, and what that’s going to do is use the layout from Pattern Lab that uses main content; I’ll show you that here in a second.

I’ll hit continue, continue and then inside of here, we’re going to output the view that we just created. So here’s that view there, we’ll save that and hit finish. Update and save, so now we have a front page that’s going to render a view called homepage list using the layout main content. So let’s take a look to see what happened here, let’s go back to the homepage and there you go, you can see that we’re now outputting that actual view within the page content. This default title shouldn’t output here, and it’s actually being output by panels, so what we’re gonna do is disable that title. If we come back here into content and then … actually this is going to be in the site template. We’ll edit that, go into content and within the page content section, we’re going to override the title and make it nothing, update and save that. Now we’re getting a lot closer to what our styles look like in Pattern Lab.

Now the next step is to actually make this view look like something in Pattern Lab. What we’re going to do is make it look like a two column stack view. We have this data here that’s set up in a two column stack, and we’re gonna make the data from views output this template when it renders. Let’s jump back into views, go into the front page that we just created and into the content, and here’s that view. I’m going to open this cog here and edit this view in a new tab. Views gives you the ability to specify a theme file, so what we’re gonna do is specify this theme file in our theme. I’m gonna copy that and then jump into our theme over here. So here’s our Drupal theme; going into my templates directory, let’s create a views directory so that all our views templates live in the same place within our templates directory. Let’s create a file called views-view-unformatted--homepage-list.tpl.php.

Now, let’s just put in hello world so that you can see that this is actually working. When we rescan the templates, views is going to pick up that template file, as you can see now that it’s bolded. We’ll save this and refresh the homepage and you can see now it is outputting hello world, which is in our template file. How do we actually use the template that is in Pattern Lab? Let’s go back into views; this is where the magic happens in this theme. We have a variable exposed called ‘m’, which is basically the Mustache connector to Pattern Lab. On that connector, we have a method called render, and this is where we specify the actual template that we want to use within Mustache. There is a naming convention to this and we’ll document what that naming convention is, but essentially what you need to do is specify what type of pattern it is (this is an organism, so it’s in organisms) and then what the name of the template is (this is a two column stack). That’s really it, that’s all you have to do to render this template, so let’s go ahead and save this and then look at our view here.

That didn’t render anything; let’s go back to our template, and you can see that we’re actually not printing anything out, so let’s print out the result of that call and see what happens. There you go, now we’re actually printing out the template from Pattern Lab, but you can see that this is pulling the default data from Pattern Lab. How do we actually make it use our own data? Just like the content types that I was showing you, we can send it a map of how our data should look. There are two ways we can do this. This render method takes more than just the template name. One of the extra arguments is an array, and I’m gonna show you that first; this array is the actual map that the component is expecting. If we look to see what the component is expecting, let me jump back over to our Pattern Lab and go into the data file for the two column stack. We can see that it’s expecting an array with two column stack as the key, whose value is an array of objects, each with card as the key and then title and nutgraf inside.

What I’m going to do is just pull some of that data out of there. Let’s go back to our Drupal theme and paste that in here. Obviously JSON syntax doesn’t work in PHP, so we need to convert some of this; I’m gonna make it an array so it looks more like what PHP can understand. Now what I’m going to do is copy this array, but we also need the key it’s expecting. What is it expecting as the key? We’re looking at the two column stack, so two column stack is what it’s expecting. Let’s grab that, so now this should do it; that closes out the map there. Now we have this two column stack and we’re actually passing it data, so you can pass it whatever data you want, but this is the structure Pattern Lab is looking for, so if you have your data then just go ahead and map it right here. Let’s see what this actually looks like when we save that, come back over to Drupal and hit refresh. You can see it’s printing out five cards with the same data in there.
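
Here's a sketch of that first approach in the template. The two-column-stack key and card fields mirror the pattern's JSON data file; the exact key spelling is an assumption.

<?php
// views-view-unformatted--homepage-list.tpl.php, first approach:
// hand render() an explicit data map.
print $m->render('organisms-two-column-stack', array(
  'two-column-stack' => array(
    array('card' => array('title' => 'First card', 'nutgraf' => 'Teaser text.')),
    array('card' => array('title' => 'Second card', 'nutgraf' => 'More teaser text.')),
  ),
));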

We have another way of actually mapping this data, and that is through a pre-process callback. The way that works is there’s a third parameter that you can pass to this render function. Let me delete that array that we just defined, and for the second parameter we’re just going to pass in the variables that Drupal knows about. Inside of a Drupal theme, when you’re working with your template, there’s a variable exposed called variables that you can use, and you can output whatever variables you want. What we’re going to do is pass that to a callback that we define here, and we’ll take it in as ‘v’ just so that we don’t confuse it with variables. What’s happening behind the scenes is this variables array is being passed into this callback, and now you can run PHP logic to actually pre-process your variables. Just to show you what variables looks like, let’s dump it so you can see what’s actually being passed into this callback. If we hit save there and refresh this, you can see here is what the view is actually outputting, and what we really want within this data is the results that the view is outputting. Here’s all the data that the view is outputting, so this is what we want to map to what Pattern Lab is expecting.

If we come back into here, what we’ll do is return an array that Pattern Lab is expecting. That array expects to have this key, so we’ll copy that, and then that key has a bunch of cards associated with it. What we’re gonna do is do the pre-processing in a separate function just to keep this clean, so I’ll define this as process card data. This function is going to need to take the data that we’re passing it, so we pass in ‘v’ here and take ‘v’ here, but really what we need from ‘v’ is just the results from the view, so maybe we just pass in v’s view results. Then down here we can call the parameter results, since it’s an array of results.

Now, essentially what we need to do is create this data format again with this function inside of Drupal. What we’ll do is loop through the results as result and create what we need. Actually, we need to declare that data array up here and then return it down here. Then we need to specify each element of the data array. Each element is going to be equal to another array, and in that array … let’s see what that needs to look like: it needs to have a card key with another array with title, nutgraf and href. We’re just gonna leave href off for now, since we don’t have the links yet. So let’s go back here and start to set up what this looks like. It needs title, which is gonna be something, nutgraf and href. We also need this to be in an array under card, so let’s actually create the card here and pull this inside of it. Now this is starting to look a lot like the data that Pattern Lab is expecting.

Now let’s actually map the data from views. If we come back over here and look at what we have, we have each one of these objects: we have the node title and we have the field body. Essentially we just need to write that into the template, node title, and then for the body we have field body zero rendered markup. For the href, we do have the nid, so I guess we can pass that here. All right, that should be all that we need, so if we save this and refresh … that didn’t work. Let’s take a look to see what we did wrong here. I’m going to output our variables again and make sure that we map this properly. So we have ‘v’, view, result: ‘v’ is an array, not an object, so this will probably fix it. View is an object and result is an array of objects; we’re looping through the array and then reading each object. Let’s save this and see if that actually works, and get rid of this debug output.

There we go, there is our views data within our template from Pattern Lab. The idea is that the Drupal theme developer just needs to specify one file that renders the template from Pattern Lab while passing it the variables that Pattern Lab is expecting or passing a callback function that our Mustache connector can then call and map out the variables that Pattern Lab is expecting.
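
Putting the callback approach together, the finished template might look roughly like this. This is a sketch assuming the render($template, $variables, $callback) signature described above; the field names come from the devel output we inspected.

<?php
// views-view-unformatted--homepage-list.tpl.php, callback approach.
// $variables is the template's usual variables array; the callback
// receives it and returns the map the pattern expects.
print $m->render('organisms-two-column-stack', $variables, function ($v) {
  return array(
    'two-column-stack' => process_card_data($v['view']->result),
  );
});

if (!function_exists('process_card_data')) {
  // Turn views result rows into Pattern Lab card data.
  function process_card_data($results) {
    $data = array();
    foreach ($results as $result) {
      $data[] = array(
        'card' => array(
          'title' => $result->node_title,
          'nutgraf' => $result->field_body[0]['rendered']['#markup'],
          'href' => 'node/' . $result->nid,
        ),
      );
    }
    return $data;
  }
}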

So that’s the direction we’re going.  Hopefully that gives you the idea of what we’re trying to do. We don’t have any HTML, CSS, or JavaScript in our theme.  Any changes needed in those files get pushed upstream into the living style guide, and we pull the changes back down into Drupal.

There’s a lot going on under the hood in this theme to make all of this work. We’re thinking that this theme will end up as a base theme that you can extend, to take advantage of all this functionality. However, that is still to be determined and we may change our minds on that approach. If you have an opinion on that, please let us know in the comments.

There are definitely some trade-offs to using this living style guide approach, and those trade-offs exist regardless of the Drupal version you use. I plan to release a future video to talk about the benefits and disadvantages of the living style guide approach with Drupal. This approach definitely does not fit every Drupal theme. More about that later.

Also, we’re going to be releasing more videos as we iterate on this theme, so if you’re interested in following along with us, make sure you subscribe to our channel. Thanks for watching!

Tom Friedhof, Senior Software Engineer

Tom has been designing and developing for the web since 2002 and got involved with Drupal in 2006. Previously he worked as a systems administrator for a large mortgage bank, managing servers and workstations, which is where he discovered his passion for automation and scripting. In his free time he enjoys camping with his wife and three kids.


Mar 23 2017
Mar 23

Preface

We recently had the opportunity to work on a Symfony app for one of our Higher Ed clients, for whom we had previously built a Drupal distribution. Drupal 8 moving to Symfony has enabled us to expand our service offering. We have found more opportunities to build apps directly with Symfony when a CMS is not needed. This post is not about Drupal, but I’m cross-posting it to Drupal Planet to demonstrate the value of getting off the island. Enjoy!

Writing custom authentication schemes in Symfony used to be on the complicated side. But with the introduction of the Guard authentication component, it has gotten a lot easier.

One of our recent projects required us to interface with Shibboleth to authenticate users into the application. The application was originally written in Symfony 2 and was using this bundle to authenticate with Shibboleth sessions. However, since we were rewriting everything in Symfony 3, which the bundle is not compatible with, we had to look for a different solution. Fortunately for us, the built-in Guard authentication component turned out to be a sufficient solution, allowing us to drop a bundle dependency and write just one class. Really neat!

How Shibboleth authentication works

One way Shibboleth provisions a request with an authenticated entity is by setting a “remote user” environment variable that the web-server and/or residing applications can peruse.

There is obviously more to Shibboleth than that; it has to do a bunch of work to perform the actual authentication process. We defer all the heavy lifting to the mod_shib Apache2 module, and rely on the availability of the REMOTE_USER environment variable to identify the user.

That is pretty much all we really need to know; now we can start writing our custom Shibboleth authentication guard:

<?php

namespace AppBundle\Security\Http;

use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\RedirectResponse;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Routing\Generator\UrlGeneratorInterface;
use Symfony\Component\Security\Core\Authentication\Token\TokenInterface;
use Symfony\Component\Security\Core\Exception\AuthenticationException;
use Symfony\Component\Security\Core\User\UserInterface;
use Symfony\Component\Security\Core\User\UserProviderInterface;
use Symfony\Component\Security\Guard\AbstractGuardAuthenticator;
use Symfony\Component\Security\Http\Logout\LogoutSuccessHandlerInterface;

class ShibbolethAuthenticator extends AbstractGuardAuthenticator implements LogoutSuccessHandlerInterface
{
    /**
     * @var string
     */
    private $idpUrl;

    /**
     * @var string|null
     */
    private $remoteUserVar;

    /**
     * @var UrlGeneratorInterface
     */
    private $urlGenerator;

    public function __construct(UrlGeneratorInterface $urlGenerator, $idpUrl, $remoteUserVar = null)
    {
        $this->idpUrl = $idpUrl;
        $this->remoteUserVar = $remoteUserVar ?: 'HTTP_EPPN';
        $this->urlGenerator = $urlGenerator;
    }

    protected function getRedirectUrl()
    {
        return $this->urlGenerator->generate('shib_login');
    }

    /**
     * @param Request $request The request that resulted in an AuthenticationException
     * @param AuthenticationException $authException The exception that started the authentication process
     *
     * @return Response
     */
    public function start(Request $request, AuthenticationException $authException = null)
    {
        $redirectTo = $this->getRedirectUrl();
        if (in_array('application/json', $request->getAcceptableContentTypes())) {
            return new JsonResponse(array(
                'status' => 'error',
                'message' => 'You are not authenticated.',
                'redirect' => $redirectTo,
            ), Response::HTTP_FORBIDDEN);
        } else {
            return new RedirectResponse($redirectTo);
        }
    }

    /**
     * @param Request $request
     *
     * @return mixed|null
     */
    public function getCredentials(Request $request)
    {
        if (!$request->server->has($this->remoteUserVar)) {
            return;
        }

        $id = $request->server->get($this->remoteUserVar);

        if ($id) {
            return array('eppn' => $id);
        } else {
            return null;
        }
    }

    /**
     *
     * @param mixed $credentials
     * @param UserProviderInterface $userProvider
     *
     * @throws AuthenticationException
     *
     * @return UserInterface|null
     */
    public function getUser($credentials, UserProviderInterface $userProvider)
    {
        return $userProvider->loadUserByUsername($credentials['eppn']);
    }

    /**
     * @param mixed $credentials
     * @param UserInterface $user
     *
     * @return bool
     *
     * @throws AuthenticationException
     */
    public function checkCredentials($credentials, UserInterface $user)
    {
        return true;
    }

    /**
     * @param Request $request
     * @param AuthenticationException $exception
     *
     * @return Response|null
     */
    public function onAuthenticationFailure(Request $request, AuthenticationException $exception)
    {
        $redirectTo = $this->getRedirectUrl();
        if (in_array('application/json', $request->getAcceptableContentTypes())) {
            return new JsonResponse(array(
                'status' => 'error',
                'message' => 'Authentication failed.',
                'redirect' => $redirectTo,
            ), Response::HTTP_FORBIDDEN);
        } else {
            return new RedirectResponse($redirectTo);
        }
    }

    /**
     * @param Request $request
     * @param TokenInterface $token
     * @param string $providerKey The provider (i.e. firewall) key
     *
     * @return Response|null
     */
    public function onAuthenticationSuccess(Request $request, TokenInterface $token, $providerKey)
    {
        return null;
    }

    /**
     * @return bool
     */
    public function supportsRememberMe()
    {
        return false;
    }

    /**
     * @param Request $request
     *
     * @return Response never null
     */
    public function onLogoutSuccess(Request $request)
    {
        $redirectTo = $this->urlGenerator->generate('shib_logout', array(
            'return'  => $this->idpUrl . '/profile/Logout'
        ));
        return new RedirectResponse($redirectTo);
    }
}

Let’s break it down:

  1. class ShibbolethAuthenticator extends AbstractGuardAuthenticator ... - We’ll extend the built-in abstract to take care of the non-Shibboleth specific plumbing required.

  2. __construct(...) - As you would guess, we are passing in all the things we need for the authentication guard to work: the Shibboleth IdP URL, the remote user variable to check, and the URL generator service, which we need later.

  3. getRedirectUrl() - This is just a convenience method which returns the Shibboleth login URL.

  4. start(...) - This is where everything begins; this method is responsible for producing a response that helps the Security component drive the user to authenticate. Here, we are simply either 1.) redirecting the user to the Shibboleth login page; or 2.) producing a JSON response that tells consumers the request is forbidden, if the client is expecting application/json content back. In that case, the payload conveniently informs consumers where to go to start authenticating via the redirect property. Our front-end application knows how to handle this.

  5. getCredentials(...) - This method is responsible for extracting authentication credentials from the HTTP request, i.e. username and password, a JWT token in the Authorization header, etc. Here, we are interested in the remote user environment variable that mod_shib might have set for us. It is important that we check that the environment variable is actually not empty, because mod_shib will still have it set, but leave it empty, for un-authenticated sessions.

  6. getUser(...) - Here we get the credentials that getCredentials(...) returned and construct a user object from it. The user provider will also be passed into this method; whatever it is that is configured for the firewall.

  7. checkCredentials(...) - Following the getUser(...) call, the security component will call this method to actually verify whether or not the authentication attempt is valid. For example, in form logins, this is where you would typically check the supplied password against the encrypted credentials in the data-store. However, we only need to return true unconditionally, since we are trusting Shibboleth to filter out invalid credentials and only let valid sessions get through to the application. In short, we are already expecting a pre-authenticated request.

  8. onAuthenticationFailure(...) - This method is called whenever our authenticator reports invalid credentials. This shouldn’t really happen in the context of a pre-authenticated request as we 100% entrust the process to Shibboleth, but we’ll fill this in with something reasonable anyway. Here we are simply replicating what start(...) does.

  9. onAuthenticationSuccess(...) - This method gets called when the credential checks out, which is all the time. We really don’t have to do anything but let the request go through. Theoretically, this would be where we could bootstrap the token with certain roles depending on other Shibboleth headers present in the Request object, but we really don’t need to do that in our application.

  10. supportsRememberMe(...) - We don’t care about supporting “remember me” functionality, so no, thank you!

  11. onLogoutSuccess(...) - This is technically not part of the Guard authentication component, but of the logout success handler. You can see that our ShibbolethAuthenticator class also implements LogoutSuccessHandlerInterface, which allows us to register it as a listener to the logout process. This method is responsible for clearing out Shibboleth authentication data after Symfony has cleared the user token from the system. To do this, we just redirect the user to the proper Shibboleth logout URL, seeding the return parameter with the nice logout page on the Shibboleth IdP instance.

Configuring the router: shib_login and shib_logout routes

We’ll update app/config/routing.yml:

# app/config/routing.yml

shib_login:
  path: /Shibboleth.sso/Login

shib_logout:
  path: /Shibboleth.sso/Logout

You may be asking yourself why we even bother creating named routes for these when we could just as easily hard-code the values in our guard authenticator.

Great question! The answer is that we want to be able to point these to an internal login form for local development, where actually authenticating with Shibboleth has no value, if it’s even possible. This allows us to override the shib_login path to /login within routing_dev.yml so that the application will redirect us to the proper login URL in our dev environment.

We really can’t point shib_logout to /logout, though, as it will result in an infinite redirection loop. What we do is override it in routing_dev.yml to go to a very simple controller-action that replicates Shibboleth’s logout URL external behavior:

<?php

... 

  public function mockShibbolethLogoutAction(Request $request)
  {
      $return = $request->get('return');

      if (!$return) {
          return new Response("`return` query parameter is required.", Response::HTTP_BAD_REQUEST);
      }

      return $this->redirect($return);
  }
}
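
For completeness, the dev route overrides might look roughly like this. This is a sketch; the mock path and the controller reference are assumptions, not values from the actual project.

# app/config/routing_dev.yml

shib_login:
  path: /login

shib_logout:
  path: /_dev/shibboleth-logout
  defaults:
    _controller: AppBundle:Dev:mockShibbolethLogout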

Configuring the firewall

This is the last piece of the puzzle; putting all these things together.

########################################################
# 1.  We register our guard authenticator as a service #
########################################################

# app/config/services.yml

services:
  app.shibboleth_authenticator:
    class: AppBundle\Security\Http\ShibbolethAuthenticator
    arguments:
      - "@router"
      - "%shibboleth_idp_url%"
      - "%shibboleth_remote_user_var%"

...

##########################################################################
# 2. We configure Symfony to read security_dev.yml for dev environments. #
##########################################################################

# app/config/config_prod.yml

imports:
  - { resource: config.yml }
  - { resource: security.yml }

...

# app/config/config_dev.yml
imports:
  - { resource: config.yml }
  - { resource: security_dev.yml } # Dev-specific firewall configuration

...


#####################################################################################
# 3. We configure the app to use the `guard` component and our custom authenticator #
#####################################################################################

# app/config/security.yml

security:
  firewalls:
    main:
      stateless: true
      guard:
        authenticators:
          - app.shibboleth_authenticator

      logout:
          path: /logout
          success_handler: app.shibboleth_authenticator

...

#####################################################
# 4. Configure dev environments to use `form_login` #
#####################################################

# app/config/security_dev.yml
security:
  firewalls:
    main:
      stateless: false
      form_login:
        login_path: shib_login
        check_path: shib_login
        target_path_parameter: return

The star here is actually just what’s in the security.yml file, specifically the guard section; that’s how simple it is to support custom authentication via the Guard authentication component! It’s just a matter of pointing it to the service and it will hook it up for us.

The logout configuration tells the application to allocate the /logout path to initiate the logout process which will eventually call our service to clean up after ourselves.

You’ll also notice that we have a security_dev.yml file here that config_dev.yml imports. This isn’t how the Symfony 3 framework ships, but it allows us to override the firewall configuration specifically for dev environments. Here, we add the form_login authentication scheme to support logging in via an in-memory user-provider (sketched below). The authentication guard will redirect us to the in-app login form instead of the Shibboleth IdP during development.
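
For reference, a minimal sketch of what that in-memory provider could look like; the username, password, and plaintext encoder are throwaway, dev-only placeholder choices.

# app/config/security_dev.yml (fragment)

security:
  encoders:
    Symfony\Component\Security\Core\User\User: plaintext

  providers:
    in_memory:
      memory:
        users:
          devuser:
            password: devpass
            roles: 'ROLE_USER'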

Also note the stateless configuration difference between prod and dev: we want to keep the firewall in production environments stateless; this just means that our guard authenticator will get consulted on every request. This ensures that users will actually be logged out from the application whenever they are logged out of the Shibboleth IdP, i.e. when they quit the web browser, etc. However, we need to configure the firewall to be stateful during development, otherwise the form_login authentication will not work as expected.

Conclusion

I hope I was able to illustrate how versatile the Guard authentication component in Symfony is. What used to require multiple classes to be written and wired together now only requires a single class to implement, and it’s very trivial to configure. The Symfony community has really done a great job of improving the Developer Experience (DX).

Provisioning pre-authenticated requests via environment variables isn’t unique to mod_shib; other authentication modules, like mod_auth_kerb, mod_auth_gssapi, and mod_auth_cas, use it as well. It’s such a well-adopted scheme that Symfony ships with a remote_user authentication listener, starting in 2.6, that makes it very easy to integrate with them. Check it out if your needs are simpler, i.e. no custom authentication-starter/redirect logic.

Mar 06 2017
Mar 06

In the modern world of web / application development, using package managers to pull in dependencies has become a de-facto standard. In fact, if you are developing enterprise software and you aren’t leveraging package managers I would challenge you to ask yourself why not?

Drupal was very early to adopt this mindset of pulling in dependencies almost a decade ago when Dmitri Gaskin created an extension for Drush (the Drupal Shell) that added the ability to pull contributed modules by listing them in a make file (I think Dmitri was 12 years old when he wrote the Drush extension, pretty amazing!). Since that time, the make extension has been added to Drush core.
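
For contrast, a make file from that era looked something like this (a minimal sketch; project versions are illustrative):

; drupal-org.make
core = 7.x
api = 2
projects[drupal][version] = 7.53
projects[views][version] = 3.14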

Composer is the current standard for putting together PHP applications, which is why Drupal 8 has gone this direction, so why not use Composer to put together Drupal 7 applications?

First off, I want to clarify what I’m not talking about in this post. I am not advocating that we ditch Drush altogether, I still find value in other aspects of what Drush can do. I am specifically referring to the Make aspect of Drush. Is Drush Make still necessary?

This post is also not about Drupal Console vs Drush, both CLI tools add tremendous value to development workflow, and there isn’t 100% overlap with these tools [yet]. I think we still need both tools.

This post is about how I came to see the benefit of switching from Drush Make to Composer. I recommend making this move for both Drupal 7 and Drupal 8. This Drupal Composer workflow is not new; it has been around for a while. I just never saw a good reason to make the jump from Drush Make to this new process, until now. We have been asked in the comments on previous posts, “Why haven’t you adopted the Composer process?” I now have a good reason to change our process and fully jump on board with Composer for building Drupal 7 applications. We appreciate all the comments we get on our blog; it sharpens everyone involved!

We blogged about the Composer workflow in a previous post on our Drupal 8 build process, but the main motivation there was to be proactive about where PHP application development is going [already is]. We didn’t have a real use case for the switch to Composer until now. This post will review how I came to that revelation.

Dependency Managers

I want to make one more point before I make the case for Composer. There are many reasons to use package managers to pull in dependencies; I’ll save the details for another blog post. The main reason developers use package managers is so that your project repository does not include libraries and modules that you do not maintain. That is why tools like Composer, npm, Yarn, Bower, and Bundler exist. Hook up your RSS reader to our blog and I’ll explain in more detail in a future post, but for now I’ll leave this link to the Composer site explaining why committing dependencies in your project repo is a bad idea.

Version Numbers

The #1 reason to make the switch to Composer is the ability to manage version numbers. You may be asking “What’s the big deal, Drush Make handles version numbers as well?” Let me give you a little context on why Composer’s version handling is a better approach.

The Back Story

Recently, in a strategy meeting with one of our enterprise clients, we were discussing how to approach launching hundreds of sites on one Drupal core utilizing multiple installation profiles on top of Acquia Site Factory. Our goal was to figure out how we could sanely manage updating potentially dozens of installation profiles without explicitly defining each version number of the profile being updated. This type of Drupal architecture is also a topic for a future blog post, but for now read Acquia’s explanation of why architecting with profiles is a good idea.

As a developer, it is commonplace to lock dependencies down to a very specific version so that we know exactly what we are using / deploying. This is the reason composer.lock, Gemfile.lock, yarn.lock, and npm shrinkwrap exist. We have experienced the pain of unexpected defects in applications due to an obscure dependency changing deep in the dependency tree. Most dependency managers have a very explicit command for updating dependencies, i.e. composer update, bundle update, and yarn upgrade respectively, which in turn update the lock file.
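
In Composer terms, the distinction between installing from the lock file and re-resolving constraints looks like this:

composer install   # installs exactly what composer.lock records
composer update    # re-resolves composer.json constraints, rewrites the lock file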

A release manager does not need to know explicitly which version of a dependency (installation profile, module, etc), to release next, she simply wants the latest stable release.

Herein lies the problem with Drush Make. Practices exist that solve both the developer problem and the release-manager problem; they are missing from Drush Make but present in Composer and other application development ecosystems. It’s a common pattern that has been around for a while: it’s called semantic versioning.

Semantic Versioning

If you haven’t heard of semantic versioning (semver), go check it out now. Pretty much every package manager I have dealt with has adopted semver. Adopting semver gives the developer, or release manager, the choice of how to update dependencies within their app. There are very distinct numbers in semver for introducing breaking changes, new features, and bug fixes. How does this play into the use cases I mentioned above?

A developer has the ability to specify in the composer.json file specific versions, while leaving the version number flexible to pull in new bug fixes and feature improvements (patch and minor releases). Look at the example below:

{
  "name": "My Drupal Platform",
  ...
  "require": {
        ...
    "drupal/drupal": "~7.53.0",
    "drupal/views": "^3.14.0"
  },
  ...
}

The tilde ~ and caret ^ symbols have special meanings when specifying version numbers. The tilde matches the most recent patch release (it updates the last number only), while the caret matches any release within the same major version (it updates the minor and patch numbers, the middle and last numbers).

The above example basically says, use the views module at version 3.14, and when version 3.15 comes out, update me to that version when I run composer update.

Breaking changes should only be introduced when you update the first number, the major release. Of course, if you completely trusted every developer writing contributed code, this system alone would be enough; but not all developers follow best practice, which is why the lock file was created, along with the need to explicitly run composer update.
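
Concretely, the two constraints from the example above expand to these ranges:

~7.53.0   means   >=7.53.0 <7.54.0   (patch updates only)
^3.14.0   means   >=3.14.0 <4.0.0    (minor and patch updates)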

With this system in place, a release manager now only needs to worry about running one command to get the latest stable release of all dependencies. This command could also be hidden behind a nice UI (a CI Server) so all she has to do is push one button to grab all the latest dependencies and push to a testing site for verification.

Understanding everyone’s needs

In the past, I didn’t have a good reason to move away from Drush Make, because it did the job, and Drush is so much more than Drush Make. The strategy session we had was eye opening. Understanding the needs from an operations perspective, while not jeopardizing the integrity of the application, led us down a path to see a problem that the wider development community has already solved (not just the PHP community). It’s very rewarding to solve problems like this, especially when you come to the conclusion that someone has already solved the problem! “We just had to find the path to the water! (–A.W.)”

What do you think about using Drush Make vs Composer for pulling together a Drupal Application? Leave us your thoughts in the comments.

Oct 12 2016
Oct 12

Ron Huber: Proprietary software does a really good job of being everything to everybody. When somebody goes and pitches something for a proprietary side, they’ll say yes to everything, where in open source, we’ll say, well we do this really well, and it’ll integrate and that’s no problem, but we don’t sell it as the end-all. We sell it as it’s a solid player and this is what we can do with it and we feel comfortable because that’s just our way of approaching things. We’re a community of good people. Where on the other side, when they sell the internet of things, there are platforms out there that are, “Oh, we’re going to control the world,” and they won’t let you do anything but control the world. Everything has to go through their system. They can do everything and they get you on it and it’s too late once you realize that it only gets you about 75% of the way there. We have to sell it differently now.

Tom Friedhof: This is where the argument of the open web and silos comes in, right? And obviously, Drupal’s pushing for the open web, because with these silos - say your marketing platform is on Facebook, right? If you want to do anything above and beyond what Facebook allows you to do, you can’t. You’re stuck within their platform.

Jordan Ryan: That’s their audience.

Tom Friedhof: Exactly. And if you want to reach that audience, you have to pay for it.

Chris Stauffer: I believe that quote was, “If you didn’t pay for it, you are the product.”

Jordan Ryan: Right. That’s very true. You know, all those Facebook followers that you have can go away, and it’s a reality that I think a lot of small business owners and medium-sized enterprises don’t realize: if that page goes away, the whole platform you spent all that money on is gone.

Chris Stauffer: I liked what both of you two just said a second ago, which was kind of that that’s one of the main value adds of open-source. I had a conversation on Friday with a particular client - er, gentleman - that’s been my client three times in the past, but his new start-up company is not my client. So he went with a quote/unquote open platform, and I’m putting air quotes around it, that would get him on all of these different devices. He’s basically doing an MCM. So they have a video platform - they did all of these different things, and he was really, really stoked, and really excited when he first told me about it. And now he’s ready to go, and he’s ready to make some changes, and they told him no, and he can’t do anything. Whereas the initial platform that I was originally pitching him was, we’ll start off with responsive web, like normal, we could throw an Android or iOS layer on top. It’ll be nice and simple, and then you’ll be able to do whatever you want to.

Ron Huber: You pivot when you need to pivot.

Chris Stauffer: Right, you pivot when you need to. And now he’s sitting there going, “I can’t get them to do the things I want them to do.” And I’m like, “Dude, I would’ve told you you could do whatever you want.” You know? I got no limitations - you want it? We’ll build it.

Tom Friedhof: But there’s a cost to that as well.

Chris Stauffer: There is! There is.

Ron Huber: Because you don’t benefit from the other hundred clients that are also asking for something.

Chris Stauffer: I mean, yeah. The initial bid for him was going to end up being about a hundred large to just kind of do a very simple CMS with a simple video object and couple simple video apps layered on top of that. And it did save him a hundred grand upfront, but now he’s to a place where he wants to actually start monetizing his assets and actually start doing a lot of those different things that he’s unable to do, and he’s probably going to end up paying a hundred grand anyways. That kind of makes sense. Because he can’t take it where he needs to.

Jordan Ryan: I’m not knocking anyone, but if you’re building a business, you shouldn’t be building on someone else’s platform. You build your own.

Ron Huber: Very good point, right? Here you are, all your technology is owned by somebody else, and you’re assuming - of course - that company’s going to be around forever. You’re also assuming that you’re a large enough client that you can drive them to do what you need them to do. And, I don’t know, most of the clients are not that big. We are actually building a Java application right now, because one of our large media companies, the third-party system that they paid for went out of business.

Chris Stauffer: That sucks.

Ron Huber: Went out of business, they gave them the software, so they’re running the software, but they have to replicate it, and they have to replicate it soon, because if it fails, there’s no way they can touch it. It just goes down. So we’re busy replicating the whole thing, so that they don’t have this point of failure. And it’s great, because we’ll build it, it’ll integrate into Drupal without a problem, which is a hundred of their other websites, and it’ll be able to sit in there and integrate without any issue. But it’s on a different technology, and very few other proprietary systems will allow another technology to come in and play nice. It’s a very powerful tool, but you gotta pay the hundred thousand dollars upfront, which is killing us, because that’s a big, big investment. If you can just sign on for twenty-five hundred dollars, and - boom, here I am - that’s great.

Chris Stauffer: And that’s basically what the guy did, by the way. I don’t think it was twenty-five hundred, I think he paid ten for the whole platform.

Ron Huber: Ten, but maybe that’s the business side of it. He should go ten, cause if you’re starting a business, you put in a little bit of money - it’s basically your MVP, and you get it tested, and then you move it over.

Chris Stauffer: For the time, I think he probably actually did make the right decision, but since he was successful, now it’s the wrong decision.

Jordan Ryan: But he has to know that going into it, though.

Chris Stauffer: He did.

Jordan Ryan: That’s the context of having a conversation with those kinds of people, and when we have those conversations it’s, “Look, hosting service, build your own - this will probably give you runway for six months if you don’t want to build it right now. And then prove that it works, and then come back to this later. But prove that it works, right?”

Chris Stauffer: He did get a second round of funding.

Ron Huber: There you go. People have a hard time with just the minimum viable product concept. And it’s really not the developers, or the engineers; it’s the CFO. The CFO wants it all done, cause he wants to write one check, and be done with it. As much as we tell them, “How do you know you’re gonna need this six months from now? Cause you can’t even tell us your requirements today. We’re building you a platform for something that we’re guessing at, you’re asking us to guess at. We’re building it, it’s working, you get it running, and chances are six months, or eight months from now, you’re going to realize - hey, this piece that’s sitting over here is making me a lot of money, and I didn’t put any effort at all into it. Okay, now I’m going to pivot and go that way.” Well, you can’t do that. If you build the whole dream - the two-year plan - upfront, then you’ve built the two-year plan and 50% of it’s not being used. But sometimes, these executives - marketing, CFO, etc. - get so hung up on the overall, “We want to do it once.”

Chris Stauffer: “We want to do it once and it has to be done right the first time.”

Jordan Ryan: This is the experience that we’re supposed to have, right?

Ron Huber: Well, we do.

Jordan Ryan: That’s a sales point for CFOs.

Ron Huber: Right. But getting people to buy into that is the hard part. And I don’t know - this is where I think proprietary software really kills it, is because they already have a package. It might be twice as much, or a three-year commitment - which is just silly. They’re selling ten-thousand dollars a month or a hundred-thousand dollars a month on something that’s already built, and that’s what you pay, whether you use it or not. I’ve been looking at this, trying to figure this out for years, and I just don’t have an answer for open source. But I do think it’s still - that’s a part of our big challenge, and where we’re going with that.

Tom Friedhof: The value proposition is freedom, right? It’s the open web. It’s the freedom to do what we want to do with the applications we built. When obviously there’s the cost.

Ron Huber: And ownership, right? They own it.

Chris Stauffer: I think one of the other things, too, that I’ve started to sell as a value proposition of open source is that there’s no vendor lock-in either. I can honestly look a client in the eye and tell them, “Look, my boys follow the rules, and if, at the end of the project, you don’t like me, hire Achieve. Because I guarantee his team could pick up my code, and just be like ‘alright’.” And they would just keep running, as long as we’re following the Drupal rules and Drupal standards, there’s no vendor lock-in.

Ron Huber: Better yet, you could hire internally. Anybody you feel like. We don’t really want to do your maintenance, right? We want you to do your maintenance. We want to build your next ambitious goal, cause that’s what we’re really good at. But to do your maintenance, you should hire internally or a sub-contractor, or get India to do it.

Chris Stauffer: My point, though, was more that you’re not locked in. I have this other client that hired an engineer to develop a 100% custom system, and the engineer was, on a scale of one to ten, about a two and a half or a three. And so the whole thing is completely a worthless platform, and now my team is going in and reverse-engineering the worthless platform, to then move them out of the worthless platform, into something that’s solid. And they’re literally having to pay, I’m going to call it a $30,000 tax, if you will, on just my boys figuring out what that last guy was thinking. And with Drupal, as long as my team followed the rules, I can look a client in the eye and guarantee them that will never happen. You could, to Ron’s point, hire internal staff. You could hire a competitor, you could do anything you want to, and you’re not locked in. Whereas the client that I was referencing that wrote the proprietary software, that only worked for them - that, realistically, was a couple hundred-thousand dollar mistake. Pretty much. Because by the time she’s done, the cost of her business, the cost of my bills, the cost of the bills she had previously - all of those costs are just ridiculous, compared to if she would’ve just hired us to do it in Drupal to begin with.


Oct 01 2016
Oct 01

Jordan Ryan: Are any of you selling, in particular with Drupal, the power of integrations or integrating with other systems? Kind of like the microservices decoupled…

Chris Stauffer: To a certain extent, but it’s kind of more selling them for me on the power of Drupal as an enterprise platform, then in that initial requirements-gathering process, talking to them about what other ancillary applications and legacy systems they have to tie into. Then once you’ve identified that, then selling them on the fact that you’ve already tied it in with Salesforce five times, and that’s not really that big of a deal anymore because you kind of know how to do the Salesforce thing. I’m just using his example, but I’ve found that when I’m able to speak to the fact that you’ve already done that integration four times, then it becomes not necessarily a risk. I remember back in the old days, I would always think that every time I integrated into another system, that that was my largest point of risk on the project, was I’m going to plug into something else. Now, if someone tells me I’m going to take Drupal, and I’m going to plug that into Salesforce, I go, “Eh. I don’t know. It’s probably only ten grand, maybe; maybe 15. Depends on how complicated it is.” But my blood pressure didn’t raise at all. Whereas back in the old days, with all custom systems, since a lot of that integration wasn’t already there, and I had to do it from scratch, and there weren’t modules that did it, it was way scarier. Or integrating with Facebook Connect. The first time I did that, it scared the $#@! out of me.

Ron Huber: Especially a week later, when they change the API.

Chris Stauffer: When they changed the API and it blew up in my face before it was going to go live. You mean that time?

Ron Huber: And it was your fault, of course!

Chris Stauffer: Of course! That did literally happen. We did a project for Unilever, and we were launching a Facebook app, and it blew up like a week before the demo.

Ron Huber: That’s right.

Chris Stauffer: Yeah, it was horrible. But nowadays, a lot of those prewritten integrations, I’ve already done them so many times, and they’re so mature, that it’s like, “Oh, Facebook integration. Yeah, whatever, dude. Sure, no problem.”

Jordan Ryan: One click. Not quite.

Chris Stauffer: Well, I don’t know if I’d go that far. Does that make sense? During that requirements gathering process, one of the things I tell clients a lot about Drupal, too, is when you hear me say, “There’s a module for that,” you should smile. When you hear me go, “Ooh, I don’t know if there’s a module for that,” that means you should frown because that’s what I just did to your budget. When I say there’s a module for that, I’m going to get it done, and it’s going to get done quick and cheap and efficient, and everything’s cool. But the minute I go, “I don’t know if there’s a module for that,” then that means it might take me 100 hours or 200 hours to pull off what you just asked me to do. Whereas, I might have done ten requirements that were all out of the box for the same price as that one requirement, which is going to be custom.

Ron Huber: I hate that term, out of the box. It drives me nuts.

Chris Stauffer: But you get my point.

Ron Huber: I totally get your point, and you live it, and et cetera. We consider ourselves an integration company. I feel like why people come to us is because we have so much experience in integration. There’s other shops that do Drupal. That’s not the problem. Frankly, you can get Drupal done in eastern Europe or whatever. It’s all the integration and the API, and then of course, the management side of it. It’s: what should you be integrating? What should you be doing? Those are the real questions and why I think you hire a US-based firm, as opposed to somebody that’s just building off of your requirements.

Chris Stauffer: Well, I think, Ron, for me, the difference, kind of building exactly on what you’re saying, is that the US-based developers have the ability to get thrown a curve ball and still hit it. Whereas, the overseas developers, when you throw them a curve ball, they don’t know what a curve ball is or what they’re supposed to do with it. They just know, “I was told to do this, and you gave me that, and now I’m completely lost, and I don’t know how to handle it.”

Ron Huber: I want it to work. I’ve spent hundreds of thousands of dollars…

Chris Stauffer: Wasted…

Ron Huber: In every country possible, to be able to supplement our team, and it hasn’t worked for what we do. I think it works excellent for somebody that’s got a three-year roadmap, and you got a product, and you want to … That works perfectly. But if you don’t know what your requirements are and you need it by November 1st, you got to do it here in the US, and you should probably do it pretty local.

Jordan Ryan: Or you need great communication.

Ron Huber: Well, yeah. Just because everybody should hire you, they don’t.

Chris Stauffer: Well, that’s the thing about systems integration, though, is systems integration never actually goes the way it’s supposed to.

Ron Huber: No.

Chris Stauffer: That’s what I meant by hitting a curve ball.

Ron Huber: No, you’re absolutely right.

Chris Stauffer: That systems integration, it’s always like we planned to have that hook up to that, and then you find out that, oh $#@!, it’s not going to work like that.

Ron Huber: It’s a lot of moving parts.

Chris Stauffer: And uh-oh. And the US guy can hang, and the other guy doesn’t.

Ron Huber: Well, it’s not their fault, either. We just have a better communication process. We’ve seen it a little bit more. I don’t think this is a US versus offshore conversation. There’s just a certain element of what it is that we do best and why we’re up here talking about it. I think that as we look at where we’re headed and where Drupal is headed, I think this move into Drupal 8 was really … Not that it’s surprising, from Dries … a visionary move because of where we could go, that none of us have even … Well, not really thought out yet. He’s probably ten steps in front of us, right? He knows where we’re going to go. We all just need to catch up. This move to being Symfony-based, with a more object-oriented approach, is just going to be able to get us there. I think the face of Drupal’s going to change. I think how we maybe sell or how we pitch it or how we use it is going to change, and there’s nothing wrong with that. It’s still a powerful tool. It might not be our only tool.

Tom Friedhof: One of the scenarios that comes to my mind when we’re talking about all this integration was the example that Dries gave at DrupalCon with ordering food through Alexa and basically asking Alexa if something was on sale at Trader Joe’s, and basically having APIs talk to each other. Then when the gal at the supermarket updated the little produce and said it’s on sale, that automatically sent a text. It’s crazy how the world we live in is no longer just websites, right? Drupal is no longer just a website. It’s got to work on your phone.

Ron Huber: Well, yeah. Where else are we going to … It’s going into everything, and that’s the other big thing, is we do a lot of medical device work. Then where do you interact with the Internet of Things? Where is it that we have to go? Okay, maybe Drupal doesn’t actually show up on a device, but it is the aggregator. Then when you’re trying to get in the backend and figure out, through your portal, where your customers are, where your employees are, where the new products are coming from, that’s another powerful tool or another version of Drupal that I think is under-promoted and underused at this point.

Tom Friedhof: But it doesn’t have to be just Drupal. It can be anything. One of the things … We’ve always been a web shop, but we’re building a native app right now. We’re building it with React Native. It’s amazing how we’re tying these services that started off on the web, still using web technologies, to build native experiences on a mobile device, tying it back into a Symfony application. It’s just amazing, as developers. This is one of the powers and benefits of Drupal: it can act as that content store or as that integration piece that these different systems can interact with.

Jordan Ryan: I think there’s something to be said about how, for a while now, Drupal’s community has wanted to get off the island. I think that’s led a lot by how agencies have needed to get off the island in order to start integrating all of these different systems. One of the, I think, opportunities Drupal has is that with all these integrated systems, there really isn’t a leading technology that you could consider a decision engine. When you’re talking about a unified customer experience across many different disparate platforms, Alexa, your iOS apps, there really isn’t a central hub. You either have to build one, or you have to start thinking about your digital strategy with Drupal as that hub that’s going to make that happen. There’s some things that I think will need to happen, as far as Drupal’s infrastructure, in order to make that more accessible, talking to all of these different IoT apps. That has some performance implications if you have a lot of traffic. It’s no longer just page views. You’ve got a lot of personalized content. I think that there’s going to be opportunity there as Drupal continues to evolve.

Chris Stauffer: In my mind, I think that that evolution towards using Drupal as a central hub … I actually think that started happening a while back. In Drupal 7, we’ve been building, for probably a good two or three years, the concept of having a Drupal website, and then having all of your content available via web services that then get ingested into an iOS app. We haven’t done a React Native one yet, but we have done a couple systems where we used like Swift on the front-end and just normal Android development, where we were basically hitting a lot of those Drupal web services. I think the movement towards 8 is making those services more of the focus, but I think that that’s kind of been there for a while now. I think that corporate executives are just starting now to understand, as you put it earlier, that it actually is a central hub, that I have a content management system that’s going to manage my content, but everything else is just a display medium, whether it is a mobile app, Facebook … You know what I mean? There’s a million different ways of consuming content.

Jordan Ryan: It’s the octopus controller controlling all the knobs.

Chris Stauffer: Right. Look at the Hollywood Reporter. The Hollywood Reporter has millions of content objects, but you can look at it through normal web; you can look at it through mobile web; you can look at it through an Android device. You can look at it through anything. If there’s a device, I’m sure the Hollywood Reporter’s got a new way of looking at it that way. You kind of see what I mean? I think that-

Jordan Ryan: Oculus Rift?

Chris Stauffer: I don’t think we have that one yet.


Jul 30 2016
Jul 30

On a recent project, we had to create multiple sitemaps, one for each of the domains that we have set up on the site. We came across some problems that we had to resolve because of the nature of our pURL setup.

Goals

  • We want all of the front pages from each subdomain to be added to the sitemap, and we want to be able to set the rules for them on the XMLSitemap settings page.

  • We want to make sure that the URLs we add for the subdomain pages no longer show up in the main domain’s sitemap.

Problems

1) Only On The Primary Domain

The XML sitemap module only creates one sitemap based on the primary domain.

2) Prefixes not Distinguished

Our URLs for nodes are set up so that nodes can be prefixed with our subdomain (pURL modifier), and XMLSitemap doesn’t see our prefixes as belonging to different sites. At this point, all nodes are added to every single domain’s sitemap.

3) URL Formats

Our URLs are not in the correct format when being added to the sitemap. Our URLs should look like http://subdomain.domain.org/*; however, because we are prefixing them, they show up as http://domain.org/subdomain/*. We want our URLs to look like they are from the right subdomain and not all coming from the base domain.

Solution

We were able to create sitemaps for each of the 15 domains by adding the XMLSitemap Domain module. The XMLSitemap Domain module allows us to define a domain for each sitemap, generate the sitemap, and serve it on the correct domain.

We added xmlsitemap-dont-write-empty-element-in-xml-sitemap-file-2545050-3.patch to prevent empty elements from being added to the sitemap.
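
As a hedged sketch of applying it manually (assuming the module lives under sites/all/modules/contrib and the patch file has already been downloaded from the issue queue):

$ cd sites/all/modules/contrib/xmlsitemap
$ patch -p1 < /path/to/xmlsitemap-dont-write-empty-element-in-xml-sitemap-file-2545050-3.patch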

Then we implemented hook_xmlsitemap_element_alter() inside our own custom module (here my_module); it looks something like this:

<?php

/**
 * Implements hook_xmlsitemap_element_alter().
 */
function my_module_xmlsitemap_element_alter(array &$element, array $link, $sitemap) {
  $domain = $sitemap->uri['options']['base_url'];
  $url_parts = explode('//', $domain);
  $parts = explode('.', $url_parts[1]);
  $subdomain = array_shift($parts);

  $current_parts = explode('/', $link['loc']);
  $current_prefix = array_shift($current_parts);

  $modifiers = _get_core_modifiers();

  // Checks to see if we are on a valid subdomain from our pURL modifiers.
  if (in_array($subdomain, array_keys($modifiers))) {
    // Checks to see if we are not on the correct subdomain
    // and that we do have a prefix (fixes front page).
    if ($current_prefix != $subdomain && $current_prefix != '') {
      // Empty out the element.
      $element = array();
      return $element;
    }
    else {
      // Our subdomain matches our prefix; build our correct URL.
      $pattern = $current_prefix . '/';
      $element['loc'] = $domain . str_replace($pattern, '', $link['loc']);
    }
  }
  else {
    // We are on our main domain; remove elements whose
    // prefixes are subdomains.
    if (in_array($current_prefix, array_keys($modifiers))) {
      $element = array();
      return $element;
    }
  }
}

/**
 * Helper function for getting the subdomains from the database cache
 */
function _get_core_modifiers() {
  if (!$cache = cache_get('subdomains')) {
    $result = db_query("SELECT id, value FROM {purl} WHERE provider = 'og_purl_provider'")->fetchAllAssoc('value');
    cache_set('subdomains', $result, 'cache', time() + 86400);
    return $result;
  }
  else {
    return $cache->data;
  }
}
?>

If you have any questions or suggestions, feel free to drop a comment below!

Jul 14 2016
Jul 14

Back in December, Tom Friedhof shared how we set up our Drupal 8 development and build process utilizing Docker. It has been working well over the several months we have used it. In that time span, however, we experienced a few issues here and there, which led me to come up with an alternative process that keeps the good things we like while resolving the issues we encountered.

First, I’ll list some improvements that we’d like to see:

  1. Solve file-syncing issues

    One issue that I kept running into with our development process is that the file-syncing stops working when the host machine powers off in the interim. Even though Vagrant’s rsync-auto can still detect changes on the host file-system and initiate an rsync to propel files up into the containers via a mounted volume, the changes do not really appear within the containers themselves. I had a tough time debugging this issue, and the only resolution in sight was to do a vagrant reload – a time-consuming process, as it rebuilds every image and runs the containers again. Having to do this every morning when I turn on my laptop at work was no fun.

  2. Performant access to Drupal’s root

    Previously, we had to mount Drupal’s document root to our host machine using sshfs to explore it, but it’s not exactly performant. For example, performing a grep or ag search within file contents under Drupal 8’s core takes ~10 seconds or more. Colleagues using PhpStorm report that mounting the Drupal root onto the host system brings the IDE to a crawl while it indexes the files.

  3. Leverage Docker Compose

    Docker Compose is a great tool for managing the life-cycle of Docker containers, especially if you are running multiple applications. I felt that it comes with useful features that we were missing out on because we were just using Vagrant’s built-in Docker provider. Also, with the expectation that Docker for Mac Beta will become stable in the not-so-distant future, I’d like the switch to a native Docker development environment to be as smooth as possible. For me, introducing Docker Compose into the equation is the logical first step.

    dlite came to my attention quite recently and could fulfill the role of Docker for Mac before the latter’s stable release, but I haven’t gotten the chance to try it yet.

  4. Use Composer as the first-class package manager

    Our previous build primarily used Drush to build the Drupal 8 site and download dependencies, relegating the resolution of some Composer dependencies to Composer Manager. Drush worked really well for us in the past, and there is no pressing reason why we should abandon it. But considering that Composer Manager is deprecated for Drupal 8.x and that there is already a Composer project for Drupal sites, I thought it would be a good idea to be more proactive, rethink the way we have been doing Drupal builds, and adopt the de-facto way of putting together a PHP application. At the moment, Composer is where it’s at.

  5. Faster and more efficient builds

    Our previous build utilized a Jenkins server (also run as a container) to perform the necessary steps to deploy changes to Pantheon. Since we were mostly deploying from our local machines anyway, I always thought that running the build steps via docker run ... would probably suffice (and it doesn’t incur the overhead of a running Jenkins instance). Ultimately, we decided to explore Platform.sh as our deployment target, so basing our build on Composer became almost imperative, as Drupal 8 support (via Drush) on Platform.sh is still in beta.

With these in mind, I’d like to share our new development environment & build process.

1. File & directory structure

Here is a high-level tree-view of the file structure of the project:

/<project_root>
├── Vagrantfile
├── Makefile
├── .platform/ # Platform.sh high-level configuration
│   └── routes.yaml
├── bin/ # Executables that are used within the development workflows.
│   ├── drupal*
│   ├── drush*
│   └── sync-host*
├── docker-compose.yml # Defines the relationships and run-time properties of the Docker containers.
├── environment # File containing environment variables
├── src/ # The drupal-project root
│   ├── .gitignore
│   ├── .platform.app.yaml # Platform.sh app route configuration
│   ├── Dockerfile
│   ├── LICENSE
│   ├── bin/ # Some executables installed in the containers for proper pass-through of DrupalConsole and Drush commands.
│   │   ├── drupal-portal*
│   │   └── drush-portal*
│   ├── composer.json
│   ├── composer.lock
│   ├── custom/
│   ├── phpunit.xml.dist
│   ├── scripts/
│   ├── vendor/
│   └── web/ # the Drupal 8 root -- the directory exposed by the web-server.
└── zsh/ # Some zsh configurations
    ├── zshrc
    ├── async.zsh
    └── pure.zsh

2. The Vagrantfile

Vagrant.configure("2") do |config|

  config.vm.box = "debian/jessie64"
  config.vm.network "private_network", ip: "192.168.100.47"

  config.vm.hostname = 'activelamp.dev'

  config.vm.provider :virtualbox do |vb|
    vb.name = "activelamp.com"
    vb.memory = 2048
  end

  config.ssh.forward_agent = true

  config.vm.provision "shell",
    inline: "apt-get install -y zsh && sudo chsh -s /usr/bin/zsh vagrant",
    run: "once"

  config.vm.provision "shell",
    inline: "[ -e /home/vagrant/.zshrc ] && echo '' || ln -s /vagrant/zsh/zshrc /home/vagrant/.zshrc",
    run: "once"

  config.vm.provision "shell",
    inline: "[ -e /usr/local/share/zsh/site-functions/prompt_pure_setup ] && echo '' || ln -s /vagrant/zsh/pure.zsh /usr/local/share/zsh/site-functions/prompt_pure_setup",
    run: "once"

  config.vm.provision "shell",
    inline: "[ -e /usr/local/share/zsh/site-functions/async ] && echo '' || ln -s /vagrant/zsh/async.zsh /usr/local/share/zsh/site-functions/async",
    run: "once"

  if ENV['GITHUB_OAUTH_TOKEN']
    config.vm.provision "shell",
      inline: "sudo sed -i '/^GITHUB_OAUTH_TOKEN=/d' /etc/environment  && sudo bash -c 'echo GITHUB_OAUTH_TOKEN=#{ENV['GITHUB_OAUTH_TOKEN']} >> /etc/environment'"
  end

  # This is here to install Docker on the virtual machine and nothing else.
  config.vm.provision :docker

  config.vm.provision :docker_compose, yml: "/vagrant/docker-compose.yml", run: "always", compose_version: "1.7.1"

  config.vm.synced_folder ".", "/vagrant", type: "nfs"
  config.vm.synced_folder "./src", "/mnt/code", type: "rsync", rsync__exclude: [".git/", "src/vendor"]
end

Compare this new manifest to the old one and you will notice that we reduced Vagrant’s involvement in defining and managing Docker containers. We are simply using this virtual machine as the Docker host, using the vagrant-docker-compose plugin to provision it with the Docker Compose executable, having it (re)build the images during the provisioning stage, and (re)starting the containers on vagrant up.

We are also setting up Vagrant to sync file changes on src/ to /mnt/code/ in the VM via rsync. This directory in the VM will be mounted into the container as you’ll see later.

We are also setting up zsh as the login shell for the vagrant user for an improved experience when operating within the virtual machine.

3. The Drupal 8 Build

Let’s zoom in to where the main action happens: the Drupal 8 installation. We’ll remove Docker from our thoughts for now and focus on how the Drupal 8 build works.

The src/ directory contains all files that constitute a Drupal 8 Composer project:

<project_root>/src/
├── composer.json
├── composer.lock
├── phpunit.xml.dist
├── scripts/
│   └── composer/
├── vendor/ # Composer dependencies
│   └── ... 
└── web/ # Web root
    ├── .htaccess
    ├── autoload.php
    ├── core/ # Drupal 8 Core
    ├── drush/
    ├── index.php
    ├── modules/
    ├── profiles/
    ├── robots.txt
    ├── sites/
    │   ├── default/
    │   │   ├── .env
    │   │   ├── config/ # Configuration export files
    │   │   │   ├── system.site.yml
    │   │   │   └── ... 
    │   │   ├── default.services.yml
    │   │   ├── default.settings.php
    │   │   ├── files/
    │   │   │   └── ...
    │   │   ├── services.yml
    │   │   ├── settings.local.php.dist
    │   │   ├── settings.php
    │   │   └── settings.platform.php
    │   └── development.services.yml
    ├── themes/
    ├── update.php
    └── web.config

The first step of the build is simply executing composer install within src/. Doing so will download all dependencies defined in composer.lock and scaffold files and folders necessary for the Drupal installation to work. You can head over to the Drupal 8 Composer project repository and look through the code to see in depth how the scaffolding works.

3.1 Defining Composer dependencies from custom installation profiles & modules

Since we cannot use the Composer Manager module anymore, we need a different way of letting Composer know that we may have other dependencies defined in other areas in the project. For this let’s look at composer.json:

{
    ...
    "require": {
        ...
        "wikimedia/composer-merge-plugin": "^1.3",
        "activelamp/sync_uuids": "dev-8.x-1.x"
    },
    "extra": {
        ...
        "merge-plugin": {
          "include": [
            "web/profiles/activelamp_com/composer.json",
            "web/profiles/activelamp_com/modules/custom/*/composer.json"
          ]
        }
    }
}

We are requiring wikimedia/composer-merge-plugin and configuring it in the extra section to also read the installation profile’s composer.json and the ones that are in custom modules within it.

We can define the contrib modules that we need for our site from within the installation profile.

src/web/profiles/activelamp_com/composer.json:

{
    "name": "activelamp/activelamp-com-profile",
    "require": {
        "drupal/admin_toolbar": "^8.1",
        "drupal/ds": "^8.2",
        "drupal/page_manager": "^[email protected]",
        "drupal/panels": "~8.0",
        "drupal/pathauto": "~8.0",
        "drupal/redirect": "~8.0",
        "drupal/coffee": "~8.0"
    }
}

As we create custom modules for the site, any Composer dependencies in them will be picked up every time we run composer update. This replicates what Composer Manager allowed us to do in Drupal 7. Note, however, that unlike Composer Manager, Composer does not care if a module is enabled or not – it will always read the module’s Composer dependencies and resolve them.

3.2 Drupal configuration

3.2.1 Settings file

Let’s peek at what’s inside src/web/sites/default/settings.php:

<?php

/**
 * Load services definition file.
 */
$settings['container_yamls'][] = __DIR__ . '/services.yml';

$config_directories[CONFIG_SYNC_DIRECTORY] = __DIR__ . '/config';

/**
 * Include the Platform-specific settings file.
 *
 * n.b. The settings.platform.php file makes some changes
 *      that affect all environments that this site
 *      exists in. Always include this file, even in
 *      a local development environment, to ensure that
 *      the site settings remain consistent.
 */
include __DIR__ . "/settings.platform.php";

$update_free_access = FALSE;
$drupal_hash_salt = '<some hash>';

$local_settings = __DIR__ . '/settings.local.php';

if (file_exists($local_settings)) { 
  require_once($local_settings);
}

$settings['install_profile'] = 'activelamp_com';
$settings['hash_salt'] = $drupal_hash_salt;

Next, let’s look at settings.platform.php:

<?php

if (!getenv('PLATFORM_ENVIRONMENT')) {
    return;
}

$relationships = json_decode(base64_decode(getenv('PLATFORM_RELATIONSHIPS')), true);

$database_creds = $relationships['database'][0];

$databases['default']['default'] = [
    'database' => $database_creds['path'],
    'username' => $database_creds['username'],
    'password' => $database_creds['password'],
    'host' => $database_creds['host'],
    'port' => $database_creds['port'],
    'driver' => 'mysql',
    'prefix' => '',
    'collation' => 'utf8mb4_general_ci',
];

We return early from this file if PLATFORM_ENVIRONMENT is not set. Otherwise, we’ll parse the PLATFORM_RELATIONSHIPS data and extract the database credentials from it.

For our development environment however, we’ll do something different in settings.local.php.dist:

<?php

$databases['default']['default'] = array(
    'database' => getenv('MYSQL_DATABASE'),
    'username' => getenv('MYSQL_USER'),
    'password' => getenv('MYSQL_PASSWORD'),
    'host' => getenv('DRUPAL_MYSQL_HOST'),
    'driver' => 'mysql',
    'port' => 3306,
    'prefix' => '',
);

We are pulling the database values from the environment, as this is how we’ll pass data in a Docker run-time. We also append .dist to the file-name because we don’t actually want settings.local.php in version control (otherwise, it would mess up the configuration in non-development environments). We simply copy this file into place as part of the development workflow. More on this later.

3.2.2 Staged configuration

src/web/sites/default/config/ contains YAML files that constitute the desired Drupal 8 configuration. These files will be used to seed a fresh Drupal 8 installation with configuration specific to the site. As we develop features, we will continually export the configuration entities and place them into this folder so that they are also versioned via Git.
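
As a quick sketch of that export step (assuming Drush 8 and the CONFIG_SYNC_DIRECTORY configured in settings.php as shown above), from within src/:

$ drush config-export -y   # writes the active configuration into web/sites/default/config/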

Configuration entities in Drupal 8 are assigned a universally unique ID (a.k.a. UUID). Because of this, configuration files are typically only meant to be imported into the same Drupal site (or a clone of the one) they were exported from. The proper approach is usually to get hold of a database dump of the Drupal site and use that to seed a Drupal 8 installation into which you plan to import the configuration files. To streamline the process during development, we wrote the Drush command sync-uuids, which updates the UUIDs of the active configuration entities of a non-clone site (i.e. a freshly installed Drupal instance) to match those found in the staged configuration. We packaged it as a Composer package named activelamp/sync_uuids.

The complete steps for the Drupal 8 build are the following:

$ cd src
$ composer install
$ [ -f web/sites/default/settings.local.php ] && : || cp web/sites/default/settings.local.php.dist web/sites/default/settings.local.php
$ drush site-install activelamp_com --account-pass=default-pass -y
$ drush pm-enable config sync_uuids -y
$ drush sync-uuids -y
$ drush config-import -y

These build steps will result in a fresh Drupal 8 installation based on the activelamp_com installation profile, with the proper configuration entities from web/sites/default/config. This will be similar to any site that is built from the same code-base, minus any of the actual content. Sometimes that is all you need.

Now let’s look at the development workflow utilizing Docker. Let’s start with the src/Dockerfile:

FROM php:7.0-apache

RUN apt-get update && apt-get install -y \
  vim \
  git \
  unzip \
  wget \
  curl \
  libmcrypt-dev \
  libgd2-dev \
  libgd2-xpm-dev \
  libcurl4-openssl-dev \
  mysql-client

ENV PHP_TIMEZONE America/Los_Angeles

# Install extensions
RUN docker-php-ext-install -j$(nproc) iconv mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
 && docker-php-ext-install -j$(nproc) gd pdo_mysql curl mbstring opcache

# Install Composer & add global vendor bin dir to $PATH
RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/local/bin/composer
RUN echo 'export PATH="$PATH:/root/.composer/vendor/bin"' >> $HOME/.bashrc

# Install global Composer dependencies
RUN composer global require drush/drush:8.1.2 drupal/console:0.11.3
RUN $HOME/.composer/vendor/bin/drupal init
RUN echo source '$HOME/.console/console.rc' >> $HOME/.bashrc

# Set timezone.
RUN echo "date.timezone = \"$PHP_TIMEZONE\"" > /usr/local/etc/php/conf.d/timezone.ini
ARG github_oauth_token
# Register a GitHub OAuth token if present in build args.
RUN [ -n $github_oauth_token ] && composer config -g github-oauth.github.com $github_oauth_token || echo ''

RUN [ -e /etc/apache2/sites-enabled/000-default.conf ] && sed -i -e "s/\/var\/www\/html/\/var\/www\/web/" /etc/apache2/sites-enabled/000-default.conf || sed -i -e "s/\/var\/www\/html/\/var\/www\/web/" /etc/apache2/apache2.conf

# Copy scripts used by pass-through bin/drush and bin/drupal
COPY bin/drush-portal /usr/bin/drush-portal
COPY bin/drupal-portal /usr/bin/drupal-portal

COPY . /var/www/
WORKDIR /var/www/

RUN composer --working-dir=/var/www install

The majority of the Dockerfile should be self-explanatory. The important bits are the provisioning of a GitHub OAuth token & adding of the {drupal,drush}-portal executables which are essential for the bin/{drush,drupal} pass-through scripts.

Provisioning a GitHub OAuth token

Sometimes it is necessary to configure Composer to use an OAuth token to authenticate on GitHub’s API when resolving dependencies. These tokens must remain private and should not be committed into version control. We declare that our Docker build will take github_oauth_token as a build argument. If present, it will configure Composer to authenticate using it to get around API rate limits. More on this later.

DrupalConsole and Drush pass-through scripts

Our previous build involved opening up an SSH port on the container running Drupal so that we could execute Drush commands remotely. However, we should already be able to run Drush commands inside the container without SSH access by utilizing docker run. The commands can get too lengthy, though. In fact, they get extra lengthy because we also need to execute them from within the Vagrant machine using vagrant ssh.

Here are the contents of bin/drupal and bin/drush – a pair of scripts that make it easier to execute drupal and drush commands from the host machine:

#!/usr/bin/env bash
# bin/drupal
cmd="docker-compose -f /vagrant/docker-compose.yml run --no-deps --rm server drupal-portal $@"
vagrant ssh -c "$cmd"

#!/usr/bin/env bash
# bin/drush
cmd="docker-compose -f /vagrant/docker-compose.yml run --no-deps --rm server drush-portal $@"
vagrant ssh -c "$cmd"

This allows us to run bin/drush ... for Drush commands and bin/drupal ... for DrupalConsole commands, with the arguments passed over to the executables in the container.
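
For instance (a few hedged examples, assuming the stack is up):

$ bin/drush cr                 # rebuild Drupal's caches inside the container
$ bin/drush status             # verify Drush sees the correct Drupal root
$ bin/drupal generate:module   # scaffold a new module with DrupalConsole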

Here are the contents of src/bin/drupal-portal and src/bin/drush-portal:

#!/usr/bin/env bash
# src/bin/drupal-portal
/root/.composer/vendor/bin/drupal --root=/var/www/web "$@"

#!/usr/bin/env bash
# src/bin/drush-portal
/root/.composer/vendor/bin/drush --root=/var/www/web "$@"

The above scripts are added to the container and are essential to making sure drush and drupal commands are applied to the correct directory.

In order for this to work, we actually have to remove Drush and DrupalConsole from the project’s composer.json file. This is easily done via the composer remove command.
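
For example, something along these lines (the package names match the ones the Dockerfile installs globally):

$ cd src
$ composer remove drush/drush drupal/console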

The docker-compose.yml file

To tie everything together, we have this Compose file:

version: '2'
services:
  server:
    build:
      context: ./src
      args:
        github_oauth_token: ${GITHUB_OAUTH_TOKEN}
    volumes:
      - /mnt/code:/var/www
      - composer-cache:/root/.composer/cache
    env_file: environment
    links:
      - mysql:mysql
    ports:
      - 80:80
  mysql:
    image: 'mysql:5.7.9'
    env_file: environment
    volumes:
      - database:/var/lib/mysql

volumes:
  database: {}
  composer-cache: {}

There are four things of note:

  1. github_oauth_token: ${GITHUB_OAUTH_TOKEN}

    This tells Docker Compose to use the environment variable GITHUB_OAUTH_TOKEN as the github_oauth_token build argument. This, if not empty, will effectively provision Composer with an OAuth token. If you go back to the Vagrantfile, you will see that this environment variable is set in the virtual machine (because docker-compose is run inside it) by appending it to the /etc/environment file. All that’s needed is for the environment variable to be present in the host environment (OS X) during the provisioning step.

    For example, it can be provisioned via: GITHUB_OAUTH_TOKEN=<token> vagrant provision

  2. composer-cache:/root/.composer/cache

    This tells Docker to mount a volume on /root/.composer/cache so that we can persist the contents of this directory between restarts. This ensures that composer install and composer update are fast and do not require re-downloading packages from the web every time they run. This drastically improves build speeds.

  3. database:/var/lib/mysql

    This will tell Docker to persist the MySQL data between builds as well. This is so that we don’t end up with an empty database whenever we restart the containers.

  4. env_file: environment

    This lets us define all environment variables in a single file, for example:

    MYSQL_USER=activelamp
    MYSQL_ROOT_PASSWORD=root
    MYSQL_PASSWORD=some-secret-passphrase
    MYSQL_DATABASE=activelamp
    DRUPAL_MYSQL_HOST=mysql

    We just configure each service to read environment variables from the same file, as they both need these values.

We employ rsync to sync files from the host machine to the VM since it offers by far the fastest file I/O compared to the built-in alternatives in Vagrant + VirtualBox. In the Vagrantfile we specified that we sync src/ to /mnt/code/ in the VM. Following this, we configured Docker Compose to mount this directory into the server container. This means that any file changes we make on OS X will get synced up to /mnt/code, and ultimately into /var/www in the container. However, this only covers changes that originate from the host machine.

To sync changes that originate from the container – files that were scaffolded by drupal generate:*, Composer dependencies, and Drupal 8 core itself – we’ll use the fact that our project root is also available at /vagrant as a mount in the VM. We can use rsync to sync files the other way: rsyncing from /mnt/code to /vagrant/src will bring file changes back up to the host machine.

Here is a script I wrote that does an rsync but will ask for confirmation before doing so to avoid overwriting potentially uncommitted work:

#!/usr/bin/env bash

echo "Dry-run..." 

args="$@"

diffs="$(vagrant ssh -- rsync --dry-run --itemize-changes $args | grep '^[><ch.][dfLDS]\|^\*deleted')"

if [ -z "$diffs" ]; then
  echo "Nothing to sync."
  exit 0
fi

echo "These are the differences detected during dry-run. You might lose work.  Please review before proceeding:"
echo "$diffs"
echo ""
read -p "Confirm? (y/N): " choice

case "$choice" in
  y|Y ) vagrant ssh -- rsync $args;;
  * ) echo "Cancelled.";;
esac

We are keeping this generic and not baking in the paths because we might want to sync arbitrary files to arbitrary destinations.

We can use this script like so:

$ bin/sync-host --recursive --progress --verbose --exclude=".git/" --delete-after /mnt/code/ /vagrant/src/

If the rsync would result in file changes on the host machine, the script will bring up a summary of the changes and ask whether you want to proceed.

Makefile

We are using make as our task-runner just like in the previous build. This is really useful for encapsulating operations that are common in our workflow:

# Sync files back to host machine (See Vagrantfile synced folder config)
sync-host:
        bin/sync-host --recursive --progress --verbose --delete-after --exclude='.git/' /mnt/code/ /vagrant/src/

sync:
        vagrant rsync-auto

sync-once:
        vagrant rsync

docker-rebuild:
        vagrant ssh -- docker-compose -f /vagrant/docker-compose.yml build

docker-restart:
        vagrant ssh -- docker-compose -f /vagrant/docker-compose.yml up -d

composer-install:
        vagrant ssh -- docker-compose -f /vagrant/docker-compose.yml run --no-deps --rm server composer --working-dir=/var/www install

composer-update:
        vagrant ssh -- docker-compose -f /vagrant/docker-compose.yml run --no-deps --rm server composer --working-dir=/var/www update --no-interaction 

# Use to update src/composer.lock if needed without `sync-host`
# i.e. `make lock-file > src/composer.lock`
lock-file:
        @vagrant ssh -- cat /mnt/code/composer.lock

install-drupal: composer-install
        vagrant ssh -- '[ -f /mnt/code/web/sites/default/settings.local.php ] && echo '' || cp /mnt/code/web/sites/default/settings.local.php.dist /mnt/code/web/sites/default/settings.local.php'
        -bin/drush si activelamp_com --account-pass=secret -y
        -bin/drush en config sync_uuids -y
        bin/drush sync-uuids -y
        [ $(ls -l src/web/sites/default/config/*.yml | wc -l) -gt 0  ] && bin/drush cim -y || echo "Config is empty. Skipping import..."

init: install-drupal
        yes | bin/sync-host --recursive --progress --verbose --delete-after --exclude='.git/' /mnt/code/ /vagrant/src/

platform-ssh:
        ssh <site_id>@ssh.us.platform.sh

The Drupal 8 build steps are simply translated to use bin/drush and the actual paths within the virtual machine in the install-drupal task. After cloning the repository for the first time, a developer should just be able to execute make init, sit back with a cup of coffee and wait until the task is complete.

Try it out yourself!

I wrote the docker-drupal-8 Yeoman generator so that you can easily give this a spin. Feel free to use it to look around and see it in action, or even to start off your Drupal 8 sites in the future:

$ npm install -g yo generator-docker-drupal-8
$ mkdir myd8
$ cd myd8
$ yo docker-drupal-8

Just follow through the instructions, and once complete, run vagrant up && make docker-restart && make init to get it up & running.

If you have any questions, suggestions, anything, feel free to email me at [email protected] or drop a comment below!

Jun 15 2016
Jun 15

The web development community has a long list of requirements, languages, frameworks, constructs, and tools that most companies or bosses want you to know.

This list doesn’t cover everything you need to know – such as PHP, HTML, CSS, responsive web development principles, and Drupalisms. Rather, here is a list of some of the important skills, concepts, and tools that we think you should know as a beginner Drupal developer.

1. Version Control

Every developer should have some experience with version control and versioning. Version control is an essential part of the Drupal community: versioning allows Drupal projects to be easily managed, maintained, and contributed to in a uniform manner. Version control will also most likely be used in-house to manage each client project.

2. Command Line Interface (CLI)

It isn’t necessary to be a CLI ninja; however, being able to work comfortably using a CLI is very important. One of the advantages of using a CLI is the ability to be more productive: you can quickly automate repetitive tasks, perform tasks without jumping from application to application, and use tools like Drush to perform tasks that would normally require three or more mouse clicks to accomplish.
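
A few Drush one-liners illustrate the point – a sketch, assuming Drush is installed (admin_toolbar is just an example module name):

$ drush cache-rebuild                # clear all of Drupal's caches
$ drush user-login                   # generate a one-time admin login link
$ drush pm-enable admin_toolbar -y   # enable a module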

3. Package Managers

Using package managers is important to the installation of Drupal. Whether it is installing Sass or Bootstrap via npm or Drush via Composer, it is important to know how package managers work and exactly what a command does before running it on your computer.
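
A hedged illustration of each (the package names are examples, not requirements):

$ npm install -g node-sass              # Sass compiler via npm
$ npm install bootstrap                 # Bootstrap as a project dependency
$ composer global require drush/drush   # Drush via Composer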

4. Contributing Back

An important part of the Drupal community is contributing back to projects and core. When you find an issue – something that just doesn’t seem to work correctly – or you would like to add functionality to Drupal, you should think about giving back to the community. If you find an issue on an existing project or core, check to see if there is an existing ticket for it. If there isn’t, you can create one, and if you can debug and resolve the issue, you can contribute a patch. If you don’t know exactly how to debug the issue, you can have an open conversation with other developers and maintainers to help resolve it. Contributing and interacting in the community moves Drupal forward.

5. CSS Preprocessors

Within the last couple of years, there has been a movement to CSS preprocessors, which add a programmatic feel to CSS2 and CSS3. Some are against preprocessors because they add a little more overhead to a project. But whether you use them by choice or not, you may have a client or framework that uses one, so you should be familiar with how a preprocessor works.

6. A Framework

Within the Drupal community, there is often talk of headless Drupal, and we have seen some interesting ideas come from its adopters. Headless Drupal setups usually use a framework for the front-end. It may be Angular, Angular 2, Backbone, Ember, or something different; however, most of the frameworks have two things in common: they are often written in JavaScript and almost always make use of templating.

7. Templating

It is important to know the principles of templating so that you can easily pick up and learn new frameworks. Whether it is Mustache, Twig, Jade, or the templating syntax within Angular, there are similarities between the syntaxes, and the principles can be applied to each of the languages, allowing you to quickly step from one to the next with a smaller learning curve.

8. Basic Debugging

Debugging a problem correctly can save you valuable time by getting you directly to the cause of an issue instead of looking over each line of code one by one. It is essential to know how to do basic debugging when working with Drupal. Sometimes the error messages give you enough information; other times it is necessary to fire up Devel or XDebug and step through the project to find the exact location where the code is not working correctly so that you can start to solve the problem.

9. Unit Testing / Code Testing

Testing your own code is important. When it comes to code testing you have many options, from TDD and BDD, where you write unit tests to cover your classes, to linting to make sure you are writing “good”, standardized code. Linting can be helpful for writing code that others can easily navigate, and it sets up some best practices for you to follow.
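
As a sketch of the linting side (assuming a global Composer install; paths may differ on your machine):

$ composer global require drupal/coder
$ phpcs --config-set installed_paths ~/.composer/vendor/drupal/coder/coder_sniffer
$ phpcs --standard=Drupal --extensions=php,module,inc modules/custom/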

10. A CMS

When starting with Drupal, it might be good to have familiarity with a CMS platform before jumping in. There are some advantages to knowing the constructs of other CMS platforms and being familiar with how to work within a platform. However, when working with Drupal it is important to think about the way Drupal works and not be stuck in the way other CMS platforms accomplish goals.

Conclusion

As a web developer, it is important to know many concepts and technologies. Many companies will not require you to know everything, do everything and be a jack-of-all-trades. In technology, there are so many new tools, frameworks, and languages coming out daily that it is impossible to stay on top of them all. It is far better to get a good base understanding of core web concepts that can be applied to multiple languages, tools, and technologies and then specialize.

Did I miss something you feel is important? Is there something you would like to have seen on the list? Leave a comment below.

Jun 07 2016
Jun 07

Continuing from Evan’s blog post on building pages with Paragraphs and writing custom blocks of content as fields, I will walk you through how to create a custom field-formatter in Drupal 8 by example.

A field-formatter is the last piece of code to go with the field-type and the field-widget that Evan wrote about in the previous blog post. While the field-type tells Drupal about what data comprises a field, the field-formatter is responsible for telling Drupal how to display the data stored in the field.

To recap, we defined a hashtag_search field type in the previous blog post whose instances will be composed of two items: the hashtag to search for, and the number of items to display. We want to convert this data into a list of the most recent n tweets with the specified hashtag.

A field-formatter is a Drupal plugin, just like its respective field-type and field-widget. They live in <module_path>/src/Plugin/Field/FieldFormatter/ and are namespaced appropriately: Drupal\<module_name>\Plugin\Field\FieldFormatter.

<?php

namespace Drupal\my_module\Plugin\Field\FieldFormatter;


use Drupal\Core\Field\FieldItemListInterface;
use Drupal\Core\Field\FormatterBase;
use Drupal\Core\Form\FormStateInterface;

/**
 * @FieldFormatter(
 *     id = "hashtag_formatter",
 *     label = @Translation("Hashtag Search"),
 *     field_types = {
 *      "hashtag_search"
 *     }
 * )
 */
class HashtagFormatter extends FormatterBase
{

    public function viewElements(FieldItemListInterface $items, $langcode)
    {
        return array();
    }
}

We tell Drupal important details about our new field-formatter using a @FieldFormatter class annotation. We declare its unique id; a human-readable, translatable label; and a list of field_types that it supports.

The most important method in a field-formatter is the viewElements method. Its responsibility is to return a render array based on the field data passed in as Drupal\Core\Field\FieldItemListInterface $items.

Let’s look at the code:

<?php

use Drupal\my_module\Twitter\TwitterClient;
use Drupal\my_module\Twitter\TweetFormatter;

...

    /**
     * @var TwitterClient
     */
    protected $twitter;

    /**
     * @var TweetFormatter
     */
    protected $formatter;

    ...

    public function viewElements(FieldItemListInterface $items, $langcode)
    {
        $element = [];

        // Iterate through all values from one or more fields...
        foreach ($items as $delta => $item) {

            try {

                // Use service to fetch `$item->count` no. of tweets matching the hashtag...
                $results = $this->twitter->search($item->hashtag_search, $item->count);

                // Map through each tweet and generate an HTML-rich version
                // complete with hashtags, mentions, URLs, etc. as links, using a formatter service.
                // Assign the HTML-rich version to a "formatted_text" property.
                $statuses = array_map(function ($s) {
                    $s['formatted_text'] = $this->formatter->toHtml($s['text'], $s['entities']);
                    return $s;
                }, $results['statuses']);

                // Add a header...
                if (!empty($statuses)) {
                    $element[$delta]['header'] = [
                        '#markup' => '<h4>#' . $item->hashtag_search . '</h4>'
                    ];
                }

                // Tell Drupal that each status is to be rendered
                // by the `my_module_status` theme.
                foreach ($statuses as $status) {
                    $element[$delta]['status'][] = [
                        '#theme' => 'my_module_status',
                        '#status' => $status
                    ];
                }

            } catch (\Exception $e) {
                // If an error/exception occurs, e.g. Twitter auth errors, log it and carry on
                // gracefully...
                $this->logger->error('[:exception]: %message', [
                    ':exception' => get_class($e),
                    '%message' => $e->getMessage(),
                ]);
                continue;
            }
        }

        // Include the `twitter_intents` library defined by this module which holds
        // `Drupal.behavior`s that dictate functionality of the Twitter block UI.
        $element['#attached']['library'][] = 'my_module/twitter_intents';

        return $element;
    }

    ...

See https://github.com/bezhermoso/tweet-to-html-php for how TweetFormatter works. Also, you can find the source-code for the basic Twitter HTTP client here: https://gist.github.com/bezhermoso/5a04e03cedbc77f6662c03d774f784c5

Custom theme renderer

As shown above, each individual tweet is rendered using the my_module_status theme. We’ll define it in the my_module.module file:

<?php

/**
 * Implements hook_theme().
 */
function my_module_theme($existing, $type, $theme, $path) {
  $theme = [];
  $theme['my_module_status'] = array(
    'variables' => array(
      'status' => NULL
    ),
    'template' => 'twitter-status',
    'render element' => 'element',
    'path' => $path . '/templates'
  );

  return $theme;
}

With this, we are telling Drupal to use the template file modules/my_module/templates/twitter-status.html.twig for any render array using my_module_status as its theme.

Render caching

Drupal 8 does a good job caching content: typically any field formatter is only called once and the resulting collective render arrays are cached for subsequent page loads until the Drupal cache is cleared. We don’t really want our Twitter block to be cached for that long. Since it is always great practice to keep caching enabled, we can define how caching is to be applied to our Twitter blocks. This is done by adding cache definitions in the render array before we return it:

<?php

      public function viewElements(...)
      {

        ...

        $element['#attached']['library'][] = 'my_module/twitter_intents';
        /* Cache block for 5 minutes. */
        $element['#cache']['max-age'] = 60 * 5;
        
        return $element;
      }

Here we are telling Drupal to keep the render array in cache for 5 minutes. Drupal will still cache the rest of the page’s elements how they want to be cached, but will call our field formatter again – which pulls fresh data from Twitter – if 5 minutes has passed since the last time it was called.

Jun 03 2016
Jun 03

On a recent project we had to create a section that is basically a Twitter search for a hashtag. It needed to be usable in different sections of the layout and work the same. Also, we were using the Paragraphs module and came up with a pretty nifty (we think) solution: creating a custom field that solved this particular problem for us. I will walk you through how to create a custom field/widget/formatter for Drupal 8. There are DrupalConsole commands for generating boilerplate code for this… which I will list before going through each of the methods for the components.

Field Type creation

The first thing to do is create a custom field type. In a custom module (here “my_module”), either run drupal generate:fieldtype or create a file called HashtagSearchItem.php in src/Plugin/Field/FieldType. The basic structure for the class will be:

<?php

namespace Drupal\my_module\Plugin\Field\FieldType;

use Drupal\Core\Field\FieldItemBase;
use Drupal\Core\Field\FieldStorageDefinitionInterface;
use Drupal\Core\Form\FormStateInterface;
use Drupal\Core\Language\LanguageInterface;
use Drupal\Core\TypedData\DataDefinition;

/**
 * Plugin implementation of the 'hashtag_search' field type.
 *
 * @FieldType(
 *   id = "hashtag_search",
 *   label = @Translation("Hashtag Search"),
 *   description = @Translation("A field for a hashtag search"),
 *   default_widget = "hashtag_search_widget",
 *   default_formatter = "hashtag_formatter"
 * )
 */
class HashtagSearchItem extends FieldItemBase {

/// methods here.

}

Next, implement a few methods that will tell Drupal how our field is structured. Provide default field settings for the field – in this case, the count of tweets to pull. This returns an array of default settings keyed by the setting’s name.

<?php

  /**
   * {@inheritdoc}
   */
  public static function defaultFieldSettings() {
    return [
      'count' => 6
    ] + parent::defaultFieldSettings();
  }

Then provide the field item’s properties. In this case there will be an input for the hashtag and a count. Each property is keyed by the property name and is a DataDefinition defining what the property will hold.

<?php
  /**
   * {@inheritdoc}
   */
  public static function propertyDefinitions(FieldStorageDefinitionInterface $field_definition) {
    $properties = [];
    $properties['hashtag_search'] = DataDefinition::create('string')
      ->setLabel(t('The hashtag to search for.'));
    $properties['count'] = DataDefinition::create('integer')
      ->setLabel(t('The count of twitter items to pull.'));
    return $properties;
  }

Then provide a schema for the field. The columns correspond to the properties we created above.

<?php
  /**
   * {@inheritdoc}
   */
  public static function schema(FieldStorageDefinitionInterface $field_definition) {
    return [
      'columns' => [
        'hashtag_search' => [
          'type' => 'varchar',
          'length' => 32,
        ],
        'count' => [
          'type' => 'int',
          'default' => 6
        ]
      ]
    ];
  }

Field widget creation

Next, create the widget for the field, which provides the actual form element and its settings. Either run drupal generate:fieldwidget or create a file in src/Plugin/Field/FieldWidget/ called HashtagSearchWidget.php. This is the class skeleton:

<?php

namespace Drupal\my_module\Plugin\Field\FieldWidget;

use Drupal\Core\Field\FieldItemListInterface;
use Drupal\Core\Field\WidgetBase;
use Drupal\Core\Form\FormStateInterface;
use Drupal\Core\Render\Element;

/**
 * Plugin implementation of the 'hashtag_search_widget' widget.
 *
 * @FieldWidget(
 *   id = "hashtag_search_widget",
 *   label = @Translation("Hashtag Search"),
 *   field_types = {
 *     "hashtag_search"
 *   },
 * )
 */

class HashtagSearchWidget extends WidgetBase {
  /// methods here
}

Then implement several methods. Provide a default count of tweets to pull for new fields, along with the settings form for the field item:

<?php
  /**
   * {@inheritdoc}
   */
  public static function defaultSettings() {
    return [
      'default_count' => 6,
    ] + parent::defaultSettings();
  }
  
  /**
   * {@inheritdoc}
   */
  public function settingsForm(array $form, FormStateInterface $form_state) {
    $elements = [];
    $elements['default_count'] = [
      '#type' => 'number',
      '#title' => $this->t('Default count'),
      '#default_value' => $this->getSetting('default_count'),
      '#empty_value' => '',
      '#min' => 1
    ];

    return $elements;
  }

  /**
   * {@inheritdoc}
   */
  public function settingsSummary() {
    $summary = [];
    $summary[] = t('Default count: @count', array('@count' => $this->getSetting('default_count')));

    return $summary;
  }

Then create the actual form element. Add the hashtag textfield and the count number field, and wrap them in a fieldset for a better experience:

<?php
  /**
   * {@inheritdoc}
   */
  public function formElement(FieldItemListInterface $items, $delta, array $element, array &$form, FormStateInterface $form_state) {
    $item = $items[$delta];

    $element['hashtag_search'] = [
      '#type' => 'textfield',
      '#title' => $this->t('Hashtag'),
      '#required' => FALSE,
      '#size' => 60,
      '#default_value' => (!$item->isEmpty()) ? $item->hashtag_search : NULL,
    ];

    $element['count'] = [
      '#type' => 'number',
      '#title' => $this->t('Pull count'),
      '#default_value' => $this->getSetting('default_count'),
      '#size' => 2
    ];

    $element += [
      '#type' => 'fieldset',
    ];

    return $element;
  }

In part 2, Bez will show you how to pull the tweets and create a field formatter for the display of the tweets. You can read that post here!

May 17 2016
May 17

Actually, we never left. We didn’t stop building Drupal sites, even through the long release cycle. However, we did move our company website, activelamp.com, off of Drupal about 18 months ago. Our company site had been built on Drupal since the Drupal 4.7 days. That was back when it started to become uncool to write and maintain your own home-grown CMS. I eventually found Drupal, ditched my custom CMS, and never looked back.

Our site started on Drupal 4.7 and was upgraded to Drupal 5, then Drupal 6, and then Drupal 7, each at the beginning of the respective release cycle. About 18 months ago, when our site was in dire need of an update, we evaluated Drupal 8, but with no release date in sight – and since we did not want to chase HEAD and develop on unstable APIs – we decided to go a different route and build our updated site on Jekyll, a popular static site generator. It’s more fun to tinker with new technology when working on non-billable stuff, which is what we did. We brushed up on our Ruby skills and built out a Jekyll site (which is this site you’re looking at, if you’re reading this blog post before Q3 of 2016).

We’re getting ready for another update to our company website and moving back to Drupal to do it. Jekyll was great, but it came with its disadvantages over something like Drupal. This post will highlight some of the advantages and disadvantages of working with Jekyll the past 18 months, as well as highlight why we’re excited to put activelamp.com on Drupal 8 in Q3 of this year.

Getting off the Island

If you’ve been around the Drupal community for a few years, you’ve probably heard the phrase “Get off the island”. There was, and still is, a big push to bring other technologies into the Drupal stack and rid ourselves of NIH Syndrome – Not Invented Here Syndrome.

We as a team took this movement quite literally and started doing more than just Drupal. We started to take on projects utilizing the full-stack Symfony framework, Laravel, AngularJS, Ember, Express / Node, Hapi, and Jekyll. We had successfully gotten off the island, so to speak, and it felt good. We decided to build activelamp.com on Jekyll; it has several advantages over using a CMS like Drupal.

Advantages of Static Generators

Having a statically generated site has huge advantages. Let’s review a few of them:

Performance / Scalability

You don’t need a complex hosting setup to host your site. We are currently hosting activelamp.com on S3, a simple storage service provided by Amazon Web Services. In fact, several months after we launched activelamp.com, we built a Jekyll site for Riot Games, also hosted on S3. The start-of-season League of Legends site we built handled millions of requests per day. Not bad for such a highly trafficked site. No moving parts equals a fast site.

Security

Since Jekyll sites are static HTML, there isn’t a backend to exploit. There are no scripts that actually run on the server. This means you don’t have to stay up-to-date with security updates – there are none.

Structured Content

The final output of a Jekyll site is a static HTML site, but we still have structure when creating content. On activelamp.com, we have blog content, video content, job postings, etc. We add content using Markdown, with a little bit of YAML at the top of the file, and place the files into specific folders in our document tree. Jekyll compiles the site from a set of HTML templates, YAML, and Markdown files. Our content is written into discrete files and compiled on build. Since our content has semantic structure, we are still able to compose pages together with whatever content we want; we just need to write a Ruby plugin to do so. Which leads us to the disadvantages.

Disadvantages of Not Using a CMS

We found ourselves spending lots of time writing Ruby plugins when we wanted Jekyll to act more like a CMS. A few of the disadvantages we faced with our site on Jekyll include:

Editor Experience Sucks

If you have non-technical people on your team that want to contribute, there is a high barrier to entry. We have a few non-developers on our team, and it would be so much nicer if they didn’t have to use Markdown to write blog posts for our site. The rich experience you can have with Drupal 8 and CKEditor is top notch, something we’re missing using Jekyll. Running Jekyll, our non-technical users needed to learn how to compile the site to preview changes and also had to learn how to use git to submit their blog posts for review before publishing them. Jekyll is great for developers, not for non-developers.

Have to Write Code for Everything

Not that I have anything against writing code, but I’ve been spoiled by the Drupal community. For the most part, there is likely a module for anything that you want to accomplish. If there isn’t a module in the wild, there is a huge community behind Drupal that will hopefully contribute to a new module that you put out, continually improving it (Plus, I would much rather code PHP than Ruby).

No Backend, No Interactions.

I listed having no backend as an advantage above under Security, but you really can’t do anything interactive without a backend. Our activelamp.com Jekyll site actually has a small backend written with Node.js. We have a small Express app that handles the forms and social streams, and a small Handlebars app that calls out to Google Analytics to create the most-popular-posts lists on blog category pages. Our site isn’t 100% static; that’s just not possible unless you truly want a brochure-type site where users are just consuming content, not interacting.

Excited for Drupal 8

We have been building on Drupal 8 since last December. We launched a portion of a site on Drupal 8 a couple months ago, and we’re launching a full site in a few weeks on Drupal 8. Drupal development has become exciting again.

Our new website is going to call for more interactivity with our users (premium content, client portal, partner portal, etc…). It’s in our best interest to go back to a platform where we don’t have to code every feature that we want. Another advantage of going back to Drupal 8 is that we’ll get to set up a nice content publishing workflow for ourselves again. Jekyll was fine, but we’ve built some pretty nice workflows for our clients, and it would be nice to get an easier workflow into our internal processes too, to relieve the tension for the non-developers on our team.

Most importantly, Drupal 8 is fun to develop on. The OOP approach to writing modules, and leveraging composer packages is amazing. Drupal has definitely taken a step in the right direction. In my opinion, as Drupal 8 gains traction it will become the de facto standard for Enterprise CMS needs.

May 07 2016
May 07

Drupal 8 has greatly improved the editor experience out-of-the-box. It comes shipped with CKEditor for WYSIWYG editing. However, D8 ships with a custom build of CKEditor, and it may not include the plugins that you would like to have, or that your client wants to have. I will show you how to add new plugins into the CKEditor that comes with Drupal 8.

Adding plugins with a button

First, create a bare-bones custom module called editor_experience. Files will be added here that will tell Drupal that there is a new CKEditor plugin. Find a plugin to actually install; for the first example I will use the Bootstrap Buttons CKEditor plugin. Place the downloaded plugin inside the libraries directory at the root of the Drupal installation, or use a make file to place it there. Also make sure you have the Libraries module installed: `drupal module:download libraries`.

Create a file inside of the editor_experience module inside of src/Plugin/CKEditorPlugin called BtButton.php. Add the name space and the two use statements shown below.

<?php

/**
 * @file
 * Definition of \Drupal\editor_experience\Plugin\CKEditorPlugin\BtButton.
 */

namespace Drupal\editor_experience\Plugin\CKEditorPlugin;

use Drupal\ckeditor\CKEditorPluginBase;
use Drupal\editor\Entity\Editor;

/**
 * Defines the "Bootstrap Buttons" plugin.
 *
 * @CKEditorPlugin(
 *   id = "btbutton",
 *   label = @Translation("Bootstrap Buttons")
 * )
 */
class BtButton extends CKEditorPluginBase {

  ... // Methods will go here

}

The annotation @CKEditorPlugin tells Drupal there is a plugin for CKEditor to load. For the id, use the name of the plugin as defined in the plugin.js file that came with the btbutton download. Now we add several methods to our BtButton class.

The first method will return FALSE since the plugin is not part of the internal CKEditor build.

<?php


/**
 * Implements \Drupal\ckeditor\Plugin\CKEditorPluginInterface::isInternal().
 */
public function isInternal() {
  return FALSE;
}

The next method tells Drupal where to find the plugin’s JavaScript file.

<?php


/**
 * Implements \Drupal\ckeditor\Plugin\CKEditorPluginInterface::getFile().
 */
public function getFile() {
  return libraries_get_path('btbutton') . '/plugin.js';
}

Let Drupal know where your button is. Be sure that the key is set to the name of the plugin; in this case, btbutton.

<?php


/**
 * Implements \Drupal\ckeditor\Plugin\CKEditorPluginButtonsInterface::getButtons().
 */
public function getButtons() {
  return [
    'btbutton' => [
      'label' => t('Bootstrap Buttons'),
      'image' => libraries_get_path('btbutton') . '/icons/btbutton.png',
    ],
  ];
}

Also implement getConfig() and return an empty array since this plugin has no configurations.
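
A minimal implementation might look like this:

<?php

/**
 * Implements \Drupal\ckeditor\Plugin\CKEditorPluginInterface::getConfig().
 */
public function getConfig(Editor $editor) {
  // No extra CKEditor settings are needed for this plugin.
  return [];
}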

Then go to admin/config/content/formats/manage/basic_html or whatever format you have that uses the CKEditor and pull the Bootstrap button icon down into the toolbar.

Now the button is available for use on the CKEditor!

Adding plugins without a button (CKEditor font)

Some plugins do not come with a button PNG that allows users to drag the tool into the toolbar configuration, so what then?

In order to get a plugin into Drupal that does not have a button, the implementation of getButtons() is a little different. For example, to add the Font/Font Size dropdowns, use image_alternative as shown below:

<?php


/**
 * Implements \Drupal\ckeditor\Plugin\CKEditorPluginButtonsInterface::getButtons().
 */
public function getButtons() {
  return [
    'Font' => [
      'label' => t('Font'),
      'image_alternative' => [
        '#type' => 'inline_template',
        '#template' => '<a href="#" role="button" aria-label="{{ font }}"><span class="ckeditor-button-dropdown">{{ font }}<span class="ckeditor-button-arrow"></span></span></a>',
        '#context' => [
          'font' => t('Font'),
        ],
      ],
    ],
    'FontSize' => [
      'label' => t('Font Size'),
      'image_alternative' => [
        '#type' => 'inline_template',
        '#template' => '<a href="#" role="button" aria-label="{{ font }}"><span class="ckeditor-button-dropdown">{{ font }}<span class="ckeditor-button-arrow"></span></span></a>',
        '#context' => [
          'font' => t('Font Size'),
        ],
      ],
    ],
  ];
}

Then pull in the dropdown the same way the Bootstrap button plugin was added! Have any questions? Comment below or tweet us @activelamp.

Mar 15 2016
Mar 15

The San Diego Drupal Camp was great! You can’t beat the weather in San Diego, and as usual, these regional events are great for catching up with old friends that are still plugging away with the Drupal content management system. Check out our highlight video:

Sandcamp Highlight Video

This year I had the pleasure of giving 3 sessions at the camp, and as promised, I want to share the slides and code for everything that I presented. Each deck is iframed in on this article; feel free to check out my GitHub page if you want the speaker notes too.

Encapsulation, Inheritance, and Polymorphism

In this session, I talked about why it’s a good idea to create your own Entities when the content you’re adding requires extended functionality. I showed a cool little trick we used on a recent project to have a custom PHP Class for each bundle of an Entity using the Proxy Pattern. That’s Polymorphism in Drupal 7.

Views Handlers in Drupal 7

The majority of this session was a walk-through of how to write Views handlers in Drupal 7, starting with the easier handlers to extend and moving to the more complex ones. Also, if you’ve never used PSR-4 in Drupal 7, I’ll give you a quick crash course on how to do that in D7.

Panels is so Misunderstood

I talked about why Panels, in my opinion, is one of the most misunderstood modules in the Drupal ecosystem. This session was mainly all live demo showing the power of modeling data with Fields and Nodes, and then realizing the data model and relationships via Page Manager, writing very little code. Make sure you catch our video on this!

Videos of sessions are coming

All three of my sessions were recorded. We will be releasing each session over the next couple weeks. Point your RSS readers at our blog, or go subscribe to our YouTube channel if you’re interested in seeing those videos when they’re released.

Jan 21 2015
Jan 21

It isn't just about Drupal here at ActiveLAMP -- when the right project comes along that diverges from the usual demands of content management, we get to use other cool technologies to satisfy more exotic requirements. Last year we had a project that presented us with the opportunity to broaden our arsenal beyond the Drupal toolbox. Basically, we had to build a website which handles a growing amount of vetted content coming in from the site's community and two external sources. The whole catalog is available through a rich search tool and also through a RESTful web service which our client's partners can use to search for content to display on their respective websites.

Drupal 7 -- more than just a CMS

We love Drupal and we recognize its power in managing content of varying types and complexity. We at ActiveLAMP have solved a lot of problems with it in the past, and have seen how potent it can be. We were able to map out many of the project's requirements to Drupal functionality and we grew confident that it is the right tool for the project.

We pretty much implemented the majority of the site's content-management, user-management, and access-control functionality with Drupal, from content creation and revision to display and printing. We relied heavily on built-in functionality to tie things together. Did I mention that the site's content and theme components are bilingual? Yeah, the wide array of i18n modules took care of that.

One huge reason we love Drupal is its thriving community, which drives to make it better and more powerful every day. We leveraged open-sourced modules that the community has produced over the years to satisfy project requirements that Drupal does not provide out-of-the-box.

For starters, we based our project on the Panopoly distribution of Drupal which bundles a wide selection of modules that gave us great flexibility in structuring our pages and saving us precious time in site-building and theming. We leveraged a lot of modules to solve more specialized problems. For example, we used the Workbench suite of modules to take care of the implementation of the review-publish-reject workflow that was essential to maintain the site's integrity and quality. We also used the ZURB Foundation starter theme as the foundation for our site pages.

Whatever vanilla Drupal and the community modules could not provide us, we wrote ourselves, thanks to Drupal's uber-powerful "plug-and-play" architecture, which easily allowed us to write custom modules to tell Drupal exactly what we need it to do. The amount of work that can be accomplished by the architecture's hook system is phenomenal, and it elevates Drupal from being just a content management system to a content management framework. Whatever your problem, there most probably is a Drupal module for it.

Flexible indexing and searching with Elasticsearch

A large aspect of our project is that the content we handle should be subject to a search tool available on the site. The search criteria demand not only support for full-text searches, but also filtering by date range, categorization ("taxonomies" in Drupal), and, most importantly, geo-location queries and sorting by distance (e.g., within n miles of a given location). It was readily apparent that SQL LIKE expressions or full-text search queries with the MyISAM engine for MySQL just wouldn't cut it.

We needed a full-fledged full-text search engine that also supports geo-spatial operations. And surprise! -- there is a Drupal module for that (a confession: not really a surprise). The Apache Solr Search modules readily provide the ability to index all our content straight from Drupal into Apache Solr, an open-source search platform built on top of the famous Apache Lucene engine.

Despite the comfort that the module provided, I evaluated other options which eventually led us to Elasticsearch, which we ended up using over Solr.

Elasticsearch advertises itself as:

“a powerful open source search and analytics engine that makes data easy to explore”

...and we really found this to be true. Since it is basically a wrapper around Lucene that exposes its features through a RESTful API, it is readily available to any app no matter which language it is written in. Given the wide proliferation and usage of REST APIs in web development, it puts a familiar face on a not-so-common technology. As long as you speak HTTP, the lingua franca of the Web, you are in business.

Writing/indexing documents into Elasticsearch is straightforward: represent your content as a JSON object and POST it to the appropriate endpoint. If you wish to retrieve a document on its own, simply issue a GET request with the unique ID which Elasticsearch assigned to it and gave back during indexing. Updating is also just a PUT request away. It's all RESTful and nice.
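
To make that concrete, here is a minimal sketch of indexing and then retrieving a document using PHP's curl extension, pointed at the same index used in the query further below (error handling omitted; the document fields are made up for this example):

<?php

// Index a document: POST its JSON representation to the index/type endpoint.
$doc = json_encode(array(
  'title' => 'Food drive',
  'body'  => 'Help fight hunger in your community.',
));
$ch = curl_init('http://localhost:9200/volsearch/toolkit_opportunity');
curl_setopt($ch, CURLOPT_POST, TRUE);
curl_setopt($ch, CURLOPT_POSTFIELDS, $doc);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
$response = json_decode(curl_exec($ch), TRUE);
curl_close($ch);

// Elasticsearch hands back the unique ID it assigned during indexing;
// a GET request with that ID retrieves the document on its own.
$ch = curl_init('http://localhost:9200/volsearch/toolkit_opportunity/' . $response['_id']);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
$document = json_decode(curl_exec($ch), TRUE);
curl_close($ch);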

Searches are done through API calls, too. Here is an example of a query which contains a Lucene-like text search (grouping conditions with parentheses, ANDs, and ORs), a negation filter, basic geo-location filtering, and results sorted by distance from a given location:

POST /volsearch/toolkit_opportunity/_search HTTP/1.1
Host: localhost:9200
{
  "from":0,
  "size":10,
  "query":{
    "filtered":{
      "filter":{
        "bool":{
          "must":[
            {
              "geo_distance":{
                "distance":"100mi",
                "location.coordinates":{
                  "lat":34.493311,
                  "lon":-117.30288
                }
              }
            }
          ],
          "must_not":[
            {
              "term":{
                "partner":"Mentor Up"
              }
            }
          ]
        }
      },
      "query":{
        "query_string":{
          "fields":[
            "title",
            "body"
          ],
          "query":"hunger AND (financial OR finance)",
          "use_dis_max":true
        }
      }
    }
  },
  "sort":[
    {
      "_geo_distance":{
        "location.coordinates":[
          34.493311,
          -117.30288
        ],
        "order":"asc",
        "unit":"mi",
        "distance_type":"plane"
      }
    }
  ]
}

Queries are written following Elasticsearch's own DSL (domain-specific language), which takes the form of JSON objects. The fact that queries are represented as a tree of search specifications in the form of dictionaries (or "associative arrays" in PHP parlance) makes them a lot easier to understand, traverse, and manipulate than Lucene's raw query syntax, without the need for third-party query builders. It is this syntactic sugar that helped convince us to use Elasticsearch.
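
As a small illustration of that point (a sketch only; $exclude_partner is a hypothetical variable), a query can be composed as an ordinary PHP associative array, amended conditionally, and only serialized to JSON at the last moment:

<?php

// Build the search specification as a plain associative array.
$query = array(
  'from' => 0,
  'size' => 10,
  'query' => array(
    'query_string' => array(
      'fields' => array('title', 'body'),
      'query'  => 'hunger AND (financial OR finance)',
    ),
  ),
);

// Because it is just an array, amending the query is trivial: here we
// conditionally wrap it in a filtered query to exclude a partner, with
// no string concatenation or query builder involved.
if (!empty($exclude_partner)) {
  $query['query'] = array(
    'filtered' => array(
      'query'  => $query['query'],
      'filter' => array(
        'bool' => array(
          'must_not' => array(
            array('term' => array('partner' => $exclude_partner)),
          ),
        ),
      ),
    ),
  );
}

// Ready to POST to the _search endpoint.
$json = json_encode($query);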

What makes Elasticsearch flexible is that it is, to some degree, schema-less. That made it quite quick for us to get started and get things done. We just hand it documents with no pre-defined schema and it does its job of guessing the field types, inferring from the data we provide. We can specify new text fields and filter against them on the go. If you decide to start using richer queries like geo-spatial and date-range searches, then you should explicitly declare fields as having richer types like dates, date ranges, and geo-points to tell Elasticsearch how to index the data accordingly.

To be clear, Apache Solr also exposes Lucene through a web service. However, we think Elasticsearch's API design is more modern and much easier to use. Elasticsearch also provides a suite of features that lend themselves to easier scalability. Visualizing the data is also really nifty with the use of Kibana.

The Search API

Because of the lack of built-in access control in Elasticsearch, we cannot just expose it to third parties who wish to consume our data. Anyone who can reach the Elasticsearch server will invariably have the ability to write and delete content from it. We needed a layer that firewalls our search index away from the public. Not only that, it also has to enforce our own simplified query DSL that the API consumers will use.

This is another aspect for which we looked beyond Drupal. Building web services isn't exactly within Drupal's purview, although it can be accomplished with the help of third-party modules. However, our major concern was the operational cost of involving it in the web service solution: we felt that the overhead of Drupal's bootstrap process is just too much for responding to API requests. It would be akin to swatting a fruit fly with a sledgehammer. We decided to implement all search functionality, and the search API itself, in a separate application written with Symfony.

More details on how we introduced Symfony into the equation and how we integrated the two will be the subject of my next blog post. For now we'd just like to say that we are happy with our decision to split the project's scope into smaller discrete sub-problems, because it allowed us to target each one of them with more focused solutions and expand our horizons.

Sep 06 2012
Sep 06

Occasionally a node reference or entity reference autocomplete widget will not operate as expected, specifically when it is based on a view reference display. Other widgets, such as the select box or list of checkboxes, will still function correctly.

This will happen if the view depends on a contextual filter (an argument) but is not being provided one. Normally a view will try to automatically fill in the argument, if one is not provided, based on the current page URL. If the view fails to receive an argument and is unable to infer its value from the URL path, then it will fail to provide any results.

Outlined below is a possible scenario that would cause an autocomplete node reference field to fail.

  1. You are editing an autocomplete node reference field on a taxonomy term edit page.
  2. The view reference display you have configured is set up to only show content associated with the 'current' taxonomy term.
  3. In the view, the taxonomy term argument is provided by the current context if no value is available.

Here is how we have configured the view:



Beneath 'Contextual Filters' clicking on 'Context: Has Taxonomy Term ID' will provide more information on this filter (argument received by views):


Note: In this screenshot we are running the OpenPublish beta2 profile, which has not yet been updated with the latest patches.



Notice that the view will try to fill a value into its contextual filter if none is provided. It will try to do this based on the current URL:



If your widget is set up to be a list or select box, then the view will be able to determine the current context (a taxonomy term) and provide a default value. Views can do this because the context is determined while the form is being loaded. But if you are using an autocomplete field, the JSON callback to Drupal provides no context, and the view has no idea what page it is being accessed from.

A solution can be achieved by providing a custom function that handles the autocomplete JSON callback. The function will then explicitly set the view's argument to the correct taxonomy term ID.

  1. Alter the existing form field to use a different callback path with hook_form_FORM_ID_alter()
  2. Create the path router in hook_menu()
  3. Design the callback function itself to invoke views

Alter The Existing Form Field

In this particular case, after viewing the source of our taxonomy form, we find the form tag id is 'taxonomy-form-term'. This translates into taxonomy_form_term as the FORM_ID when declaring the hook_form_FORM_ID_alter() function. The node reference field itself has been named 'field_myfield_nref' and contains up to 3 discrete values.

<?php
/**
 * Implements hook_form_FORM_ID_alter().
 */
function mymodule_form_taxonomy_form_term_alter(&$form, &$form_state, $form_id) {
  // We will get our term id argument from the form build itself.
  $term_id = $form['#term']['tid'];
  // This is the path we will create in hook_menu().
  $new_path = "mymodule/autocomplete/{$term_id}";
  // Maximum number of discrete values (deltas) that are present.
  $max_delta = $form['field_myfield_nref']['und']['#max_delta'];
  // Hijack the autocomplete callback for each of our values.
  for ($x = 0; $x <= $max_delta; $x++) {
    $form['field_myfield_nref']['und'][$x]['nid']['#autocomplete_path'] = $new_path;
  }
}
?>

The above hook is enough by itself to get the autocomplete widget to begin polling a different path as you type in characters. Make sure that you have flushed all caches, and that your browser is not caching the old JavaScript. Now Drupal needs to be configured to do something useful when the above path is requested.

Create the Path Router

<?php
/**
 * Implements hook_menu().
 */
function mymodule_menu() {
  // The index will specify which path is responded to.
  $items['mymodule/autocomplete/%'] = array(
    // This is the function to be invoked.
    'page callback' => 'mymodule_autocomplete_callback',
    // Which URL path segments, delineated by the forward slash (/), should be
    // sent to our function as arguments. Zero based.
    'page arguments' => array(2),
    'access callback' => TRUE,
    'type' => MENU_CALLBACK,
  );
  return $items;
}
?>

Now, when the autocomplete field accesses the path /mymodule/autocomplete/{integer_value}, Drupal will execute the function mymodule_autocomplete_callback. Next, the function must be configured to invoke the correct view and return something useful to the requesting JavaScript.

Design the Callback Function

<?php
/**
 * Autocomplete callback.
 *
 * Receives a field autocomplete JSON request from a taxonomy term edit page.
 * Returns a list of article nodes whose titles match what has already
 * been typed into the field so far.
 *
 * @param int $term_id
 *   Unique taxonomy term identifier. This is the variable that is represented
 *   by the % sign of the path in hook_menu().
 * @param string $string
 *   Contents of the JSON submission. This will be what the user has typed into
 *   the node reference field so far.
 *
 * @return
 *   A JSON formatted string (via drupal_json_output()) containing possible
 *   matches constructed by a view.
 */
function mymodule_autocomplete_callback($term_id, $string = '') {
  // We know the name of this field specifically because this is an edge case
  // solution. More flexible code could be put in place so as to not hard code
  // this. The field settings will store which view to use.
  $field = field_info_field('field_myfield_nref');
  // These options will be received by views. Within the result set that views
  // will provide, we want to further limit by comparing the field 'title'
  // against what was submitted by the javascript ($string). We will compare
  // by 'contains', meaning the title must contain $string. The total results
  // returned should be no more than 10.
  $options = array(
    'string'      => $string,
    'match'       => 'contains',
    'ids'         => array(),
    'limit'       => 10,
    'title_field' => 'title',
  );
  $settings = $field['settings']['view'];
  // This is the important part below. This view requires an argument for the
  // contextual filter to operate when the context can not be determined
  // automatically.
  $settings['args'] = array($term_id);
  $matches = array();
  // This is where the view is run that is responsible for creating the possible
  // selections for autocomplete. Now we can pass in the argument that would have
  // otherwise been empty.
  $references = references_potential_references_view('node', $settings['view_name'], $settings['display_name'], $settings['args'], $options);
  foreach ($references as $id => $row) {
    // Markup is fine in autocompletion results (might happen when rendered
    // through Views) but we want to remove hyperlinks.
    $suggestion = preg_replace('/<a href="([^<]*)">([^<]*)<\/a>/', '$2', $row['rendered']);
    // Add a class wrapper for a few required CSS overrides.
    $matches[$row['title'] . " [nid:$id]"] = '<div class="reference-autocomplete">' . $suggestion . '</div>';
  }
  return drupal_json_output($matches);
}
?>

Success

By crafting our own module and using the above hooks and callback functions we now have modified the autocomplete field to work as desired. While editing our taxonomy term, the node reference field that allows us to select any node that is already associated with this taxonomy term works correctly. The first two values have already been filled out, while the third is in the process of displaying possible options.



Mar 14 2012
Mar 14

Last night at the main LA Drupal meet-up I had the opportunity of talking about how we do things at ActiveLAMP, and some of the processes we follow. The LA Drupal organizers created a new session slot in the monthly meet-up called "Shop Talk". The idea is to bring someone in from a local shop in the area to share how the Drupal shop they work for runs, and share some processes that might benefit others in the community. It was a very TED talk type of vibe, as you only get 20 minutes to talk about your shop and present your ideas, but nonetheless I think I captured (at a very high level) how we approach and manage code, tasks, Drupal, and deployment at ActiveLAMP.

Unfortunately, due to technical difficulty, the recording that we took from the presentation did not save properly on the flash drive that was plugged into my computer, but hopefully the slides are still helpful by themselves.


Feb 28 2012
Feb 28

I have been following the Aegir project for some time now, almost 3 years. It’s great to see how far the project has come along, and how easy it is to get an Aegir instance up (it used to be very challenging to install). However, I haven’t really fully embraced Aegir (yet) into our current workflows at ActiveLAMP. I’m still pondering how exactly I want to use the tool. Allow me to describe our current process, before I elaborate on how I want to use Aegir for deployment.

Our current workflow.

Early in 2010 we adopted a workflow for managing and building sites using Drush Make. Several articles inspired this new workflow for us, so I won't go into detail about why it's a good idea to use Drush Make for doing your builds, rather than having one giant repository of code, 80% of which you don't touch (hack) anyway.

On each of our developers' machines we have one Drupal core (a platform) that runs any number of sites that we may be working on at the time. We may have several platforms (different core versions, i.e. 7.9, 7.10, 7.12, etc.) on our development machines. All sites that we work on are essentially multisites within the specific platform (Drupal core version) that we're working within. With every site we have an aliases.drushrc.php within its sites directory. This aliases.drushrc.php file holds metadata about which platform the site runs on locally, as well as where the production server is and where the dev server is.
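
To give you an idea, here is a simplified sketch of such an alias file (the paths and hostnames are hypothetical):

<?php

// Local copy: which platform (Drupal core) this site is symlinked into.
$aliases['local'] = array(
  'uri'  => 'mynewsite.com',
  'root' => '/Users/dev/platforms/drupal-7.12',
);

// Development server.
$aliases['dev'] = array(
  'uri'         => 'dev.mynewsite.com',
  'root'        => '/var/www/platforms/drupal-7.12',
  'remote-host' => 'dev.example.com',
  'remote-user' => 'deploy',
);

// Production server.
$aliases['prod'] = array(
  'uri'         => 'mynewsite.com',
  'root'        => '/var/www/platforms/drupal-7.12',
  'remote-host' => 'mynewsite.com',
  'remote-user' => 'deploy',
);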

I wrote a custom drush command that actually sets all this up, so we don't have to do too much work. When we start a new site we just type `drush ns mynewsite.com` (drush new-site) and that will fire off a number of tasks:

  • Check drupal.org to see what the latest version of core is
  • Check local platforms directory on developer machine to see if that core version is installed.
    • If the core version is not installed, it is downloaded, a vhost is set up in the Apache config, and the /etc/hosts file is edited for the new platform.
  • Then it checks our git repository for a repo called newsite.com.git and checks that out into a separate sites directory (not within the platform just downloaded)
    • If the site repo doesn't exist, the drush new-site command creates a new site directory with a few files copied over from the drush new-site command for defaults (site.make, .gitignore, and rebuild.sh)
    • git init, git add, git commit, and git push origin master are all executed with the initial files at the very end of the drush command.
  • The new sites directory is symlinked to the platform
  • Two drush alias files are created, a global alias file placed in ~/.drush, and an aliases.drushrc.php in the actual site directory.
  • Finally the site is installed with `drush si`

For deployment we use Capistrano invoked with a custom Drush command. This Drush command looks at parameters set in aliases.drushrc.php within the sites directory and then fires off a Capistrano command, passing in the arguments it finds there. We're only deploying the sites directory to the new environment, and Capistrano takes care of symlinking it to the correct platform on the dev or production servers.

I'm leaving out a lot of minor details, but in a nutshell that's how our current workflow works for developing sites and deploying.

Thoughts for using Aegir for deployment.

Most people that I've read about (in blog posts and the drupal.org issue queue) who are using Aegir for deployment are using install profiles for their custom code and are utilizing a recursive make file technique to handle the build of the platform as well as the profile. The process seems to work well, and it makes sense, but I'm not sure I want to handle our deploys this way for a number of reasons:

  • A platform has to be built for every deployment, with the "new" profile.
  • Really only one site is going to run on this platform, unless you put multiple profiles in the platform make file (which then leads to more issues if you have different development cycles for the number of sites you're currently working on.)
  • A whole bunch of extraneous tasks are being fired to build this platform, when only the site code needs to be deployed.
  • Most important to me, you're getting away from what Aegir does well: hosting instances of sites on a common platform.


My initial thought for handling this with Aegir is to integrate Capistrano with Aegir via a few Drush commands and then expose those Drush commands to hostmaster. I would also add a site make file field to the node add form for sites, so that when creating a new site, you can specify a site make file, just like you can when you build a platform.

The process of deployment would still be handled by Capistrano, while still utilizing Aegir for creating, managing, and migrating sites. I'm going to start developing this functionality, but I'm curious to hear others' thoughts on this, and whether there are any holes in how I think this could work.

Feb 14 2012
Feb 14

Recently, we needed a way to hijack all of the links on a page, and add some additional Google Analytics tracking. I was initially going to use some of the methods I had written about in a previous blog post. A lot of time had passed since using that method, so we decided to see if there was a module that accomplished the same task.

Lo and behold, there is now a module that does exactly that. It's an API module that allows you to write a hook that will hijack anything across an entire site, according to a CSS selector. The module is called Google Analytics Event Tracking.

It had a few issues, but they were easy to fix. For example, the hook it used was hook_api rather than the listed hook_google_analytics_et_api. I fixed the issues in a patch. The patch also includes two new tracking replacements. The original only had a replacement for "!text", which would take the text within the hijacked object, and use it as the category, action, or label of the event, depending on where it was placed in the hook. I added two additional replacements for "!href" and "!current_page". "!href" uses the href attribute of the selector, and "!current_page" grabs the current url. In addition, if href is undefined, or any of the tags are not set for a particular selector, the function returns without tracking the event.

This is a very powerful tool to add to your tracking toolset. Basically, it will allow you to take absolutely any object or group of objects, whether they be links, divs, spans, etc, and track any action on those objects, whether it be a hover, a click, mouseup, mousedown, etc. So far, we have only tested with the "click" action, but any problems with the others could be easily fixed.

The hook that sets all of this up is extremely simple to write. Here is an example of a crazy hook that hijacks every single link on an entire site, and uses the new replacements to track every internal click made from one page on the site to another:

<?php
function hook_google_analytics_et_api() {
  return array(
    array(
      'selector' => 'a',
      'category' => '!href',
      'action' => 'Internal click',
      'label' => '!current_page',
      'value' => 0,
      'noninteraction' => TRUE,
    ),
  );
}
?>


The "selector" is the css selector. In this case, we are selecting all anchor tags on the site. This is probably not very efficient, but shows the power of what can be done with this tool.

The "category" is the Category of the GA event, which will be listed in the categories section under the events, when viewing the results in GA. In this case, we are using the "!href" replacement. Since links with no href are skipped, we don't have to worry about removing them within the selector, although it may be more efficient if you have a lot of such anchor tags.

The "action" can also be a replacement, but is normally plain text. It will show up in the Action section of your GA events.

The "label" is the information you find after clicking on the category from within the Events section of GA. In this case, we are tracking the page we are currently on.

The "value" can be any numerical value. This will normally be a 0 or 1, unless you are doing something tricky with your selectors to grab additional information. You could get really creative with that one.

"noninteraction" should be set to true, unless you wish the event to be used in bounce-rate calculations.

For the hook example above, you will have the following results in GA:
You would go to the Events section of Content, then click the "Internal click" action to see the results. You would then have a list of URLs from which to choose. These URLs are the pages that came up as a result of clicking on links within the site. If you click on one of the URLs, you will see a list of all the pages that had links to the URL that were clicked.

This is just one example of what can be done with this module. You can track almost anything. I was even considering adding an admin section for defining additional replacements. You could use things like rel, src, title, etc. Anything that can be an attribute on an object could also be used as a replacement; perhaps all attributes could be added as replacement options.

Jan 25 2012
Jan 25

I've been getting inquiries in IRC and in the issue queue about a module I blogged about a few days ago. The blog post I wrote may have made it seem that the module we are working on duplicates the features module, and that we should instead work on the features module itself. I want to clarify our intentions.

The configuration module isn't a replacement for features. The vision is that they could work together. Features currently serves two purposes, 1) to group configuration together to satisfy a certain use-case, and 2) the actual export and management of configuration into a "feature" module. Features module is an awesome module when using it for what it was built for, creating KIT compliant features. The reality is most people that use features probably haven't even read the KIT specification.

Hypothetically, let's apply the features concept to a bakery. Let's say a baker has a feature called "Birthday Cake". This "birthday cake" feature has several ingredients needed to make a birthday cake, and one of those ingredients is called flour. The baker has another system that manages ingredients, called configuration. Configuration manages the flour ingredient and makes sure it always stays the same type of flour.

Flour can go in a lot of recipes, not just in birthday cakes, so this ingredient shouldn't be owned by birthday cakes, flour should be free to be in any food item without creating a dependency of needing to bake a birthday cake too. If the birthday cake feature managed the flour ingredient, the baker could never bake cookies with his "Cookies" feature, without also baking a "Birthday Cake". If flour is managed by the "Birthday Cake" feature and the baker decides to never bake birthday cakes again for his bakery, flour no longer exists, because you only have flour if you bake birthday cakes.

If the configuration system ever changes the meaning of flour from what it was, "white flour", to "spelt flour", the recipes (features) would need to be notified by the configuration system that flour is different now, and that their recipes are therefore different. This is how they work together.

Ingredients shouldn't care what recipes use it, and recipes shouldn't manage and own individual ingredients exclusively. The ingredients need to be managed by a different system. Ingredients are configurations and recipes are features. Recipes use ingredients, and features could use configuration.

This module simply takes the configuration part out of features, and provides a new workflow for managing configuration, more along the lines of what CMI is going to do in Drupal 8. The vision is that the features module could evolve to use this module once it matures over the coming months. When Drupal 8 comes out, features will need to evolve to not do configuration management any longer anyway; all of this will be built into core. Drupal 8 will probably not be released for a while, if it follows the same development cycle as D7, hence the reason this module was written: to incorporate Drupal 8 configuration management ideas in Drupal 7. Since Features, in a sense, already does configuration management, we used a lot of features module code to build this module.

As CMI progresses and more code is committed for Drupal 8, it's possible that this module may start looking like a back port of CMI and less like a duplication of features module.

Either way, features module will need to evolve eventually.

Features module is a great module when using it for what it was built for, creating KIT compliant features. It's once you start to depart from its intended use that you begin to find its limits.

Jan 23 2012
Jan 23

At ActiveLAMP, we have always been a big proponent for putting all configuration for the sites we work on into features. Much like everyone else in the Drupal community that uses features module, we figure out what configurations belong together, and create a feature to group these configurations together. Do these configurations together satisfy a certain use-case? Sure they do, for the particular site that we created it for, but for the most part, the feature really isn’t reusable on other sites unless we build another site that has the same exact requirements that this feature contains. In reality, we don’t really create reusable features that we can then use on other projects, because the projects we work on are just too different to be able to do this.

The features paradigm works great when you're working on very similar sites, or even a distribution like Open Atrium, but not so much when working on many different sites that have nothing to do with each other and have many different requirements. When you really get down to it, we really use features module to manage configuration for the specific site we're working on so that we can simplify deployment to dev, staging, and production; we don't use features to create configurations that satisfy a certain use-case that's usable on other sites. In fact, I believe many of us in the Drupal community have become so accustomed to using features for configuration management and deployment, we just glaze over why Features module was really created -- to create "a collection of Drupal entities which taken together satisfy a certain use-case" (an excerpt from the features module page).

Don’t get me wrong, I love the features module, but I have to admit that I’ve run into my share of issues using features module for configuration management and deployment. Fortunately others in the community have run into these issues too and have released modules such as features override, features plumber, and Features Tools. Not to mention entire workflows have been created around how to use Features to manage configuration for deployment that don’t even come close to creating KIT compliant features. Features module is really being misused: it’s not being used to create features, it’s being used to manage configuration and deployment.

Several weeks ago, after having multiple conversations with Alan Doucette (dragonwize), of Riot Games, this paradigm shift hit home for me. I had been using features module for configuration management and deployment for so long that I didn’t even think twice that I wasn’t using it for its intended purpose. I also realized that I had a tool belt full of workarounds to make features module kind of work for configuration management and deployment. There are a number of issues that you can run into using Features module for configuration management and deployment. In future blog posts I’ll elaborate on the specific issues we have run into.

After my discussions with Alan I was tasked to create a module just like the features module, except without the features part of it. We still think the features idea is a great idea -- to have a group of configuration to satisfy a certain use-case -- but we don’t think features module should be the tool to export and manage configuration. Our vision is that features evolves into using this configuration module to group configurations into a feature, but not actually own the configuration in a "feature" module.

Over the past few weeks I’ve been rewriting the features module without the features part of it. I’ve also taken some concepts from the configuration management initiative, specifically the concept of the "activestore" and "datastore" architecture. This module is currently in a sandbox, as we’re hoping to get the namespace of an abandoned project. This module is definitely a work in progress, but we’re already using it on a couple of production sites to work out the bugs and workflow. We want to get the community involved to hopefully push this module forward.

If you want to checkout the module, you can download it from the sandbox for now (http://drupal.org/sandbox/tomfriedhof/1412412). Once we get the namespace we’ve requested, we’ll promote it to a full project. Try the module out, file issues, and help out. Alan and I will be giving a BoF at SandCamp this Saturday, for those of you in town. Come join us, and hear about our motivations for building this module, and give us feedback.

Jan 06 2011
Jan 06

California's #1 RV Dealership is now running Drupal 7. ActiveLAMP completely redesigned and re-implemented mikethompson.com from Drupal 5. Several months ago we made the decision to leap head first into Drupal 7 development, rather than use Drupal 6 for this rebuild, and we're glad we did.

MikeThompson.com provides an easy way for users to find and inquire about RVs, schedule service appointments, apply for job openings, etc. However, the real benefit of using Drupal 7 is twofold: it is a great CMS and it is a great framework.

Drupal 7 CMS Benefits

The content on mikethompson.com is real easy to manage. Inventory Managers at the dealership can log in and easily manage new and used inventory; Content Managers can manage miscellaneous content pages, announcements, and promotions; and sales staff can manage inquiries on the site (using webform). The list goes on of what we were able to accomplish using the Drupal 7 CMS.

Drupal 7 Framework Benefits

Drupal is engineered with the developer in mind. We wrote quite a bit of custom code to make it very easy for users of the CMS to manage the site. For example, with the Field API we were able to add extra fields to taxonomy terms and file upload fields, the Drupal hook system allowed us to easily tap into the webform module to implement access control from our own modules, and the Drupal 7 contextual links allowed us to make it real easy for content managers to manage various regions of the site. On top of being able to hook into the existing functionality already in Drupal, we built a handful of custom modules to bring everything together. We were able to extend Drupal 7 to do exactly what we needed it to do.

Modules we used

Some of the contrib modules we used in building the site include Views, Webform, WYSIWYG, Administer Users by Role (we ported), Google Analytics, Secure Pages, Pathauto, Context, and of course Features. We also used very heavily a few core modules such as Taxonomy -- to categorize RV Inventory several different ways, Field -- to add custom fields to Taxonomy and Nodes, and Image -- the imagecache replacement that is now in Drupal 7 core.

Modules we are releasing to the community.

We also built a few new modules that we will be contributing back to the community for Drupal 7: Themepacket and Spritesheets.

Themepacket provides a new views display that makes it easier for themers and developers to theme views output using custom theme templates, without manually registering theme hooks in your modules and themes. It also discovers any assets found within your themepacket implementation without the need to call drupal_add_css() or drupal_add_js(). The module will also preprocess your fields for you, so you can use nice-looking variables in your templates.

Spritesheets is a module we developed to optimize CSS background images in themes and modules onto one image asset. You have the ability to configure which directories the Spritesheets module will search, and it will parse your CSS to find images that can be included on a sprite sheet. You can then configure which images should be optimized to a sprite sheet. Spritesheets can greatly reduce page load time, bandwidth, and the tax on your server by combining images.

Conclusion

Drupal 7 has launched, and it is ready for prime time! The #D7CX initiative seems to have really paid off. All the heavy hitting contrib modules we needed to build this site have a D7 version. That's a big win for the community! We were able to build a site that is easy to manage and maintain for the site managers using Drupal 7 core, contrib modules, and by extending the Drupal 7 platform to our specific needs with custom modules. Great CMS! Great Framework! Drupal 7 is phenomenal!!!

Jul 15 2010
Jul 15

Registration is now open for DrupalCamp LA 2010. Mark your calendars for Saturday & Sunday, August 7-8th, 2010. Attendance is free. The camp will be taking place in the same great venue from last year - UC Irvine in the city of Irvine, California. The campus has housing available if you wish to rent rooms to stay overnight.

Free to attend. Parking costs about $8-12 per day. Lunch is not provided but you can bring your own or buy a food pass from the cafeteria on campus (which people liked last year).

Don't miss out. Tell your friends! For more details check out the DrupalCampLA website: http://2010.drupalcampla.com/

Apr 28 2010
Apr 28

This post is a follow-up to the "Use Google Analytics Instead of the Statistics Module" post. If you want to use Google Analytics for all of your site statistics, you may need to add links that the google_analytics module can't handle. The google_analytics module is great, and handles almost everything you may need, including clicks on external links. In some cases, however, it has no way to track an external click.

I was recently presented with the problem of tracking clicks on an "Add This" dropdown. The drop-down handles everything in JavaScript, so the "links" don't even have an anchor tag. Each one is made up of a div with an onClick event attached. Fortunately, we can add an event listener to the AddThis drop-down in jQuery, then add our share click with one line of code. Here is an example:

$(function() {
  var add_button = null;
  addthis.addEventListener('addthis.menu.share', shareEventHandler);
  $('.addthis_button').mouseover(function() {
    add_button = $(this);
  });
  function shareEventHandler(evt) {
    if (evt.type == 'addthis.menu.share') {
      pageTracker._trackPageview(add_button.attr('addthis:url') + '/share/' + evt.data.service);
    }
  }
});

In the above example, an event listener is being added on 'addthis.menu.share', which is when a user clicks on one of the share links. I've also added a mouse-over event to all buttons with the '.addthis_button' class, since we have more than one share button on a page. This is where I set the button the user is currently using.

Since AddThis requires a special attribute (addthis:url), I can simply grab that attribute in the event handler, and add it to the Google Analytics tracker. In this case, we created a fake share link for the node types being shared (for example, blog/22/share or blog/22/share/facebook). When someone tries to go to the link directly, it will redirect them back to the home page, so the share is only tracked when someone actually clicks on one of the links in the drop-down.

The next step would be to use the techniques outlined in the last blog post to properly track these links in a useful format. In the case above, we've taken an external click and transformed it into an internal click. For example, a click to share the link blog/22 on Facebook would result in an internal click to the link blog/22/share/facebook. If we wanted to see how many people shared any blog on any site, we could set our filter to the following:

$filter = 'pagePath =~ /blog/[0-9]+/share';

and if we wanted to see how many people shared any blog post on Facebook, we could set our filter to the following:

$filter = 'pagePath =~ /blog/[0-9]+/share/facebook';

But what about clicks to actual external links? Let's say our site has user profiles, and users are able to enter a website URL to display on their profile. We can tell the google_analytics module to track clicks to external links, but how do we track these clicks using the techniques outlined in the last blog post? Well, the google_analytics module tracks these clicks as an "Event". So if we wanted to see how many times someone clicked on the external website link for a particular profile (using a content_profile node), we would write something like the following:

$website = db_result(db_query("SELECT field_website_value FROM {content_field_website} WHERE nid = %d", $nid));
// Always add an ending slash
if (substr($website, -1) !== '/') {
  $website .= '/';
}
$request = array(
  '#metrics' => array('uniqueEvents'),
  '#filter' => 'eventLabel == ' . $website,
  '#start_date' => $start_date,
  '#end_date' => $end_date,
  '#start_index' => 1,
  '#max_results' => 1,
);
try {
  $entries = google_analytics_api_report_data($request);
}
catch (Exception $e) {
  return $e->getMessage();
}
if (!empty($entries)) {
  foreach ($entries as $entry) {
    $metrics = $entry->getMetrics();
    $stats['more info'] = $metrics['uniqueEvents'];
  }
}

The code above will get the total number of events involving the external website. Since the google_analytics module only tracks click events for these links, there's no reason to further narrow the events to look at click events. If you wanted to track all external click events, however, you would set your filter to the following:

$filter = 'eventCategory==Outgoing links && eventAction==Click';

Mar 17 2010
Mar 17

I recently created a module that uses the Google Analytics API to capture the top ten nodes of various content types by day, week, and all time. This is a great option for any site that needs to use caching, and can’t use the Statistics module.

The module depends on the google_analytics_api module, which makes the job of capturing all the data extremely easy with the google_analytics_api_report_data() function. Here is some easy example code for building a report:

<?php
if (!$start_date) {
  $start_date = date('Y-m-d');
}
if (!$end_date) {
  $end_date = date('Y-m-d'); // H:i:s // can't include time... if before noon, include previous day
}
$dimensions = array('pagePath');
$metrics = array('visits');
$sort_metric = array('-visits');
$filter = 'pagePath =@ /blog/ || pagePath =@ /article/';
$start_index = 1;
$max_results = 20;
// Construct request array.
$request = array(
  '#dimensions' => $dimensions,
  '#metrics' => $metrics,
  '#sort_metric' => $sort_metric,
  '#filter' => $filter,
  '#start_date' => $start_date,
  '#end_date' => $end_date,
  '#start_index' => $start_index,
  '#max_results' => $max_results,
);
try {
  $entries = google_analytics_api_report_data($request);
}
catch (Exception $e) {
  return $e->getMessage();
}

By default, today’s date is used for both the start and end date, to give today’s top content. GA requires both a start and end date, so to get all-time results, you will need to set the start date to the date you first started using GA with your site.

To get the top content, sorted by most popular to least popular, the dimensions variable needs to be set to “pagePath,” with a “visits” metric (for unique page views) or a "pageviews" metric (for all views). The sort_metric variable is set to “-visits” (or "-pageviews") to sort from most visits to least (note the “-” prefix, which tells Google Analytics to sort our results in reverse order).

Since I want to grab blogs and articles only, I have set the filter to match only paths that contain “/blog/” or “/article/”. Unfortunately, this is the only way to filter your node types, so it’s a good idea to use pathauto to ensure all node types have a specific path, and write some code that prevents any other node types from having the path you are targeting.

In my case, there were also specific CCK fields I needed to use in order to filter out additional nodes. If you know that this is going to happen ahead of time, you can always inject something in the path for nodes that have the CCK fields you would like to filter out, and filter them out when retrieving the report. Otherwise, you will have to do what I did, which was to retrieve more results than are needed in the final report (note that $max_results is set to 20, even though this will eventually be a top ten list), filter out the excess with a database query, then unset the remaining excess.

One other catch with using Google Analytics in place of Statistics is that it does not work well with cron. You can get it to run through cron when running cron.php manually, but I couldn't find a way to get it to work automatically, even using various spoofing methods. The method will finish without errors, but GA will not return any data.

Cache variables can save the day here! We can modify the code above with the following:

<?php
if ($cache = cache_get('ga_stats', 'cache_content')) {
  $stats = $cache->data;
}
else {
  // GA code from above goes here.
  if (!empty($entries)) {
    foreach ($entries as $entry) {
      $metrics = $entry->getMetrics();
      $stats['visits'] = $metrics['visits'];
      // Grab any other data you want here.
    }
  }
  if (!empty($stats)) {
    cache_set('ga_stats', $stats, 'cache_content', CACHE_TEMPORARY);
  }
}

Just replace ga_stats with the name you want for your variable above. In fact, you can create variables for multiple individual pages as well, if you really want to study all the stats for specific pages. You may also want to replace cache_content with a different cache object, such as a custom one created in your own module.

This is only the beginning of what you can do with Google Analytics. If you plan your pages and URLs well, you can capture almost any data you want, even link clicks and page exits. The google_analytics_api module provides plenty of options, and the report API itself offers a plethora of options.

Jan 29 2010
Jan 29

I generally would style individual page elements like menus, blocks, views, and other content by using their own class names or IDs. That would mean if I wanted a consistent style to be applied to many of these elements, I would have to override template files just to add a consistent class attribute, or have multi-line selectors in my CSS, which would make it incredibly difficult to organize. Then one day I got smart and started to use Panels. Without getting into much detail, I’d have to say that using Panels to create my own custom layouts and plugins has changed my game as a Drupal themer.

But I’m already a proficient Drupal themer, I don’t need panels to do my layouts...

Perhaps, but you reap huge benefits from developing and designing Panel layouts and style plugins instead of block and page template files. For example:

  • An intuitive administration page - The block administration page has a single column to represent your layout; not exactly user-friendly.
  • Reusable style plugins for both panes and entire regions - sure, you can use classes to style your blocks, but that requires extra work in your code when you need to change skins quickly. Not to mention you can accidentally break things.
  • Apply the same piece of content in multiple regions - you can’t do this out of the box with Drupal. The Multiblock module helps with that, but I found it to be a little extra maintenance on my part.
  • Creating Panel layouts is actually easy - just as easy as page.tpl.php, but you get more bang for your buck.
  • More control over visibility settings - Panels goes way beyond Drupal’s simple “show all, or show none” approach (see block visibility). Panels has built-in context rules, as well as hooks to define your own contexts.
  • More control over caching - Again, Panels provides hooks to create your own caching plugins. Might not be used too often, but it’s there when you need it.
  • The list goes on and on and on...

So are you sold yet?

Overview of the Panels API for Style Plugins

Developing a Panels style plugin isn’t particularly complicated, but it does require some familiarity with the tools available. Here are the essentials:

HOOK_STYLENAME_panels_styles()

This hook returns an array keyed by style name; each style definition is an array with the following keys:

  • title - displays the name of your style to the UI
  • description - displays a description above your settings form (if applicable)
  • render pane - callback to render a pane (implemented as theme_YOURCALLBACK)
  • render panel - callback to render a panel region (implemented as theme_YOURCALLBACK)
  • settings form - callback to a function that returns a form definition
  • settings validation - same as FAPI form_validate
  • hook theme - same details as hook_theme

theme_style_render_pane( $content, $pane, $display )

This is your render pane callback (the “style_render_pane” text can be whatever you want, so long as it matches the value of the “render pane” key in HOOK_STYLENAME_panels_styles())

  • $content object - contains information about the content that particular pane is displaying (node, menu, block, custom, etc.): title, content, and so on...
  • $pane object - contains all the properties you can think of for a pane: pane ID, panel display ID, which region it’s rendered in, the style plugin attached to it, user-configured CSS ID and classes, position in region, even data about the content it contains ($content).
  • $display panels_display object - contains a HEAP of data about the panel itself: arguments passed to it, its regions, its layout style, cache settings, and even data about each of the panes it contains ($pane).

theme_style_render_panel( $display, $panel_id, $panes, $settings )

This is your render panel callback (“style_render_panel” can be called whatever you like, so long as it matches the value of the “render panel” key in HOOK_STYLENAME_panels_styles())

  • $display panels_display object - same as above
  • $panel_id int - self-explanatory...
  • $panes array - an array of all pane objects that are rendered on the display
  • $settings array - The results of the settings form for each panel

Creating a demo plugin


Click here to download the demo and test it out on your own site.

In a future post I’d like to show exactly how I’m using Panel style plugins to demonstrate some powerful tools you can create in your theme, but for now I’ll do something really simple to help get people familiar with what you can do.
I generally like to put panel layouts and plugins in my theme as opposed to in a module. So the first thing we’ll do is tell Panels that we have a custom plugin in our theme. In this case, our theme is called ‘mytheme’, so we’ll add this line to our mytheme.info file:
(you can drop this into your existing theme; just replace ‘mytheme’ with whatever your theme is called.)

mytheme.info

plugins[panels][styles] = plugins/styles


Now we will create a folder named demo containing a file named demo.inc in the plugins/styles directory of our theme. Optionally, you can create the file without putting it in a folder; I just do it to help organize any additional files I may include with my plugin.

Let’s create our initial hook to set up the plugin. We’ll be implementing HOOK_STYLENAME_panels_styles():

demo.inc

<?php
/**
 * Implementation of hook_panels_styles().
 */
function mytheme_demo_panels_styles() {
  return array(
    'demo' => array(
      'title' => t('Demo'),
      'description' => t('my custom Panel style plugin'),
      'render panel' => 'demo_style_panel',
      'render pane' => 'demo_style_pane',
      'settings form' => 'demo_form',
      'hook theme' => array(
        'demo' => array(
          'template' => 'demo_template',
          'path' => drupal_get_path('theme', 'mytheme') .'/plugins/styles/demo',
          'arguments' => array(
            'content' => NULL,
          ),
        ),
      ),
    ),
  );
}

So in our hook implementation we are stating that we created theme_demo_style_panel for the panel render and theme_demo_style_pane for the pane render, that a function named demo_form will return our settings form definition, and that we are registering a template with the theme registry so that calling theme(‘demo’, $content) will render markup using demo_template.tpl.php.

Flush your cache, then go create a panel and add some content in panes. Click on the cog (on the top left if it’s a region, or the top right if it’s a pane) and choose a new “Style”. At this point you should see your plugin listed. Woo-hoo! However, you’ll get an error if you try using it... first, we have to create our style callbacks!

demo.inc

function theme_demo_style_panel($display, $panel_id, $panes, $settings) {
  $content = new stdClass();
  $content->content = '';
  $content->settings = array();
  $content->settings['position'] = isset($settings['flare_position']) ? $settings['flare_position'] : 'left-top';
  // Collect the rendered output of every pane in this region.
  foreach ($panes as $pane_id => $data) {
    $content->content .= panels_render_pane($data, $display->content[$pane_id], $display);
  }
  return theme('demo', $content);
}

All we’re doing here is using an object to collect the output of all the panes this panel region contains, looping over them with the panels_render_pane() function. It is important that we do this; otherwise our panel will be missing content. There’s also a mention of a settings array; ignore that for now, we’ll use it later when we implement a settings form. Now you can use your region style without errors. However, there isn’t anything exciting about returning the raw content, so let’s move on to actually using a template. Create a file named demo_template.tpl.php in your demo folder.

demo_template.tpl.php

<?php drupal_add_css(drupal_get_path('theme', 'mytheme') .'/plugins/styles/demo/demo.css', 'theme'); ?>
<div class="demo">
  <div class="demo-inner">
    <div class="demo-inner-deep">
      <?php print $content->content; ?>
      <?php if ($content->settings['position']): ?>
        <div class="flare flare-<?php print $content->settings['position']; ?>"></div>
      <?php endif; ?>
    </div>
  </div>
</div>

Here I’m using drupal_add_css() to pull in the CSS file we need to style our template. This is all pretty typical stuff if you’re familiar with template files. Oh, one more thing: create a file named demo.css. Here’s what that looks like:

demo.css

.demo,
.demo-inner,
.demo-inner-deep {
  background: url(demo.png) no-repeat;
}
.demo {
  position: relative;
  width: 373px;
  background-position: 0 0;
  padding-top: 25px;
  color: #fff;
  font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
  font-size: 1.2em;
  margin: 12px;
}
.demo-inner {
  background-position: right bottom;
  padding-bottom: 25px;
}
.demo-inner-deep {
  background-position: -373px 0;
  background-repeat: repeat-y;
  padding: 15px 35px;
}
.demo .flare {
  background: url(flare.png) no-repeat;
  width: 50px;
  height: 62px;
  position: absolute;
}
.demo .flare-left-top {
  top: -10px;
  left: 0;
}
.demo .flare-right-top {
  top: -10px;
  right: 0;
}
.demo .flare-left-bottom {
  bottom: -10px;
  left: 0;
}
.demo .flare-right-bottom {
  bottom: -10px;
  right: 0;
}
/************ Demo Pane Styles *****************/
.demo-pane {
  font-family: "Zapfino", "westminster", "webdings";
  color: #bada55;
  line-height: 40px;
}

At this point you can apply the style to the panel (not the pane; we’ll do that next), test the page, and see what it looks like. If you’re getting errors, make sure you clear your cache. If your errors indicate that the template cannot be found, you can explicitly assign a ‘path’ in your theme hook:
 'path' => drupal_get_path('theme', 'mytheme') .'/plugins/styles/demo'

Okay, so let me explain what the settings array is going to do. I want to include an image on the corner of my container to give it a little flare. However, I’d like to be able to configure which corner to use each time I apply this style. So next we will implement a settings form, in case we decide we can’t commit to a particular corner.

demo.inc

function demo_form($style_settings) {
  $form = array();
  $form['flare_position'] = array(
    '#type' => 'radios',
    '#title' => 'Choose the position of your flare.',
    '#default_value' => isset($style_settings['flare_position']) ? $style_settings['flare_position'] : 'left-top',
    '#options' => array(
      'left-top' => t('Top Left'),
      'right-top' => t('Top Right'),
      'left-bottom' => t('Bottom Left'),
      'right-bottom' => t('Bottom Right'),
    ),
  );
  return $form;
}

Pretty standard Form API stuff. But here is the fun part: you can now go back to the panel configuration page and choose “style settings” from the panel cog. Once a user submits this form, we have access to all of their configuration in the panel style callback through the $settings parameter! That explains the $content->settings array in the panel style callback.

The last thing we want to play with is creating our pane callback. Just for the sake of demonstration, we’re going to keep this one really, really simple.

demo.inc

function theme_demo_style_pane($content, $pane, $display) {
  $output = '';
  $output .= '<div class="demo-pane">';
  $output .= $content->content;
  $output .= '</div>';
  return $output;
}

Go ahead and configure a pane with this style on the admin page. All this does is wrap a class around the pane content for styling purposes. Of course, a pane callback can do almost everything a panel callback is capable of (with the exception of including a settings form, as far as I can tell).

So there you have it: a typical style plugin that you can build from. I’d like to note here that I read the plugin code that comes with the Panels module as my documentation, so I want to give thanks to the author, Earl Miles, for having that in there.

In my next post on Panels I’d like to talk about the data structure of the objects that are being passed into the style callbacks and map out the important data you can mine out for your templates.

Jan 23 2010
Jan 23

If you use jQuery in development, you've almost certainly used $(document).ready(). It's a wonderful function that lets you run JavaScript code as soon as the page is ready to handle it. But one potentially frustrating aspect is that functions are executed in the same order they're passed in, and jQuery doesn't let you choose which functions run first.

Usually, this is fine, but every once in a while, you really want your function to run before another function. For example, if your page is running jCarousel or Google Maps, or any other library that changes the markup of your page, you might want to do some processing before they get a chance. With the traditional $(document).ready(), you'd be out of luck. However, a quick look at the $(document).ready() internals shows a way to get what we want.

ready: function(fn) {
  // Attach the listeners
  bindReady();
 
  // If the DOM is already ready
  if ( jQuery.isReady )
    // Execute the function immediately
    fn.call( document, jQuery );
 
  // Otherwise, remember the function for later
  else
    // Add the function to the wait list
    jQuery.readyList.push( fn );
 
  return this;
},

It's not really important to understand everything that's going on here, so don't worry about the Javascript. First, the function tells the browser to alert jQuery when it's done loading. Next, it checks if the browser is already ready, to see if you're running a function long after the page has already loaded. If the page is ready, jQuery will just run the function immediately. But if the page hasn't finished loading (the usual case for $(document).ready() logic), your function gets put on the end of a jQuery.readyList array. This is what we're after.

Before the page is finished loading, all of the functions added through $(document).ready() are put on the jQuery.readyList variable. If you want to change the order in which these functions are executed, all you have to do is alter this array.

Here's what it would look like to put your function at the front of the line for execution:

// Adding a function to $(document).ready() the regular way
$(document).ready(function() {
  // processing happens here
});
 
// Adding a function to the front of the list
$.readyList.unshift(function() {
  // processing here
});

Of course, this shouldn't be overused. Other jQuery authors expect functions to run in the order they're added, so they might be thrown off if the readyList is modified too much. Still, a small change like this can save a lot of headache in the right situation.
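And if front-of-the-line isn't quite what you need, the same trick works anywhere in the array. Here's a sketch using splice() to slot a function into the second position (again, jQuery 1.2.x/1.3.x only):

// Hypothetical: run this function second, after whatever is
// currently first on jQuery's readyList.
$.readyList.splice(1, 0, function() {
  // processing here
});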

Notes:
Array manipulation is a bit beyond the scope of this post, but w3schools has a good reference: JavaScript Array Object - w3schools.com

This should work in jQuery 1.3.x and 1.2.x, but will not work in 1.4. The readyList variable in 1.4 is no longer attached to the jQuery object, so it's no longer accessible to outside code.

Jan 11 2010
Jan 11

Devel is a supremely useful module for Drupal development, but if you've never enabled the Development menu block, there are some useful links you might be missing out on. Here are some features of Devel that you might not know about:

Execute PHP
Path: devel/php
Provides a text area for entering PHP code into. Any output (print, print_r, var_dump) is shown in a drupal_set_message.
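For example, you could paste in a one-liner like this to inspect a stored setting (site_name is just an example; dsm() is Devel's message helper):

// Print a stored variable to the messages area via Devel's dsm().
dsm(variable_get('site_name', 'Drupal'));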

PHP Info
Path: devel/phpinfo
Get PHP configuration info from the server your site is running on.

View Theme Registry
Path: devel/theme/registry
Get down deep with the theme info your Drupal site knows about. Great for expert themers.

Function Reference
Path: devel/reference
Lists every function available to your Drupal site without loading any include files. Each function links to the API reference site you can specify at admin/settings/devel, so if it's a core function or a well-documented contrib function, it'll link to its documentation.

Reinstall Modules
Path: devel/reinstall
Uninstall and reinstall modules. This saves a lot of time compared to disabling the module, uninstalling it, and then re-enabling it.

Theme Developer Toggle
Path: devel/devel_themer
Another time saver. I love the Theme Developer module, but it's such a hassle to turn it on, use it for a minute, and then turn it off again right after. This path is a callback that switches the module on and off. Very handy.

View Source Code
Path: devel/source?file=sites/default/settings.php
View the actual, raw PHP source of any file on your website. If this sounds like a security issue to you, remember that every one of these Devel paths requires permission to view it.

Edit Stored Variables
Path: devel/variable
Edit variables stored in the {variable} table. These are the same variables available through variable_get() and variable_set().

Available Form Elements
Path: devel/elements
If you've worked with Form API, you've seen #type declarations like 'textfield', 'textarea', 'fieldset' and so on. This page lists all the types available to you. This can be handy with modules like gmap that silently add really cool form elements for you.

Aug 09 2009
Aug 09

Check out the video of the youngest DrupalCampLA volunteer on the schedule page of the DrupalCampLA website.
