Aug 09 2016

If automated testing is not already part of your development workflow, then it’s time to get started. In this post, I’ll show you how to use Behat to test that your Drupal site is working properly.

The post Testing Your Drupal Site with Behat appeared first on php[architect].

Aug 02 2016


Navigating the administrative backend of Drupal can be daunting.  Here are some helpful things to consider while managing a Drupal development project.

Clearing cache

Knowing how to clear the cache in Drupal is critical. During the development phase, chances are you will be looking at the site while developers are working on it simultaneously. Clearing Drupal's cache should be the first step in troubleshooting, since changes such as theme or module updates might not take effect immediately. The easiest way to clear the cache from the administration menu is Administration > Configuration > Development > Performance > click on 'Clear Cache'. If your site uses the admin menu module, a shortcut is easily accessible at all times on the top menu bar. So the next time something doesn't show up as expected, try clearing the cache first!
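If you use Drush, clearing the cache is a single command from the terminal (the Drupal 7 and Drupal 8 variants are shown below):

$ drush cache-clear all   # Drupal 7
$ drush cache-rebuild     # Drupal 8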

Basic Drupal terminology

A lot of scary words get thrown around when describing project requirements and needs. To get a better understanding of the inner workings of Drupal development, here are some common Drupal terms you should familiarize yourself with.

  1. Node – Pieces of content are known as nodes. They are stored in the database with a unique identifier known as a node ID.
  2. Content Type – Content types are the most basic categorization of nodes. A node’s content type defines what kind of node it will be. Each content type can be assigned a variety of fields and configuration settings. The default ones that Drupal ships with are “article” and “basic page”, but you can create your own as well.
  3. Field – Fields are the datasets that make up a content type. They define the kind of information you want stored within each content type. For example, a Staff content type may consist of First Name and Last Name fields, while an Article content type could have Body and Description fields.
  4. View – One of the most common ways to display aggregate lists of data on the frontend is the Views module. Views are very powerful, as they can define exactly what the end user sees through the use of specific fields, filters, and sorting criteria.
  5. Taxonomy – Taxonomy is the practice of classifying nodes through the use of keywords. A taxonomic structure consists of a vocabulary and its associated taxonomy terms: the vocabulary is the overarching category, while the taxonomy terms are the individual pieces that make it up.

Be aware of different user permission levels

Keep in mind that what you see while logged in might not reflect what other users see. A lot of the time, when a client does not have access to something, it is a permission problem. It is always helpful to have test accounts with various user roles so you can log in and see exactly what other accounts are seeing.

Expecting manual configuration changes on production is risky

Things can become a bit sticky once a client needs manual configuration changes made on the production environment. If the right processes are not in place, the development environment can get out of sync very quickly. Always note the changes needed, and make them across all environments so nothing gets lost. Another option is pulling the production database downstream periodically. You can also use the Features module to capture certain configuration changes so they are committed to the codebase (though there are limitations to this module).
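For example, with Drush site aliases configured, pulling the production database and files downstream can be as simple as the following (the @mysite aliases here are hypothetical):

$ drush sql-sync @mysite.prod @mysite.local
$ drush rsync @mysite.prod:%files @mysite.local:%files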

Jul 30 2016

On a recent project, we had to create multiple sitemaps, one for each of the domains we have set up on the site. We came across some problems that we had to resolve because of the nature of our pURL setup.

Goals

  • We want all of the front pages from each subdomain to be added to the sitemap, and we want to be able to set the rules for them on the XMLSitemap settings page.
  • We want to make sure that URLs belonging to subdomain pages no longer show up in the main domain's sitemap.

Problems

1) Only On The Primary Domain

The XML sitemap module only creates one sitemap based on the primary domain.

2) Prefixes not Distinguished

Our URLs for nodes are set up so that nodes can be prefixed with our subdomain (pURL modifier), and XMLSitemap doesn't see our prefixes as belonging to different sites. At this point, all nodes are added to every single domain's sitemap.

3) URL Formats

Our URLs are not in the correct format when added to the sitemap. They should look like http://subdomain.domain.org/*; however, because we are prefixing them, they show up as http://domain.org/subdomain/*. We want our URLs to look like they come from the right subdomain, not all from the base domain.

Solution

We were able to add the ability to create sitemaps for each of the 15 domains by adding the XMLSitemap domain module. The XMLSitemap domain module allows us to define a domain for each sitemap, generate the sitemap, and serve it on the correct domain.
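Assuming a standard Drush workflow, getting the module in place looks something like this (module machine name assumed):

$ drush dl xmlsitemap_domain
$ drush en -y xmlsitemap_domain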

We added xmlsitemap-dont-write-empty-element-in-xml-sitemap-file-2545050-3.patch to prevent empty elements from being added to the sitemap.

Then we used an xmlsitemap_element_alter implementation inside our own custom module that looks something like this:




<?php

// Note: replace "hook" with your module's machine name.
function hook_xmlsitemap_element_alter(array &$element, array $link, $sitemap) {
  // Determine which subdomain this sitemap is being generated for.
  $domain = $sitemap->uri['options']['base_url'];
  $url_parts = explode('//', $domain);
  $parts = explode('.', $url_parts[1]);
  $subdomain = array_shift($parts);

  // Grab the pURL prefix (if any) off the front of the link's path.
  $current_parts = explode('/', $link['loc']);
  $current_prefix = array_shift($current_parts);

  $modifiers = _get_core_modifiers();

  if (in_array($subdomain, array_keys($modifiers))) {
    // This sitemap belongs to a subdomain.
    if ($current_prefix != $subdomain && $current_prefix != '') {
      // The link belongs to a different subdomain; drop it from this sitemap.
      $element = array();
      return $element;
    }
    else {
      // Strip the pURL prefix and rebuild the URL on the subdomain.
      $pattern = $current_prefix . '/';
      $element['loc'] = $domain . str_replace($pattern, '', $link['loc']);
    }
  }
  else {
    // This is the primary domain's sitemap; drop any links that belong
    // to a subdomain.
    if (in_array($current_prefix, array_keys($modifiers))) {
      $element = array();
      return $element;
    }
  }
}

// Returns the pURL modifiers (subdomains), cached for a day.
function _get_core_modifiers() {
  if (!$cache = cache_get('subdomains')) {
    $result = db_query("SELECT id, value FROM {purl} WHERE provider = 'og_purl_provider'")->fetchAllAssoc('value');
    cache_set('subdomains', $result, 'cache', time() + 86400);
    return $result;
  }
  else {
    return $cache->data;
  }
}

If you have any questions or suggestions, feel free to drop a comment below!

Jul 14 2016

Back in December, Tom Friedhof shared how we set up our Drupal 8 development and build process utilizing Docker. It has worked well over the several months we have used it and worked within its framework. In that time span, however, we experienced a few issues here and there, which led me to come up with an alternative process that keeps the good things we like while resolving the issues we encountered.

First, I'll list some improvements that we'd like to see:

  1. Solve file-syncing issues

    One issue that I kept running into with our development process is that the file-syncing stops working when the host machine powers off in the interim. Even though Vagrant's rsync-auto can still detect changes on the host file-system and initiate an rsync to propel files up into the containers via a mounted volume, the changes do not actually appear within the containers themselves. I had a tough time debugging this issue, and the only resolution in sight was to do a vagrant reload -- a time-consuming process, as it rebuilds every image and runs them again. Having to do this every morning when I turned on my laptop at work was no fun.

  2. Performant access to Drupal's root

    Previously, we had to mount Drupal's document root to our host machine using sshfs to explore it, but that's not exactly performant. For example, performing a grep or ag search within file contents under Drupal 8's core takes ~10 seconds or more. Colleagues using PhpStorm reported that mounting the Drupal root onto the host system brings the IDE to a crawl while it indexes the files.

  3. Leverage Docker Compose

    Docker Compose is a great tool for managing the life-cycle of Docker containers, especially if you are running multiple applications. I felt that it comes with useful features we were missing out on because we were just using Vagrant's built-in Docker provider. Also, with the expectation that Docker for Mac Beta will become stable in the not-so-distant future, I'd like the switch to a native Docker development environment to be as smooth as possible. For me, introducing Docker Compose into the equation is the logical first step.

    dlite came to my attention quite recently; it could fulfill the role of Docker for Mac before its stable release, but I haven't gotten the chance to try it yet.

  4. Use Composer as the first-class package manager

    Our previous build primarily uses Drush to build the Drupal 8 site and download dependencies, relegating the resolution of some Composer dependencies to Composer Manager. Drush worked really well for us in the past and there is no pressing reason to abandon it, but considering that Composer Manager is deprecated for Drupal 8.x and that there is already a Composer project for Drupal sites, I thought it would be a good idea to be proactive, rethink the way we have been doing Drupal builds, and adopt the de-facto way of putting together a PHP application. At the moment, Composer is where it's at.

  5. Faster and more efficient builds

    Our previous build utilizes a Jenkins server (also run as a container) to perform the necessary steps to deploy changes to Pantheon. Since we were mostly deploying from our local machines anyway, I always thought that running the build steps via docker run ... would probably suffice (and it doesn't incur the overhead of a running Jenkins instance). Ultimately, we decided to explore Platform.sh as our deployment target, so basing our build on Composer became almost imperative, as Drupal 8 support (via Drush) on Platform.sh is still in beta.

With these in mind, I'd like to share our new development environment & build process.

1. File & directory structure

Here is a high-level tree-view of the file structure of the project:

/<project_root>
├── Vagrantfile
├── Makefile
├── .platform/ 
│   └── routes.yaml
├── bin/ 
│   ├── drupal*
│   ├── drush*
│   └── sync-host*
├── docker-compose.yml 
├── environment 
├── src/ 
│   ├── .gitignore
│   ├── .platform.app.yaml 
│   ├── Dockerfile
│   ├── LICENSE
│   ├── bin/ 
│   │   ├── drupal-portal*
│   │   └── drush-portal*
│   ├── composer.json
│   ├── composer.lock
│   ├── custom/
│   ├── phpunit.xml.dist
│   ├── scripts/
│   ├── vendor/
│   └── web/ 
└── zsh/ 
    ├── zshrc
    ├── async.zsh
    └── pure.zsh

2. The Vagrantfile

Vagrant.configure("2") do |config|

  config.vm.box = "debian/jessie64"
  config.vm.network "private_network", ip: "192.168.100.47"

  config.vm.hostname = 'activelamp.dev'

  config.vm.provider :virtualbox do |vb|
    vb.name = "activelamp.com"
    vb.memory = 2048
  end

  config.ssh.forward_agent = true

  config.vm.provision "shell",
    inline: "apt-get install -y zsh && sudo chsh -s /usr/bin/zsh vagrant",
    run: "once"

  config.vm.provision "shell",
    inline: "[ -e /home/vagrant/.zshrc ] && echo '' || ln -s /vagrant/zsh/zshrc /home/vagrant/.zshrc",
    run: "once"

  config.vm.provision "shell",
    inline: "[ -e /usr/local/share/zsh/site-functions/prompt_pure_setup ] && echo '' || ln -s /vagrant/zsh/pure.zsh /usr/local/share/zsh/site-functions/prompt_pure_setup",
    run: "once"

  config.vm.provision "shell",
    inline: "[ -e /usr/local/share/zsh/site-functions/async ] && echo '' || ln -s /vagrant/zsh/async.zsh /usr/local/share/zsh/site-functions/async",
    run: "once"

  if ENV['GITHUB_OAUTH_TOKEN']
    config.vm.provision "shell",
      inline: "sudo sed -i '/^GITHUB_OAUTH_TOKEN=/d' /etc/environment  && sudo bash -c 'echo GITHUB_OAUTH_TOKEN=#{ENV['GITHUB_OAUTH_TOKEN']} >> /etc/environment'"
  end

  
  config.vm.provision :docker

  config.vm.provision :docker_compose, yml: "/vagrant/docker-compose.yml", run: "always", compose_version: "1.7.1"

  config.vm.synced_folder ".", "/vagrant", type: "nfs"
  config.vm.synced_folder "./src", "/mnt/code", type: "rsync", rsync__exclude: [".git/", "src/vendor"]
end

Compare this new manifest to the old one and you will notice that we have reduced Vagrant's involvement in defining and managing Docker containers. We are simply using this virtual machine as the Docker host, using the vagrant-docker-compose plugin to provision it with the Docker Compose executable, having it (re)build the images during the provisioning stage, and (re)starting the containers on vagrant up.

We are also setting up Vagrant to sync file changes on src/ to /mnt/code/ in the VM via rsync. This directory in the VM will be mounted into the container as you'll see later.

We are also setting up zsh as the login shell for the vagrant user for an improved experience when operating within the virtual machine.

3. The Drupal 8 Build

For now let's zoom in to where the main action happens: the Drupal 8 installation. Let's remove Docker from our thoughts for now and focus on how the Drupal 8 build works.

The src/ directory contains all files that constitute a Drupal 8 Composer project:

/src/
├── composer.json
├── composer.lock
├── phpunit.xml.dist
├── scripts/
│   └── composer/
├── vendor/ # Composer dependencies
│   └── ...
└── web/ # Web root
    ├── .htaccess
    ├── autoload.php
    ├── core/ # Drupal 8 Core
    ├── drush/
    ├── index.php
    ├── modules/
    ├── profiles/
    ├── robots.txt
    ├── sites/
    │   ├── default/
    │   │   ├── .env
    │   │   ├── config/ # Configuration export files
    │   │   │   ├── system.site.yml
    │   │   │   └── ...
    │   │   ├── default.services.yml
    │   │   ├── default.settings.php
    │   │   ├── files/
    │   │   │   └── ...
    │   │   ├── services.yml
    │   │   ├── settings.local.php.dist
    │   │   ├── settings.php
    │   │   └── settings.platform.php
    │   └── development.services.yml
    ├── themes/
    ├── update.php
    └── web.config

The first step of the build is simply executing composer install within src/. Doing so will download all dependencies defined in composer.lock and scaffold the files and folders necessary for the Drupal installation to work. You can head over to the Drupal 8 Composer project repository and look through the code to see in depth how the scaffolding works.
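If you are starting a project from scratch, the same structure can be scaffolded with Composer itself; per the drupal-composer/drupal-project README at the time, something like:

$ composer create-project drupal-composer/drupal-project:8.x-dev src --stability dev --no-interaction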

3.1 Defining Composer dependencies from custom installation profiles & modules

Since we cannot use the Composer Manager module anymore, we need a different way of letting Composer know that we may have other dependencies defined in other areas in the project. For this let's look at composer.json:

{
    ...
    "require": {
        ...
        "wikimedia/composer-merge-plugin": "^1.3",
        "activelamp/sync_uuids": "dev-8.x-1.x"
    },
    "extra": {
        ...
        "merge-plugin": {
          "include": [
            "web/profiles/activelamp_com/composer.json",
            "web/profiles/activelamp_com/modules/custom/*/composer.json"
          ]
        }
    }
}

We are requiring the wikimedia/composer-merge-plugin and configuring it in the extra section to also read the installation profile's composer.json, as well as those in custom modules within it.

We can define the contrib modules that we need for our site from within the installation profile.

src/web/profiles/activelamp_com/composer.json:

{
  "name": "activelamp/activelamp-com-profile",
  "require": {
    "drupal/admin_toolbar": "^8.1",
    "drupal/ds": "^8.2",
    "drupal/page_manager": "^[email protected]",
    "drupal/panels": "~8.0",
    "drupal/pathauto": "~8.0",
    "drupal/redirect": "~8.0",
    "drupal/coffee": "~8.0"
  }
}

As we create custom modules for the site, any Composer dependencies in them will be picked up every time we run composer update. This replicates what Composer Manager allowed us to do in Drupal 7. Note, however, that unlike Composer Manager, Composer does not care whether a module is enabled or not -- it will always read the module's Composer dependencies and resolve them.

3.2 Drupal configuration

3.2.1 Settings file

Let's peek at what's inside src/web/sites/default/settings.php:




<?php

$settings['container_yamls'][] = __DIR__ . '/services.yml';

// Staged configuration lives in a config/ directory next to this file.
$config_directories[CONFIG_SYNC_DIRECTORY] = __DIR__ . '/config';

// Platform.sh-specific settings; a no-op outside Platform.sh.
include __DIR__ . "/settings.platform.php";

$update_free_access = FALSE;
$drupal_hash_salt = '';

// Local development overrides, if present.
$local_settings = __DIR__ . '/settings.local.php';

if (file_exists($local_settings)) {
  require_once($local_settings);
}

$settings['install_profile'] = 'activelamp_com';
$settings['hash_salt'] = $drupal_hash_salt;

Next, let's look at settings.platform.php:



<?php

// Only apply these settings when running on Platform.sh.
if (!getenv('PLATFORM_ENVIRONMENT')) {
    return;
}

$relationships = json_decode(base64_decode(getenv('PLATFORM_RELATIONSHIPS')), true);

$database_creds = $relationships['database'][0];

$databases['default']['default'] = [
    'database' => $database_creds['path'],
    'username' => $database_creds['username'],
    'password' => $database_creds['password'],
    'host' => $database_creds['host'],
    'port' => $database_creds['port'],
    'driver' => 'mysql',
    'prefix' => '',
    'collation' => 'utf8mb4_general_ci',
];

We return early from this file if PLATFORM_ENVIRONMENT is not set. Otherwise, we'll parse the PLATFORM_RELATIONSHIPS data and extract the database credentials from it.

For our development environment however, we'll do something different in settings.local.php.dist:



<?php

// Read database credentials from the container environment.
$databases['default']['default'] = array(
    'database' => getenv('MYSQL_DATABASE'),
    'username' => getenv('MYSQL_USER'),
    'password' => getenv('MYSQL_PASSWORD'),
    'host' => getenv('DRUPAL_MYSQL_HOST'),
    'driver' => 'mysql',
    'port' => 3306,
    'prefix' => '',
);

We are pulling the database values from the environment, as this is how we'll pass data into the Docker run-time. We also append .dist to the file-name because we don't actually want settings.local.php in version control (otherwise, it would mess up the configuration in non-development environments). We will simply rename this file as part of the development workflow. More on this later.

3.2.2 Staged configuration

src/web/sites/default/config/ contains YAML files that constitute the desired Drupal 8 configuration. These files will be used to seed a fresh Drupal 8 installation with configuration specific to the site. As we develop features, we continually export the configuration entities and place them into this folder so that they are also versioned via Git.

Configuration entities in Drupal 8 are assigned a universally unique ID (UUID). Because of this, configuration files are typically only meant to be imported into the same (or a clone of the) Drupal site they were exported from. The proper approach is usually to get hold of a database dump of the Drupal site and use that to seed a Drupal 8 installation into which you plan to import the configuration files. To streamline the process during development, we wrote the drush command sync-uuids, which updates the UUIDs of the active configuration entities of a non-clone site (i.e., a freshly installed Drupal instance) to match those found in the staged configuration. We packaged it as a Composer package named activelamp/sync_uuids.
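To illustrate the idea, here is a simplified sketch (not the actual activelamp/sync_uuids source) of what syncing UUIDs boils down to: copying the uuid key of each staged config entity into the corresponding active configuration.

<?php

use Drupal\Core\Config\FileStorage;

// Staged configuration, read from the sync directory defined in settings.php.
$sync_storage = new FileStorage(config_get_config_directory(CONFIG_SYNC_DIRECTORY));
// Active configuration storage.
$active_storage = \Drupal::service('config.storage');

foreach ($sync_storage->listAll() as $name) {
  $staged = $sync_storage->read($name);
  $active = $active_storage->read($name);
  // Only touch configuration that exists in both and carries a UUID.
  if ($active && !empty($staged['uuid'])) {
    $active['uuid'] = $staged['uuid'];
    $active_storage->write($name, $active);
  }
}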

The complete steps for the Drupal 8 build are the following:

$ cd src
$ composer install
$ [ -f web/sites/default/settings.local.php ] && : || cp web/sites/default/settings.local.php.dist web/sites/default/settings.local.php
$ drush site-install activelamp_com --account-pass=default-pass -y
$ drush pm-enable config sync_uuids -y
$ drush sync-uuids -y
$ drush config-import -y

These build steps result in a fresh Drupal 8 installation based on the activelamp_com installation profile, with the proper configuration entities from web/sites/default/config. The result will be similar to any site built from the same code-base, minus any of the actual content. Sometimes that is all you need.

Now let's look at the development workflow utilizing Docker. Let's start with the src/Dockerfile:

FROM php:7.0-apache

RUN apt-get update && apt-get install -y \
  vim \
  git \
  unzip \
  wget \
  curl \
  libmcrypt-dev \
  libgd2-dev \
  libgd2-xpm-dev \
  libcurl4-openssl-dev \
  mysql-client

ENV PHP_TIMEZONE America/Los_Angeles


RUN docker-php-ext-install -j$(nproc) iconv mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
 && docker-php-ext-install -j$(nproc) gd pdo_mysql curl mbstring opcache


RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/local/bin/composer
RUN echo 'export PATH="$PATH:/root/.composer/vendor/bin"' >> $HOME/.bashrc


RUN composer global require drush/drush:8.1.2 drupal/console:0.11.3
RUN $HOME/.composer/vendor/bin/drupal init
RUN echo source '$HOME/.console/console.rc' >> $HOME/.bashrc


RUN echo "date.timezone = \"$PHP_TIMEZONE\"" > /usr/local/etc/php/conf.d/timezone.ini
ARG github_oauth_token

RUN [ -n "$github_oauth_token" ] && composer config -g github-oauth.github.com $github_oauth_token || echo ''

RUN [ -e /etc/apache2/sites-enabled/000-default.conf ] && sed -i -e "s/\/var\/www\/html/\/var\/www\/web/" /etc/apache2/sites-enabled/000-default.conf || sed -i -e "s/\/var\/www\/html/\/var\/www\/web/" /etc/apache2/apache2.conf


COPY bin/drush-portal /usr/bin/drush-portal
COPY bin/drupal-portal /usr/bin/drupal-portal

COPY . /var/www/
WORKDIR /var/www/

RUN composer --working-dir=/var/www install

The majority of the Dockerfile should be self-explanatory. The important bits are the provisioning of a GitHub OAuth token and the addition of the {drupal,drush}-portal executables, which are essential for the bin/{drush,drupal} pass-through scripts.

Provisioning a GitHub OAuth token

Sometimes it is necessary to configure Composer to use an OAuth token to authenticate on GitHub's API when resolving dependencies. These tokens must remain private and should not be committed into version control. We declare that our Docker build will take github_oauth_token as a build argument. If present, it will configure Composer to authenticate using it to get around API rate limits. More on this later.
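Outside of Docker Compose, the same argument can be supplied directly to a docker build, e.g.:

$ docker build --build-arg github_oauth_token=$GITHUB_OAUTH_TOKEN ./src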

DrupalConsole and Drush pass-through scripts

Our previous build involved opening up an SSH port on the container running Drupal so that we could execute Drush commands remotely. We can already run Drush commands inside the container without SSH access by utilizing docker run, but the commands get lengthy -- extra lengthy, in fact, because we also need to execute them from within the Vagrant machine using vagrant ssh.

Here are a couple of scripts that make it easier to execute drush and drupal commands from the host machine.

bin/drupal:

#!/usr/bin/env bash
cmd="docker-compose -f /vagrant/docker-compose.yml run --no-deps --rm server drupal-portal $@"
vagrant ssh -c "$cmd"

bin/drush:

#!/usr/bin/env bash
cmd="docker-compose -f /vagrant/docker-compose.yml run --no-deps --rm server drush-portal $@"
vagrant ssh -c "$cmd"

This allows us to run Drush commands via bin/drush and DrupalConsole commands via bin/drupal, with the arguments passed over to the executables in the container.

Here are the contents of src/bin/drupal-portal and src/bin/drush-portal:

src/bin/drupal-portal:

#!/usr/bin/env bash
/root/.composer/vendor/bin/drupal --root=/var/www/web $@

src/bin/drush-portal:

#!/usr/bin/env bash
/root/.composer/vendor/bin/drush --root=/var/www/web $@

The above scripts are added to the container and are essential to making sure drush and drupal commands are applied to the correct directory.

In order for this to work, we actually have to remove Drush and DrupalConsole from the project's composer.json file, since they are now installed globally in the container. This is easily done via the composer remove command.
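For example, from within src/:

$ composer remove drush/drush drupal/console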

The docker-compose.yml file

To tie everything together, we have this Compose file:

version: '2'
services:
  server:
    build:
      context: ./src
      args:
        github_oauth_token: ${GITHUB_OAUTH_TOKEN}
    volumes:
      - /mnt/code:/var/www
      - composer-cache:/root/.composer/cache
    env_file: environment
    links:
      - mysql:mysql
    ports:
      - 80:80
  mysql:
    image: 'mysql:5.7.9'
    env_file: environment
    volumes:
      - database:/var/lib/mysql

volumes:
  database: {}
  composer-cache: {}

There are four things of note:

  1. github_oauth_token: ${GITHUB_OAUTH_TOKEN}

    This tells Docker Compose to use the environment variable GITHUB_OAUTH_TOKEN as the github_oauth_token build argument. If not empty, this will effectively provision Composer with an OAuth token. If you go back to the Vagrantfile, you will see that this environment variable is set in the virtual machine (because docker-compose runs inside it) by appending it to the /etc/environment file. All that's needed is for the environment variable to be present in the host environment (OS X) during the provisioning step.

    For example, it can be provisioned via: GITHUB_OAUTH_TOKEN=<your-token> vagrant provision

  2. composer-cache:/root/.composer/cache

    This tells Docker to mount a volume on /root/.composer/cache so that we can persist the contents of this directory between restarts. This ensures that composer install and composer update are fast and don't have to re-download packages from the web every time we run them. This drastically improves build speeds.

  3. database:/var/lib/mysql

    This will tell Docker to persist the MySQL data between builds as well. This is so that we don't end up with an empty database whenever we restart the containers.

  4. env_file: environment

    This lets us define all environment variables in a single file, for example:

    MYSQL_USER=activelamp
    MYSQL_ROOT_PASSWORD=root
    MYSQL_PASSWORD=some-secret-passphrase
    MYSQL_DATABASE=activelamp
    DRUPAL_MYSQL_HOST=mysql

    We simply configure each service to read environment variables from the same file, as they both need these values.

We employ rsync to sync files from the host machine to the VM, since it offers by far the fastest file I/O compared to the built-in alternatives in Vagrant + VirtualBox. In the Vagrantfile we specified that we sync src/ to /mnt/code/ in the VM. Following this, we configured Docker Compose to mount this directory into the server container. This means that any file changes we make on OS X will get synced up to /mnt/code, and ultimately into /var/www in the container. However, this only covers changes that originate from the host machine.

To sync changes that originate from the container -- files that were scaffolded by drupal generate:*, Composer dependencies, and Drupal 8 core itself -- we'll use the fact that our project root is also available at /vagrant as a mount in the VM. We can use rsync to sync files the other way: rsyncing from /mnt/code to /vagrant/src brings file changes back up to the host machine.

Here is a script I wrote that does an rsync but will ask for confirmation before doing so to avoid overwriting potentially uncommitted work:

#!/usr/bin/env bash

echo "Dry-run..."

args=$@

diffs="$(vagrant ssh -- rsync --dry-run --itemize-changes $args | grep '^[>)"

if [ -z "$diffs" ]; then
  echo "Nothing to sync."
  exit 0
fi

echo "These are the differences detected during dry-run. You might lose work.  Please review before proceeding:"
echo "$diffs"
echo ""
read -p "Confirm? (y/N): " choice

case "$choice" in
  y|Y ) vagrant ssh -- rsync $args;;
  * ) echo "Cancelled.][dfLDS]\|^\*deleted'";;
esac

We are keeping this generic and not baking in the paths, because we might want to sync arbitrary files to arbitrary destinations.

We can use this script like so:

$ bin/sync-host --recursive --progress --verbose --exclude=".git/" --delete-after /mnt/code/ /vagrant/src/

If the rsync would result in file changes on the host machine, the script brings up a summary of the changes and asks whether you want to proceed.

Makefile

We are using make as our task-runner just like in the previous build. This is really useful for encapsulating operations that are common in our workflow:


sync-host:
	bin/sync-host --recursive --progress --verbose --delete-after --exclude='.git/' /mnt/code/ /vagrant/src/

sync:
	vagrant rsync-auto

sync-once:
	vagrant rsync

docker-rebuild:
	vagrant ssh -- docker-compose -f /vagrant/docker-compose.yml build

docker-restart:
	vagrant ssh -- docker-compose -f /vagrant/docker-compose.yml up -d

composer-install:
	vagrant ssh -- docker-compose -f /vagrant/docker-compose.yml run --no-deps --rm server composer --working-dir=/var/www install

composer-update:
	vagrant ssh -- docker-compose -f /vagrant/docker-compose.yml run --no-deps --rm server composer --working-dir=/var/www update --no-interaction



lock-file:
	@vagrant ssh -- cat /mnt/code/composer.lock

install-drupal: composer-install
	vagrant ssh -- '[ -f /mnt/code/web/sites/default/settings.local.php ] && echo '' || cp /mnt/code/web/sites/default/settings.local.php.dist /mnt/code/web/sites/default/settings.local.php'
	-bin/drush si activelamp_com --account-pass=secret -y
	-bin/drush en config sync_uuids -y
	bin/drush sync-uuids -y
	[ $(ls -l src/web/sites/default/config/*.yml | wc -l) -gt 0  ] && bin/drush cim -y || echo "Config is empty. Skipping import..."

init: install-drupal
	yes | bin/sync-host --recursive --progress --verbose --delete-after --exclude='.git/' /mnt/code/ /vagrant/src/

platform-ssh:
	ssh <project-id>@ssh.us.platform.sh

The Drupal 8 build steps are simply translated to use bin/drush and the actual paths within the virtual machine in the install-drupal task. After cloning the repository for the first time, a developer should just be able to execute make init, sit back with a cup of coffee and wait until the task is complete.

Try it out yourself!

I wrote the docker-drupal-8 Yeoman generator so that you can easily give this a spin. Feel free to use it to look around and see it in action, or even to start off your Drupal 8 sites in the future:

$ npm install -g yo generator-docker-drupal-8
$ mkdir myd8
$ cd myd8
$ yo docker-drupal-8

Just follow the instructions and, once complete, run vagrant up && make docker-restart && make init to get it up and running.

If you have any questions, suggestions, anything, feel free to drop a comment below!

Jul 08 2016


First things first: what is Drupal 8? Drupal 8 is the biggest update in Drupal's history. Creating content is easier, and every built-in theme is responsive. With over 200 new features, Drupal 8 is an improved suite of tools and features not seen in Drupal 7, and it can be the application backbone for your projects. A few months ago we discussed how you can benefit from Drupal 8. But now, let's talk about migration and things you may want to consider.

Drupal 8 Migration:

Apple comes out with a new iPhone every how many months? Joking aside, we all know it seems like they are constantly creating a mad rush for people to upgrade to the next best thing. The same is true with Drupal: there is no right or wrong time to migrate to Drupal 8. However, if you want to be up-to-date with the latest technology, you typically choose to migrate just as you would get the newest version of the iPhone.

Things to consider whenever migrating to Drupal 8:

  • The availability of modules from the community: many of them have not yet been fully converted to Drupal 8.
  • Do you have any custom modules that will have to be ported to Drupal 8?
  • Is your theme available for Drupal 8 or does it need to be ported (converted) to Drupal 8?
  • Does all content need to be migrated? This may be a good time to prune your content.
  • If your site is complex, you may require additional modules, and possibly custom modules to help migrate.

What are your Drupal needs? Feel free to get in touch so we can discuss your next Drupal project.

Jun 23 2016

These days, it’s pretty rare that we build websites that aren’t some kind of redesign. Unless it’s a brand new company or project, the client usually has some sort of web presence already, and for one reason or another, they’ve decided to replace it with something shiny and new.

In an ideal world, the existing system has been built in a sensible way, with a sound content strategy and good separation of concerns, so all you need to do is re-skin it. In the Drupal world, this would normally mean a new theme, or if we’re still in our dream Zen Garden scenario, just some new CSS.

However, the reality is usually different. In my experience, redesigns are hardly ever just redesigns. When a business is considering significant changes to the website like some form of re-branding or refresh, it’s also an opportunity to think about changing the content, or the information architecture, or some aspects of the website functionality. After all, if you’re spending time and money changing how your website looks, you might as well try to improve the way it works while you’re at it.

So the chances are that your redesign project will need to change more than just the theme, but if you’re unlucky, someone somewhere further back along the chain has decided that it’s ‘just a re-skinning’, and therefore it should be a trivial job, which shouldn’t take long. In the worst case scenario, someone has given the client the impression that the site just needs a new coat of paint, but you’re actually inheriting some kind of nasty mess with unstable foundations that should really be fixed before you even think about changing how it looks. Incidentally, this is one reason why sales people should always consult with technical people who’ve seen under the bonnet of the system in question before agreeing prices on anything.

Even if the redesign is relatively straightforward from a technical point of view, perhaps it’s part of a wider rebranding, and there are associated campaigns whose dates are already expensively fixed, but thinking about the size of the website redesign project happened too late.

In other words, for whatever reason, it’s not unlikely that redesign projects will find themselves behind schedule, or over budget - what should you do in this situation? The received agile wisdom is that time and resources are fixed, so you need to flex on scope. But what’s the minimum viable product for a redesign? When you’ve got an existing product, how much of it do you need to rework before you put the new design live?

This is a question that I’m currently considering from a couple of angles. In the case of one of my personal projects, I’m upgrading an art gallery listings site from Drupal 6 to Drupal 8. The old site is the first big Drupal site I built, and is looking a little creaky in places. The design isn’t responsive, and the content editing experience leaves something to be desired. However, some of the contributed modules don’t have Drupal 8 versions yet, and I won’t have time to do the work involved to help get those modules ready, on top of the content migration, the new theme, having a full-time job and a family life, and all the rest of it.

In my day job, I’m working with a large multinational client on a set of sites where there’s no Drupal upgrade involved, but the suggested design does include some functional changes, so it isn’t just a re-theming. The difficulty here is that the client wants a broader scope of change than the timescales and budget allow.

When you’re in this situation, what can you do? As usual with interesting questions, the answer is ‘it depends’. Processes like impact mapping can help you to figure out the benefits that you get from your redesign. If you’ve looked at your burndown rates, and know that you’re not going to hit the deadline, what can you drop? Is the value that you gain from your redesign worth ditching any of the features that won’t be ready? To put it another way, how many of your existing features are worth keeping? A redesign can (and should) be an opportunity for a business to look at their content strategy and consider rationalising the site. If you’ve got a section on your site that isn’t adding any value, or isn’t getting any traffic, and the development team will need to spend time making it work in the new design, perhaps that’s a candidate for the chop?

We should also consider the Pareto principle when we’re structuring our development work, and start by working on the things that will get us most of the way there. This fits in with an important point made by scrum, which can sometimes get forgotten about: that each sprint should deliver “a potentially shippable increment”. In this context, I would interpret this to mean that we should make sure that the site as a whole doesn’t look broken, and then we can layer on the fancy bits afterwards, similar to a progressive enhancement approach to dealing with older browsers. If you aren’t sure whether you’ll have time to get everything done, don’t spend an excessive amount of time polishing one section of the site to the detriment of basic layout and styling that will make the whole site look reasonably good.

Starting with a style guide can help give you a solid foundation to build upon, by enabling you to make sure that all the components on the site look presentable. You can then test those components in their real contexts. If you’ve done any kind of content audit (and somebody really should have done), you should have a good idea of the variety of pages you’ve got. At the very least, your CMS should help you to know what types of content you have, so that you can take a sample set of pages of each content type or layout type, and you’ll be able to validate that they look good enough, whatever that means in your context.

There is another option, though. You don’t have to deliver all the change at once. Can you (and should you) do a partial go-live with a redesign? Depending on how radical the redesign is, the attitudes to change and continuous delivery within your organisation and client, and the technology stack involved, it may make sense to deliver changes incrementally. In other words, put the new sections of the site live as they’re ready, and keep serving the old bits from the existing system. There may be brand consistency, user experience, and content management process reasons why you might not want to do this, but it is an option to consider, and it can work.

On one previous project, we were carrying out a simultaneous redesign and Drupal 6 to 7 upgrade, and we were able to split traffic between the old site and the new one. It made things a little bit more complicated in terms of handling user sessions, but it did give the client the freedom to decide when they thought we had enough of the new site for them to put it live. In the end, they decided that the answer was ‘almost all of it’.

So what’s the way forward?

In the case of my art gallery listings site, the redesign itself has a clear value, and with Drupal 6 being unsupported, I need to get the site onto Drupal 8 sooner rather than later. There’s definitely a point that will come fairly soon, even if I don’t get to spend as long as I’d like working on it, where the user experience will be improved by the new site, even though some of the functionality from the old site isn’t there, and isn’t likely to be ready for a while. I’m my own client on that project, so I’m tempted to just put the redesign live anyway.

In the case of my client, there are decisions to be made about which of the new features need to be included in the redesign. De-scoping some of the more complex changes will bring the project back into the realm of being a re-theming, the functional changes can go into subsequent releases, and hopefully we’ll hit the deadline.

A final point that I’d like to make is that we shouldn’t fall into the trap of thinking of redesigns as big-bang events that sit outside the day-to-day running of a site. Similarly, if you’re thinking about painting your house, you should also think about whether you also need to fix the roof, and when you’re going to schedule the cleaning. Once the painting is done, you’ll still be living there, and you’ll have the opportunity to do other jobs if and when you have the time, energy, and money to do so.

Along with software upgrades, redesigns should be considered as part of a business’s long-term strategy, and they should be just one part of a plan to keep making improvements through continuous delivery.

Jun 15 2016

The web development community has a long list of requirements, languages, frameworks, constructs, and tools that most companies or bosses want you to know.

This list doesn't include everything you need to know, such as PHP, HTML, CSS, responsive web development principles, and Drupalisms. Here are some of the important skills, concepts, and tools that we think you should know as a beginner Drupal developer.

1. Version Control

Every developer should have some experience with version control and versioning. Version control is an essential part of the Drupal community: it allows Drupal projects to be easily managed, maintained, and contributed to in a uniform manner. Version control will also most likely be used in-house to manage each client project.

2. Command Line Interface (CLI)

It isn't necessary to be a CLI ninja, but being able to work comfortably in a CLI is very important. One of the advantages of using a CLI is productivity: you can automate repetitive tasks, perform tasks without jumping from application to application, and use tools like Drush to do things that would normally take three or more mouse clicks to accomplish.
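For instance, chores that take several clicks in Drupal's admin UI are single Drush commands:

$ drush en views -y   # enable a module
$ drush updb -y       # run pending database updates
$ drush cc all        # clear all caches (Drupal 7)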

3. Package Managers

Using package managers is important to installing Drupal. Whether it is installing Sass or Bootstrap from npm, or Drush from Composer, it is important to know how package managers work and exactly what you are running before executing commands on your computer.
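A couple of typical examples, using npm and Composer respectively:

$ npm install -g node-sass
$ composer global require drush/drush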

4. Contributing Back

An important part of the Drupal community is contributing back to projects and core. When you find an issue, such as something that just doesn't seem to work correctly, or you would like to add functionality to Drupal, you should think about giving back to the community. If you find an issue in an existing project or in core, check whether there is an existing ticket for it. If there isn't, you can create one; and if you can debug and resolve the issue, you can contribute a patch. If you don't know exactly how to debug the issue, you can have an open conversation with other developers and maintainers to help resolve it. Contributing and interacting in the community moves Drupal forward.

5. CSS Preprocessors

Within the last couple of years, there has been a movement toward CSS preprocessors, which add a programmatic feel to CSS2 and CSS3. Some are against preprocessors because they add a little more overhead to a project. But whether you choose them or not, you may have a client or framework that uses one, so you should be familiar with how a preprocessor works.

6. A Framework

Within the Drupal community, there is often talk of headless Drupal, and we have seen some interesting ideas come from its adopters. Headless Drupal setups usually use a framework for the front-end. It may be Angular, Angular 2, Backbone, Ember, or something different; however, most of these frameworks have two things in common: they are written in JavaScript, and they almost always make use of templating.

7. Templating

It is important to know the principles of templating so that you can easily pick up and learn new frameworks. Whether it is Mustache, Twig, Jade, or Angular's templating syntax, there are similarities between them, and the principles can be applied to each language, letting you step from one to the next with a smaller learning curve.

8. Basic Debugging

Debugging a problem correctly can save you valuable time by taking you directly to the cause of an issue instead of looking over each line of code one by one. It is essential to know how to do basic debugging when working with Drupal. Sometimes the error messages give you enough information; other times it is necessary to step into Devel or XDebug and step through the project to find the exact location where the code misbehaves so that you can start to solve the problem.

9. Unit Testing / Code Testing

Testing your own code is important. When it comes to code testing you have many options, from TDD- and BDD-style unit tests that cover your classes, to linting that makes sure you are writing "good", standardized code. Linting can be helpful for writing code that others can easily navigate, and it sets up some best practices for you to follow.

10. A CMS

When starting with Drupal, it might be good to have familiarity with a CMS platform before jumping in. There are advantages to knowing the constructs of other CMS platforms and being familiar with working within a platform. However, when working with Drupal, it is important to think about the way Drupal works and not get stuck in the way other CMS platforms accomplish goals.

Conclusion

As a web developer, it is important to know many concepts and technologies. Many companies will not require you to know everything, do everything and be a jack-of-all-trades. In technology, there are so many new tools, frameworks, and languages coming out daily that it is impossible to stay on top of them all. It is far better to get a good base understanding of core web concepts that can be applied to multiple languages, tools, and technologies and then specialize.

Did I miss something you feel is important? Is there something you would like to have seen on the list? Leave a comment below.

Jun 07 2016

Continuing from Evan's blog post on building pages with Paragraphs and writing custom blocks of content as fields, I will walk you through how to create a custom field-formatter in Drupal 8 by example.

A field-formatter is the last piece of code to go with the field-type and the field-widget that Evan wrote about in the previous blog post. While the field-type tells Drupal what data comprises a field, the field-formatter is responsible for telling Drupal how to display the data stored in the field.

To recap, we defined a hashtag_search field type in the previous blog post whose instances will be composed of two items: the hashtag to search for, and the number of items to display. We want to convert this data into a list of the most recent n tweets with the specified hashtag.

A field-formatter is a Drupal plugin, just like its respective field-type and field-widget. They live in /src/Plugin/Field/FieldFormatter/ and are namespaced appropriately: Drupal\my_module\Plugin\Field\FieldFormatter.



<?php

namespace Drupal\my_module\Plugin\Field\FieldFormatter;

use Drupal\Core\Field\FieldItemListInterface;
use Drupal\Core\Field\FormatterBase;
use Drupal\Core\Form\FormStateInterface;

/**
 * (Annotation values here are illustrative.)
 *
 * @FieldFormatter(
 *   id = "hashtag_formatter",
 *   label = @Translation("Hashtag search"),
 *   field_types = {
 *     "hashtag_search"
 *   }
 * )
 */
class HashtagFormatter extends FormatterBase
{

    public function viewElements(FieldItemListInterface $items, $langcode)
    {
        return array();
    }
}

We tell Drupal important details about our new field-formatter using a @FieldFormatter class annotation. We declare its unique id; a human-readable, translatable label; and a list of field_types that it supports.

The most important method in a field-formatter is the viewElements method. Its responsibility is to return a render array based on the field data passed in as $items, a Drupal\Core\Field\FieldItemListInterface instance.

Let's look at the code:



use Drupal\my_module\Twitter\TwitterClient;
use Drupal\my_module\Twitter\TweetFormatter;

...

    /** @var TwitterClient */
    protected $twitter;

    /** @var TweetFormatter */
    protected $formatter;

    ...

    public function viewElements(FieldItemListInterface $items, $langcode)
    {
        $element = [];

        foreach ($items as $delta => $item) {

            try {
                // Query Twitter for the most recent tweets with this hashtag.
                $results = $this->twitter->search($item->hashtag_search, $item->count);

                // Convert each tweet's raw text into HTML, linking up the
                // mentions, hashtags, and URLs found in its entities.
                $statuses = array_map(function ($s) {
                    $s['formatted_text'] = $this->formatter->toHtml($s['text'], $s['entities']);
                    return $s;
                }, $results['statuses']);

                // Render a "#hashtag" heading above the list of tweets.
                if (!empty($statuses)) {
                    $element[$delta]['header'] = [
                        '#markup' => '<h3>#' . $item->hashtag_search . '</h3>',
                    ];
                }

                foreach ($statuses as $status) {
                    $element[$delta]['status'][] = [
                        '#theme' => 'my_module_status',
                        '#status' => $status,
                    ];
                }
            }
            catch (\Exception $e) {
                $this->logger->error('[:exception]: %message', [
                    ':exception' => get_class($e),
                    '%message' => $e->getMessage(),
                ]);
                continue;
            }
        }

        $element['#attached']['library'][] = 'my_module/twitter_intents';

        return $element;
    }

    ...

See https://github.com/bezhermoso/tweet-to-html-php for how TweetFormatter works. Also, you can find the source-code for the basic Twitter HTTP client here: https://gist.github.com/bezhermoso/5a04e03cedbc77f6662c03d774f784c5

Custom theme renderer

As shown above, each individual tweet is rendered using the my_module_status theme. We'll define it in the my_module.module file:




<?php

/**
 * Implements hook_theme().
 */
function my_module_theme($existing, $type, $theme, $path) {
  $theme = [];
  $theme['my_module_status'] = array(
    'variables' => array(
      'status' => NULL
    ),
    'template' => 'twitter-status',
    'render element' => 'element',
    'path' => $path . '/templates'
  );

  return $theme;
}

With this, we are telling Drupal to use the template file modules/my_module/templates/twitter-status.html.twig for any render array using my_module_status as its theme.
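The template itself can then be as simple as the following sketch (the markup is assumed; the status variable comes from the hook_theme() definition above):

{# modules/my_module/templates/twitter-status.html.twig #}
<div class="twitter-status">
  {{ status.formatted_text|raw }}
</div>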

Render caching

Drupal 8 does a good job caching content: typically a field formatter is only called once, and the resulting render arrays are cached for subsequent page loads until the Drupal cache is cleared. We don't want our Twitter block to be cached for that long, but since it is always good practice to keep caching enabled, we can define how caching should be applied to our Twitter blocks. This is done by adding cache definitions to the render array before we return it:



      public function viewElements(...)
      {

        ...

        $element['#attached']['library'][] = 'my_module/twitter_intents';
        
        $element['#cache']['max-age'] = 60 * 5;

        return $element;
      }

Here we are telling Drupal to keep the render array in cache for 5 minutes. Drupal will still cache the rest of the page's elements however they want to be cached, but it will call our field formatter again -- pulling fresh data from Twitter -- if 5 minutes have passed since the last time it was called.

Jun 04 2016

Tom Friedhof

Senior Software Engineer

Tom has been designing and developing for the web since 2002 and got involved with Drupal in 2006. Previously he worked as a systems administrator for a large mortgage bank, managing servers and workstations, which is where he discovered his passion for automation and scripting. In his free time he enjoys camping with his wife and three kids.

Jun 03 2016

On a recent project we had to create a section that is basically a Twitter search for a hashtag. It needed to be usable in different sections of the layout and work the same everywhere. Since we were using the Paragraphs module, we came up with a pretty nifty (we think) solution: a custom field that solved this particular problem for us. I will walk you through how to create a custom field/widget/formatter for Drupal 8. There are Drupal Console commands for generating boilerplate code for this, which I will list before going through each of the methods for the components.

Field Type creation

The first thing to do is create a custom field. In a custom module (here as "my_module"), either run drupal:generate:fieldtype or create a file called HashtagSearchItem.php in src/Plugin/Field/FieldType. The basic structure for the class will be:



<?php

namespace Drupal\my_module\Plugin\Field\FieldType;

use Drupal\Core\Field\FieldItemBase;
use Drupal\Core\Field\FieldStorageDefinitionInterface;
use Drupal\Core\Form\FormStateInterface;
use Drupal\Core\Language\LanguageInterface;
use Drupal\Core\TypedData\DataDefinition;

/**
 * (Annotation values here are illustrative.)
 *
 * @FieldType(
 *   id = "hashtag_search",
 *   label = @Translation("Hashtag search"),
 *   default_widget = "hashtag_search_widget",
 *   default_formatter = "hashtag_formatter"
 * )
 */
class HashtagSearchItem extends FieldItemBase {

}

Next, implement a few methods that tell Drupal how our field is structured. Provide default field settings for the field; here, that is the count of tweets to pull. This returns an array of default settings keyed by setting name.



  
  public static function defaultFieldSettings() {
    return [
      'count' => 6
    ] + parent::defaultFieldSettings();
  }

Then provide the field item's properties. In this case there will be an input for the hashtag and a count. Each property is keyed by its name and holds a DataDefinition describing what the property will contain.


  
  public static function propertyDefinitions(FieldStorageDefinitionInterface $field_definition) {
    $properties = [];
    $properties['hashtag_search'] = DataDefinition::create('string')
      ->setLabel(t('The hashtag to search for.'));
    $properties['count'] = DataDefinition::create('integer')
      ->setLabel(t('The count of twitter items to pull.'));
    return $properties;
  }

Then provide a schema for the field, with columns for the properties we created above.


  
  public static function schema(FieldStorageDefinitionInterface $field_definition) {
    return [
      'columns' => [
        'hashtag_search' => [
          'type' => 'varchar',
          'length' => 32,
        ],
        'count' => [
          'type' => 'int',
          'default' => 6
        ]
      ]
    ];
  }

Field widget creation

Next, create the widget for the field, which provides the actual form element and its settings. Either run drupal:generate:fieldwidget or create a file in src/Plugin/Field/FieldWidget/ called HashtagSearchWidget.php. This is the class skeleton:



<?php

namespace Drupal\my_module\Plugin\Field\FieldWidget;

use Drupal\Core\Field\FieldItemListInterface;
use Drupal\Core\Field\WidgetBase;
use Drupal\Core\Form\FormStateInterface;
use Drupal\Core\Render\Element;

/**
 * (Annotation values here are illustrative.)
 *
 * @FieldWidget(
 *   id = "hashtag_search_widget",
 *   label = @Translation("Hashtag search widget"),
 *   field_types = {
 *     "hashtag_search"
 *   }
 * )
 */
class HashtagSearchWidget extends WidgetBase {

}

Then implement several methods. Provide a default count of tweets to pull for new fields, along with the settings form and settings summary for the field item:


  
  public static function defaultSettings() {
    return [
      'default_count' => 6,
    ] + parent::defaultSettings();
  }

  
  public function settingsForm(array $form, FormStateInterface $form_state) {
    $elements = [];
    $elements['default_count'] = [
      '#type' => 'number',
      '#title' => $this->t('Default count'),
      '#default_value' => $this->getSetting('default_count'),
      '#empty_value' => '',
      '#min' => 1
    ];

    return $elements;
  }

  
  public function settingsSummary() {
    $summary = [];
    $summary[] = t('Default count: @count', array('@count' => $this->getSetting('default_count')));

    return $summary;
  }

Then create the actual form element. Add the hashtag textfield and the count number field, and wrap them in a fieldset for a better experience:


  
  public function formElement(FieldItemListInterface $items, $delta, array $element, array &$form, FormStateInterface $form_state) {
    $item = $items[$delta];

    $element['hashtag_search'] = [
      '#type' => 'textfield',
      '#title' => $this->t('Hashtag'),
      '#required' => FALSE,
      '#size' => 60,
      '#default_value' => (!$item->isEmpty()) ? $item->hashtag_search : NULL,
    ];

    $element['count'] = [
      '#type' => 'number',
      '#title' => $this->t('Pull count'),
      '#default_value' => $this->getSetting('default_count'),
      '#size' => 2
    ];

    $element += [
      '#type' => 'fieldset',
    ];

    return $element;
  }

In part 2, Bez shows you how to pull the tweets and create a field formatter for displaying them. You can read that post here!

May 17 2016
May 17

Actually, we never left. We didn't stop building Drupal sites, even through the long release cycle. However, we did move our company website, activelamp.com, off of Drupal about 18 months ago. Our company site had been built on Drupal since the Drupal 4.7 days. That was back when it started to become uncool to write and maintain your own home-grown CMS. I eventually found Drupal, ditched my custom CMS, and never looked back.

Our site started on Drupal 4.7 and was upgraded to Drupal 5, then Drupal 6, and then Drupal 7, each at the beginning of its release cycle. About 18 months ago, when our site was in dire need of an update, we evaluated Drupal 8, but with no release date in sight and no desire to chase HEAD and develop on unstable APIs, we decided to go a different route and build our updated site on Jekyll, a popular static site generator. It's more fun to tinker with new technology when working on non-billable stuff, which is what we did. We brushed up on our Ruby skills and built out a Jekyll site (which is this site you're looking at, if you're reading this blog post before Q3 of 2016).

We're getting ready for another update to our company website and moving back to Drupal to do it. Jekyll was great, but it came with its disadvantages over something like Drupal. This post will highlight some of the advantages and disadvantages of working with Jekyll the past 18 months, as well as highlight why we're excited to put activelamp.com on Drupal 8 in Q3 of this year.

Getting off the Island

If you've been around the Drupal community for a few years, you've probably heard the phrase "Get off the island". There was, and still is, a big push to bring other technologies into the Drupal stack and rid ourselves of NIH Syndrome -- Not Invented Here Syndrome.

We as a team took this movement quite literally and started doing more than just Drupal. We began taking on projects utilizing the full-stack Symfony framework, Laravel, AngularJS, Ember, Express / Node, Hapi, and Jekyll. We had successfully gotten off the island, so to speak, and it felt good. We decided to build activelamp.com on Jekyll, which has several advantages over using a CMS like Drupal.

Advantages of Static Generators

Having a statically generated site has huge advantages. Let's review a few of them:

Performance / Scalability

You don't need a complex hosting setup to host your site. We are currently hosting activelamp.com on S3, a simple storage service provided by Amazon Web Services. In fact, several months after we launched activelamp.com, we built a Jekyll site for Riot Games, also hosted on S3. That start-of-season League of Legends site handled millions of requests per day from S3. Not bad for such a highly trafficked site. No moving parts equals a fast site.

Security

Since Jekyll sites are static HTML, there isn't a backend to exploit. There are no scripts that actually run on the server. This means you don't have to stay up-to-date with security updates -- there are none.

Structured Content

The final output of a Jekyll site is a static HTML site, but we still have structure when creating content. On activelamp.com, we have blog content, video content, job postings, etc. We add content using Markdown, with a little bit of YAML at the top of the file, and place the files into specific folders in our document tree. Jekyll compiles the site from a set of HTML templates, YAML, and Markdown files. Our content is written into discrete files and compiled on build. Since our content has semantic structure, we are still able to compose pages together with whatever content we want, we just need to write a Ruby plugin to do so. Which leads us to the disadvantages.

Disadvantages of Not Using a CMS

We found ourselves spending lots of time writing Ruby plugins when we wanted Jekyll to act more like a CMS. A few of the disadvantages we faced with our site on Jekyll include:

Editor Experience Sucks

If you have non-technical people on your team that want to contribute, there is a high barrier to entry. We have a few non-developers on our team, and it would be so much nicer if they didn't have to use Markdown to write blog posts for our site. The rich experience you can have with Drupal 8 and CKEditor is top notch, something we're missing using Jekyll. Running Jekyll, our non-technical users needed to learn how to compile the site to preview changes and also had to learn how to use git to submit their blog posts for review before publishing them. Jekyll is great for developers, not for non-developers.

Have to Write Code for Everything

Not that I have anything against writing code, but I've been spoiled by the Drupal community. For the most part, there is likely a module for anything that you want to accomplish. If there isn't a module in the wild, there is a huge community behind Drupal that will hopefully contribute to a new module that you put out, continually improving it (Plus, I would much rather code PHP than Ruby).

No Backend, No Interactions.

I listed no backend as an advantage above under Security, but you really can't do anything interactive without a backend. Our activelamp.com Jekyll site actually has a small backend written with Node.js. We have a small Express app that handles the forms and social streams, and a small Handlebars app that calls out to Google Analytics to create the most-popular-posts lists on blog category pages. Our site isn't 100% static; that just isn't possible unless you truly want a brochure-type site where users only consume content rather than interact with it.

Excited for Drupal 8

We have been building on Drupal 8 since last December. We launched a portion of a site on Drupal 8 a couple months ago, and we're launching a full site in a few weeks on Drupal 8. Drupal development has become exciting again.

Our new website is going to call for more interactivity with our users (premium content, client portal, partner portal, etc...). It's in our best interest to go back to a platform where we don't have to code every feature that we want. Another advantage of going back to Drupal 8 is that we'll get to set up a nice content publishing workflow for ourselves again. Jekyll was fine, but we've built some pretty nice workflows for our clients, and it would be nice to bring an easier workflow into our internal processes too, to relieve the tension for the non-developers on our team.

Most importantly, Drupal 8 is fun to develop on. The OOP approach to writing modules, and leveraging composer packages is amazing. Drupal has definitely taken a step in the right direction. In my opinion, as Drupal 8 gains traction it will become the de facto standard for Enterprise CMS needs.

May 14 2016
May 14

Tom Friedhof

Senior Software Engineer

Tom has been designing and developing for the web since 2002 and got involved with Drupal in 2006. Previously he worked as a systems administrator for a large mortgage bank, managing servers and workstations, which is where he discovered his passion for automation and scripting. On his free time he enjoys camping with his wife and three kids.

May 12 2016
May 12

drupal-nasa-website-monitor

A content management system, or CMS, is a web application designed to make it easy for non-technical users to add, edit and manage a website. We use WordPress and Drupal the most for CMS development, but it is all dependent on our clients’ needs. Not only do content management systems help website users with content editing, they also take care of a lot of behind the scenes work.

When developing a website from scratch for a client who wants to manage the site after launch, it is important as a developer to choose a tool that the client will actually be able to use. A good content management system is better for both the client and the company, because it solves problems up front, from the backend UI to the features wanted on the front end, that you would otherwise have to deal with later. A website never stays in the final version you delivered to your client; as it evolves, we need to develop with the site's future in mind.

WordPress is one of the most popular tools because it is very adaptable. The number of plugins (solutions to your problems) is endless. Not only does it have great features, it has a friendly backend UI. All of these advantages lower development time, which helps the client lower their costs. In short, WordPress saves time and money! The most recent example is our very own website, Mobomo.

Another option for a CMS is Drupal. Drupal may be a little more difficult to develop with, but it can handle bigger sites with much more data and many more users, which makes it better suited for newspapers or government sites such as NASA's.

Each CMS will have their own advantages but our first priority is making it adaptable to the client’s needs.

May 07 2016
May 07

Drupal 8 has greatly improved the editor experience out of the box. It ships with CKEditor for WYSIWYG editing. However, D8 ships with a custom build of CKEditor, and it may not include the plugins that you or your client would like to have. I will show you how to add new plugins to the CKEditor that comes with Drupal 8.

Adding plugins with a button

First, create a bare-bones custom module called editor_experience. Files will be added here that tell Drupal there is a new CKEditor plugin. Find a plugin to actually install... for the first example I will use the Bootstrap Buttons CKEditor plugin. Place the downloaded plugin inside the libraries directory at the root of the Drupal installation, or use a make file to place it there. Also make sure you have the Libraries module installed: drupal module:download libraries.

Create a file inside of the editor_experience module under src/Plugin/CKEditorPlugin called BtButton.php. Add the namespace and the two use statements shown below.





namespace Drupal\editor_experience\Plugin\CKEditorPlugin;

use Drupal\ckeditor\CKEditorPluginBase;
use Drupal\editor\Entity\Editor;


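/**
 * Registers the plugin; the @CKEditorPlugin annotation is explained below.
 *
 * @CKEditorPlugin(
 *   id = "btbutton",
 *   label = @Translation("Bootstrap Buttons")
 * )
 */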
class BtButton extends CKEditorPluginBase {

  ... 

}

The annotation @CKEditorPlugin tells Drupal there is a plugin for CKEditor to load. For the id, use the name of the plugin as defined in the plugin.js file that came with the btbutton download. Now we add several methods to our BtButton class.

The first method returns FALSE since the plugin is not part of the internal CKEditor build.





public function isInternal() {
  return FALSE;
}

The next method returns the path to the plugin's JavaScript file.





public function getFile() {
  return libraries_get_path('btbutton') . '/plugin.js';
}

Let Drupal know where your button is. Be sure that the key is set to the name of the plugin, in this case btbutton.





  public function getButtons() {
    return [
      'btbutton' => [
        'label' => t('Bootstrap Buttons'),
        'image' => libraries_get_path('btbutton') . '/icons/btbutton.png'
      ]
    ];
  }

Also implement getConfig() and return an empty array since this plugin has no configurations.
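That method comes from the CKEditor plugin interface and receives the editor entity; a minimal sketch:

  public function getConfig(Editor $editor) {
    // No extra CKEditor settings are needed for this plugin.
    return [];
  }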

Then go to admin/config/content/formats/manage/basic_html or whatever format you have that uses the CKEditor and pull the Bootstrap button icon down into the toolbar.

Now the button is available for use on the CKEditor!

Adding plugins without a button (CKEditor font)

Some plugins do not come with a button png that allows users to drag the tool into the configuration, so what then?

In order to get a plugin into Drupal that does not have a button, the implementation of getButtons() is a little different. For example, to add the Font/Font Size dropdowns, use image_alternative as below:





 public function getButtons() {
   return [
     'Font' => [
       'label' => t('Font'),
       'image_alternative' => [
         '#type' => 'inline_template',
         '#template' => '{{ font }}',
         '#context' => [
           'font' => t('Font'),
         ],
       ],
     ],
     'FontSize' => [
       'label' => t('Font Size'),
       'image_alternative' => [
         '#type' => 'inline_template',
         '#template' => '{{ font }}',
         '#context' => [
           'font' => t('Font Size'),
         ],
       ],
     ],
   ];
}
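As with the Bootstrap Buttons example, this plugin class still needs its @CKEditorPlugin annotation along with isInternal(), getFile(), and getConfig() implementations; they follow the same pattern shown above, with getFile() pointing at the font plugin's plugin.js.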

Then pull in the dropdown the same way the Bootstrap button plugin was added! Have any questions? Comment below or tweet us @activelamp.

Apr 28 2016
Apr 28
what-is-a-style-guide

One of our talented designers recently spoke at an event hosted by Drupal 4 Gov covering everything you need to know about style guides. Check out his presentation below as well as our Drupal page.  

The post Style Guides: What They Consist Of, The Benefits, And How To Get Started appeared first on .

Apr 22 2016
Apr 22
Evan has a passion for design, is a born problem solver, and loves technology. Evan has a bachelor of science in graphic design from the Art Institute, and has been designing since the turn of the century. Evan leads the creative efforts of ActiveLAMP, and ensures the stuff we build aesthetically looks good.

In his free time, Evan enjoys going to the gym, spending time with his wife and daughter, reading, and long walks on the beach.



Mar 02 2016
Mar 02

Drupal 8

With over 200 new features, Drupal 8 is officially here! Drupal, one of the world's favorite open source content management platforms, just got even better. Here are some of the ways that Drupal 8 will benefit various groups of people.

Developers:

  • Configuration management – In prior versions of Drupal, most of the configuration was stored in the database.  The problem with this is that it is very difficult to keep track of versions of the configuration when it changes.  The only way to get configuration out of the database was to use a combination of modules such as strongarm and features to export things from the database into code.  This was often time-consuming and error prone.  Now with Drupal 8, configuration management is built-in so that carrying over configuration from development to production is a breeze.
  • Web services – Drupal 8 can now be used as a data source to output content as structured data such as XML or JSON.  This means that Drupal 8 can strictly be used as a back-end while the front-end could be developed completely separate with a framework such as AngularJS or Ember.  In other words “Headless Drupal” capabilities are now built-in instead of requiring various addon modules and lots of custom development.  

Content Editors:

  • Bundled WYSIWYG editor – Drupal 8 is the first version of Drupal to come with a bundled WYSIWYG editor.  Previously it was possible to add one of many different editors into Drupal but the setup was often time consuming and confusing.  Additionally there were so many choices that some users felt lost about which one to choose.  Over time CKEditor has become the most popular WYSIWYG editor for Drupal and now it is included out of the box.
  • In place editing – In addition to having CKEditor bundled with Drupal 8, the Spark initiative is taking the WYSIWYG concept a step further with true in-place editing. This gives editors the ability to change content, menus, etc. directly from the front-end view of the site without having to navigate to an admin page on the back end. More info about the Spark initiative can be found here:  http://buytaert.net/spark-update-in-line-editing-in-drupal

End Users:

  • Mobile First – Previous versions of Drupal allowed developers to create responsive themes.  However some modules were not 100% compatible with responsive layouts.  Now with Drupal 8 all themes are mobile first which means that all community modules will be compatible with responsive layouts.  Additionally the default Drupal admin theme will be mobile friendly which should improve the experience for editors who want to author content from mobile devices.

Accessibility and Languages – Drupal 8 now has extensive support for accessibility standards including the adoption of many WAI-ARIA practices.  This will make content structures easier to understand for people with disabilities.  In addition to the accessibility improvements Drupal 8 now has multi-lingual support included.  Drupal 8 has the capability to reach more users than any previous version of Drupal.

Feb 11 2016
Feb 11

amazon-web-services-drupal-architecture

Mobomo believes in partnering. Over the years we have partnered with Amazon, IBM, Tracx, and a number of other companies and organizations. We are pleased to announce our recent partnership with the Drupal Association (https://assoc.drupal.org), which has been a major contributor to the Drupal community for many years.

Drupal is an open-source content management system framework used to make many of the websites and applications that you use every day. Drupal has great standard features like easy content authoring, reliable performance, and excellent security. But what sets Drupal apart from other solutions is its flexibility and extensibility; modularity is one of its core principles. Drupal allows you to build the versatile, structured content that is needed for engaging and dynamic web experiences.

We are very pleased to be a part of the Drupal community, and since we have developed Drupal solutions for major Federal Government websites in the past, this partnership only makes sense. We are excited about our partnerships and look forward to building bigger and better things as a supporting partner of Drupal.org. Be sure to visit our Drupal page.


Jan 24 2016
Jan 24
[embedded content]

Drupal Console is software which allows you to alter your Drupal installation through the command line. According to the official website, “The Drupal Console is a CLI tool to generate boilerplate code, interact and debug Drupal 8.” Unlike Drush, Drupal Console is specifically for Drupal 8, the latest major release.

Although Drupal Console and Drush share many capabilities such as clearing the cache, generating one-time login links, or un/installing modules/themes, one distinct functionality that comes out of the box with Drupal Console is that it can generate boilerplate code for modules, themes, controllers, forms, blocks, and much more.

Installing Drupal Console

Setting up Drupal Console is just as easy as executing the following commands found on the homepage of the website:

# Run this in your terminal to get the latest project version:
curl https://drupalconsole.com/installer -L -o drupal.phar

# Or if you don't have curl:
php -r "readfile('https://drupalconsole.com/installer');" > drupal.phar

# Accessing from anywhere on your system:
mv drupal.phar /usr/local/bin/drupal

# Apply executable permissions on the downloaded file:
chmod +x /usr/local/bin/drupal

To check if Drupal Console is working, execute drupal list. You should see a list of commands:

$ drupal list
Drupal Console version 0.10.5

Usage:
  command [options] [arguments]

Options:
  ...

Available commands:
  ...
  module
  ...
  multisite
  ...
  site
  site:debug      List all known local and remote sites.
  site:install    Install a Drupal project
  site:new        Create a new Drupal project
  theme
  ...
  yaml
  ...

Downloading Drupal

Let’s start from the beginning. We can actually get any version of Drupal by running drupal site:new:

$ drupal site:new

Enter the directory name when downloading Drupal:
> drupal

Getting releases for Drupal
Select a core release:
  [0 ] 8.0.2
  [1 ] 8.0.1
  [2 ] 8.0.0
  [3 ] 8.0.0-rc4
  [4 ] 8.0.0-rc3
  [5 ] 8.0.0-rc2
  [6 ] 8.0.0-rc1
  [7 ] 8.0.0-beta16
  [8 ] 8.0.0-beta15
  [9 ] 8.0.0-beta14
  [10] 8.0.0-beta13
  [11] 8.0.0-beta12
  [12] 8.0.0-beta11
  [13] 8.0.0-beta10
  [14] 8.0.0-beta9
> 0

Downloading drupal 8.0.2
[OK] Drupal 8.0.2 was downloaded in directory /home/ubuntu/workspace/drupal

Installing Drupal

Now, you might be thinking you should go visit the running website to install Drupal. You can do that, but it is also possible via Drupal Console by simply running drupal site:install inside the drupal directory:

$ drupal site:install

Select Drupal profile to be installed:
  [0] Minimal
  [1] Standard
> 1

Select language for your Drupal installation [English]:
>

Drupal Database type:
  [0] MySQL, MariaDB, Percona Server, or equivalent
  [1] SQLite
  [2] PostgreSQL
> 0

Database Host [127.0.0.1]:
>
Database Name:
> drupal
Database User:
> {YOUR_DATABASE_USERNAME}
Database Pass [ ]:
> {YOUR_DATABASE_PASSWORD}
Database Port [3306]:
>
Database Prefix [ ]:
>
Provide your Drupal site name [Drupal 8 Site Install]:
> Drupal Console is Awesome!
Provide your Drupal site mail [admin@example.com]:
> {YOUR_SITE_MAIL}
Provide your Drupal administrator account name [admin]:
> {YOUR_USER_NAME}
Provide your Drupal administrator account mail [admin@example.com]:
> {YOUR_USER_MAIL}
Provide your Drupal administrator account password:
> {YOUR_USER_PASSWORD}

Starting Drupal 8 install process
[OK] Your Drupal 8 installation was completed successfully

Our new Drupal site should now be ready to be worked on:
Drupal Home

Activating Maintenance Mode

It is best to set a website in production to maintenance mode when being worked on. Be sure to log in first, so you can view the development of the site. Then you can turn on maintenance mode with drupal site:maintenance on:

$ drupal site:maintenance on

Operating in maintenance mode on
Rebuilding cache(s), wait a moment please.
[OK] Done clearing cache(s).

Now, this is what regular users will see when visiting the website:

Drupal Maintenance

Creating a Hello World Module

Defining Module Parameters

Drupal Generate Module Directory Structure
First let’s generate code to define the module such as the .info.yml and the composer.json files by running drupal generate:module:

$ drupal generate:module

// Welcome to the Drupal module generator

Enter the new module name:
> Hello World

Enter the module machine name [hello_world]:
>

Enter the module Path [/modules/custom]:
>

Enter module description [My Awesome Module]:
> Say Hello World

Enter package name [Custom]:
>

Enter Drupal Core version [8.x]:
>

Do you want to generate a .module file (yes/no) [no]:
>

Define module as feature (yes/no) [no]:
>

Do you want to add a composer.json file to your module (yes/no) [yes]:
>

Would you like to add module dependencies (yes/no) [no]:
>

Do you confirm generation? (yes/no) [yes]:
>

Generated or updated files
Site path: /home/ubuntu/workspace/drupal
1 - modules/custom/hello_world/hello_world.info.yml
2 - modules/custom/hello_world/composer.json

Installing the Module

Again, we can enable the module from the command line using drupal module:install {MODULE_MACHINE_NAME}:

$ drupal module:install hello_world

[OK] The following module(s) were installed successfully: hello_world

Rebuilding cache(s), wait a moment please.
[OK] Done clearing cache(s).

Generating the Controller

Now, we need to generate a controller that will show the “Hello World” page. Do this using drupal generate:controller:

$ drupal generate:controller

// Welcome to the Drupal Controller generator

Enter the module name [hello_world]:
>

Enter the Controller class name [DefaultController]:
> HelloWorldController

Enter the Controller method title (leave empty and press enter when done) [ ]:
> Hello World

Enter the action method name [hello]:
> helloWorld

Enter the route path [hello_world/hello/{name}]:
> /hello/world

Enter the Controller method title (leave empty and press enter when done) [ ]:
>

Do you want to generate a unit test class (yes/no) [yes]:
>

Do you want to load services from the container (yes/no) [no]:
>

Do you confirm generation? (yes/no) [yes]:
>

Generated or updated files
Site path: /home/ubuntu/workspace/drupal
1 - modules/custom/hello_world/src/Controller/HelloWorldController.php
2 - modules/custom/hello_world/hello_world.routing.yml
3 - modules/custom/hello_world/Tests/Controller/HelloWorldControllerTest.php

Rebuilding routes, wait a moment please
[OK] Done rebuilding route(s).

Drupal Hello World Directory Structure

This will automatically generate our controller file, routing file, and even our Test file.

Finished Module

Now, if we visit http://www.mydrupalwebsite.com/hello/world, we should see:

Drupal Hello World 1

This way of generating modules is much quicker than doing it by hand. You can compare it to creating a Hello World module manually.

Creating a Block

We can also make a block to go along with this module and have it hold a configuration value. This can be done by running drupal generate:plugin:block:

$ drupal generate:plugin:block

// Welcome to the Drupal Plugin Block generator

Enter the module name [hello_world]:
>

Plugin class name [DefaultBlock]:
>

Plugin label [Default block]:
>

Plugin id [default_block]:
>

Theme region to render Plugin Block [ ]:
>

Do you want to load services from the container (yes/no) [no]:
>

You can add input fields to create special configurations in the block.
This is optional, press enter to continue

Do you want to generate a form structure? (yes/no) [yes]:
>

Type [ ]:
> textfield

Input label:
> Content

Input machine name [content]:
>

Maximum amount of characters [64]:
>

Width of the textfield (in characters) [64]:
>

Description [ ]:
> Content to Display

Default value [ ]:
>

Weight for input item [0]:
>

Type [ ]:
>

Do you confirm generation? (yes/no) [yes]:
>

Generated or updated files
Site path: /home/ubuntu/workspace/drupal
1 - modules/custom/hello_world/src/Plugin/Block/DefaultBlock.php

Rebuilding cache(s), wait a moment please.
[OK] Done clearing cache(s).

Drupal Plugin Block Directory Structure

If we now go to Admin > Structure > Block layout > Sidebar second > Place Block, we can see our newly generated block there and add it to the sidebar:

Drupal Place Block
Drupal Configure Block
Drupal Default Block

Generating Random Nodes

If we are just testing our website for development and need some content, nodes can be easily generated using drupal create:nodes:

$ drupal create:nodes

// Welcome to the Drupal nodes generator

Select content type(s) to be used on node creation:
  [0] Article
  [1] Basic page
> 0

Enter how many nodes would you like to generate [10]:
>

Enter the maximum number of words in titles [5]:
>

How far back in time should the nodes be dated?:
  [0] N | Now
  [1] H | 1 hour ago
  [2] D | 1 day ago
  [3] W | 1 week ago
  [4] M | 1 month ago
  [5] Y | 1 year ago
> 5

--------- -------------- ---------------------------- ---------------------
Node Id   Content type   Title                        Created Time
--------- -------------- ---------------------------- ---------------------
1         Article        Capto Eros Haero Uxor        2016-01-18 03:23:09
2         Article        Nunc Voco                    2015-02-08 11:23:37
3         Article        Abdo Eu Pala Plaga           2015-10-16 02:26:27
4         Article        Commoveo Erat Iustum         2015-09-15 05:52:22
5         Article        Olim                         2015-08-13 10:50:27
6         Article        Comis Damnum Ille Vulpes     2015-12-11 03:19:55
7         Article        Gravis Probo                 2015-04-16 07:38:44
8         Article        Nobis Quidem Torqueo Zelus   2015-10-25 06:47:05
9         Article        Commoveo Duis Metuo Quidne   2015-05-30 10:21:16
10        Article        Illum Mos Obruo Tamen        2015-04-22 01:47:57
--------- -------------- ---------------------------- ---------------------
[OK] Created 10 nodes successfully

Now, you can see some of these articles posted to the front page with images:

Drupal Front Page

Generating a Theme

Apart from generating modules and content, we can also generate themes. Let’s generate a basic theme that extends the classy base theme by running drupal generate:theme:

$ drupal generate:theme

// Welcome to the Drupal theme generator

Enter the new theme name []:
> New Theme

Enter the module machine name [new_theme]:
>

Enter the theme Path [/themes/custom]:
>

Enter theme description [My Awesome theme]:
> Plain Theme

Enter package name [Other]:
>

Enter Drupal Core version [8.x]:
>

Base theme (i.e. classy, stable) [bartik]:
> classy

Enter the global styling library name [global-styling]:
>

Do you want to generate the theme regions (yes/no) [yes]:
>

Enter region name [Content]:
>

Enter region machine name [content]:
>

Do you want add another region (yes/no) [yes]:
> no

Do you want to generate the theme breakpoints (yes/no) [yes]:
>

Enter breakpoint name [narrow]:
>

Enter breakpoint label [narrow]:
>

Enter breakpoint media query [all and (min-width: 560px) and (max-width: 850px)]:
>

Enter breakpoint weight [1]:
>

Enter breakpoint multipliers [1x]:
>

Do you want to add another breakpoint (yes/no) [yes]:
> no

Do you confirm generation? (yes/no) [yes]:
>

Generated or updated files
Site path: /home/ubuntu/workspace/drupal
1 - themes/custom/new_theme/new_theme.info.yml
2 - themes/custom/new_theme/new_theme.theme
3 - themes/custom/new_theme/new_theme.breakpoints.yml

Drupal Theme Directory Structure

Three new files should have been generated in the themes folder. You should now be able to install the theme by running drupal theme:install {THEME_MACHINE_NAME}:

$ drupal theme:install new_theme

The New Theme theme has been installed successfully

Rebuilding cache(s), wait a moment please.
[OK] Done clearing cache(s).

The theme still needs to be set as default before we can see it in action. However, it will just be a plain style-less theme because it just extends classy:

Drupal New Theme

Deactivating Maintenance Mode

As we are done with our development, stop maintenance mode by running drupal site:maintenance off:

$ drupal site:maintenance off

Operating in maintenance mode off
Rebuilding cache(s), wait a moment please.
[OK] Done clearing cache(s).

This post just scratched the surface of what Drupal Console can do. It can do so much more, such as generating forms and permissions, and it even covers most of the features of Drush. Remember to explore all the commands by running drupal list. It is a tool every Drupal developer should be taking advantage of. You can learn more of the commands and shortcuts in the Drupal Console documentation.


Author: Akshay Kalose

A teenager who is interested in Computer Science, Information Technology, Programming, Web Designing, Engineering and Physical Sciences.

Jan 20 2016
Jan 20

This post is part 4 in the series "Hashing out a docker workflow". For background, checkout my previous posts.

My previous posts talked about getting your local environment set up using the Drupal Docker image with Vagrant. It's now time to bake a Docker image with our custom application code within the container, so that we can deploy containers implementing the immutable server pattern. One of the main reasons we started venturing down the Docker path was to achieve deployable, fully baked containers that are ready to run in whatever environment you put them in, similar to what we've done in the past with Packer, as I've mentioned in a previous post.

Review

The instructions in this post assume you followed my previous post to get a Drupal environment set up with the custom "myprofile" profile. In that post we brought up a Drupal environment just by referencing the already-built Drupal image on DockerHub. We are going to use that same Docker image and add our custom application to it.

All the code that I'm going to show below can be found in this repo on Github.

Putting the custom code into the container

We need to create our own image: create a Dockerfile in our project that extends the Drupal image that we are pulling down.

Create a file called Dockerfile in the root of your project that looks like the following:

FROM drupal:7.41

ADD drupal/profiles/myprofile /var/www/html/profiles/myprofile

We are basically using everything from the Drupal image, and adding our installation profile to the profiles directory of the document root.

This is a very simplistic approach; typically there are more steps than just copying files over. In more complex scenarios, you will likely run some sort of build within the Dockerfile as well, such as Gulp, Composer, or Drush Make.

Setting up Jenkins

We now need to set up a Jenkins server that will check out our code and run docker build and docker push. Let's set up a local Jenkins container on our Docker host to do this.

Open up the main Vagrantfile in the project root and add another container to the file like the following:





Vagrant.configure(2) do |config|

config.vm.define "jenkins" do |v|
v.vm.provider "docker" do |d|
d.vagrant_vagrantfile = "./host/Vagrantfile"
d.build_dir = "./Dockerfiles/jenkins"
d.create_args = ['--privileged']
d.remains_running = true
d.ports = ["8080:8080"]
d.name = "jenkins-container"
end
end

config.vm.define "drupal" do |v|
config.vm.provider "docker" do |docker|
docker.vagrant_vagrantfile = "host/Vagrantfile"
docker.image = "drupal"
docker.create_args = ['--volume="/srv/myprofile:/var/www/html/profiles/myprofile"']
docker.ports = ['80:80']
docker.name = 'drupal-container'
end
end
end

Two things to notice from the jenkins container definition: 1) the Dockerfile for this container is in the Dockerfiles/jenkins directory, and 2) we are passing the --privileged argument when the container is run so that our container has all the capabilities of the Docker host. We need special access to be able to run Docker within Docker.

Let's create the Dockerfile:

$ mkdir -p Dockerfiles/jenkins
$ cd !$
$ touch Dockerfile

Now open up that Dockerfile and install Docker onto this Jenkins container:

FROM jenkins:1.625.2

USER root


RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D


RUN echo "deb http://apt.dockerproject.org/repo debian-jessie main" > /etc/apt/sources.list.d/docker.list

VOLUME /var/lib/docker

RUN apt-get update && \
  apt-get -y install \
    docker-engine

ADD ./dockerjenkins.sh /usr/local/bin/dockerjenkins.sh
RUN chmod +x /usr/local/bin/dockerjenkins.sh

ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/dockerjenkins.sh" ]

We are using a little script found in The Docker Book as our entry point to start the Docker daemon as well as Jenkins. It also does some work on the filesystem to ensure cgroups are mounted correctly. If you want to read more about running Docker in Docker, go check out this article.

Boot up the new container

Before we boot this container up, edit your host Vagrantfile and set up the port forward so that host port 8080 points to guest port 8080:





Vagrant.configure(2) do |config|
config.vm.box = "ubuntu/trusty64"
config.vm.hostname = "docker-host"
config.vm.provision "docker"
config.vm.network :forwarded_port, guest: 80, host: 4567
config.vm.network :forwarded_port, guest: 8080, host: 8080
config.vm.synced_folder '../drupal/profiles/myprofile', '/srv/myprofile', type: 'rsync'
end

Now bring up the new container:

$ vagrant up jenkins

or if you've already brought it up once before, you may just need to run reload:

$ vagrant reload jenkins

You should now be able to hit Jenkins at the URL http://localhost:8080

Jenkins Dashboard

Install the git plugins for Jenkins

Now that you have Jenkins up and running, we need to install the git plugins. Click on the "Manage Jenkins" link in the left navigation, then click "Manage Plugins" in the list given to you, and then click on the "Available" Tab. Filter the list with the phrase "git client" in the filter box. Check the two boxes to install plugins, then hit "Download now and install after restart".

Jenkins Plugin Install

On the following screen, check the box to Restart Jenkins when installation is complete.

Setup the Jenkins job

It's time to set up the Jenkins job. If you've never set up a Jenkins job, here is a quick crash course.

  1. Click the New Item link in the left navigation. Name your build job, and choose Freestyle project. Click Ok. New Build Job
  2. Configure the git repo. We are going to configure Jenkins to pull code directly from your repository and build the Docker image from that. Add the git Repo
  3. Add the build steps. Scroll down toward the bottom of the screen, click the arrow next to Add build step, and choose Execute Shell. We are going to add three build steps as shown below. First we build the Docker image with docker build -t="tomfriedhof/docker_blog_post" . (notice the trailing dot), giving it a name with the -t parameter; then we log in to DockerHub; and finally we push the newly created image to DockerHub. Jenkins Build Steps
  4. Hit Save, then on the next screen hit the button that says Build Now

If everything went as planned, you should have a new Docker image posted on DockerHub: https://hub.docker.com/r/tomfriedhof/dockerblogpost/

Wrapping it up

There you have it: an automated build that will automatically create and push Docker images to DockerHub. You can add on to this Jenkins job so that it polls your GitHub repository and automatically runs the build anytime something changes in the tracking repo.

As another option, if you don't want to go through the trouble of setting up your own Jenkins server just to do what I showed you, DockerHub can do this for you. Go check out their article on how to set up automated builds with Docker.

Now that we have a baked container with our application code within it, the next step is to deploy the container. That is the next post in this series. Stay tuned!

Jan 09 2016
Jan 09
[embedded content]

Drupal 8 came out with many new features and updates at the end of 2015. As Drupal 8 is object oriented and enforces PSR-4 standards, the way you make modules has significantly changed. However, this change makes modules much more organized to fit today’s coding practices. I will be demonstrating how to create a simple “Hello World!” module in Drupal 8.

Getting Started

I will be assuming you already have a working Drupal 8 website.

Creating The Module Directory

Drupal Module Directory Structure 1

Drupal 8 has a much cleaner folder structure than before. All modules go in the modules folder. Create a hello_world folder here. This will serve as the machine name for the module and will contain all the necessary files for our module to function.

Defining Parameters

Module Information

First, parameters such as title and description need to be set. These keys will be defined in the file hello_world.info.yml. Unlike Drupal 7, Drupal 8 uses YAML (YAML Ain’t Markup Language) for these types of files. Create this file and enter these definitions:

name: Hello World
type: module
description: Say Hello World
package: Custom
core: 8.x
  • name: Hello World
    • This is the title of our module shown on the extend page.
  • type: module
    • This is to tell Drupal what we are making is a module.
  • description: Say Hello World
    • This is a description shown alongside the title on the extend page.
  • package: Custom
    • This is what category our module will be listed under on the extend page.
  • core: 8.x
    • This tells Drupal our module is compatible with Drupal 8.x core.

You will now be able to see the module listed under the Custom package category:

Drupal Module List

Let’s enable this module:

Drupal Module Enabled

Module Routes

Next, we will need to tell Drupal where the module can be accessed. This is done by creating a route for our module. Create the file hello_world.routing.yml and enter the following parameters:

hello_world:
  path: /hello/world
  defaults:
    _controller: Drupal\hello_world\Controller\HelloWorldController::hello
  requirements:
    _permission: 'access content'
  • path: /hello/world
    • This tells Drupal what path will be used to access our module.
  • _controller: Drupal\hello_world\Controller\HelloWorldController::hello
    • This is the method Drupal will call to process a request to our path.
  • _permission: 'access content'
    • This is to ensure only users who can access content will be able to see our Hello World page.

Quotes are not required around the values; however, you can include them to be safe.

Your current directory structure should now look like this:

Drupal Module Directory Structure 2

Coding the Module

After defining all the parameters for the module, we still need to add functionality to it. The code will live in a "Controller". Since Drupal 8 follows PSR-4 standards, there is a specific directory structure to follow. All code exists inside the src folder. After creating it, make a Controller folder. This is where the PHP class will go. Here, create HelloWorldController.php and define the class like so:

<?php

class HelloWorldController {

}

Any Drupal 8 class will also need to be namespaced. This makes sure the right class is referenced when multiple classes share the same name, and it keeps things organized. Add this above the class definition:

<?php

namespace Drupal\hello_world\Controller;

class HelloWorldController {

}

Remember in the routing file we told Drupal to call the hello method in the class Drupal\hello_world\Controller\HelloWorldController. Let’s define this in our class:

<?php

namespace Drupal\hello_world\Controller;

class HelloWorldController {

  public function hello() {

  }

}

Since Drupal 8 is based on the Symfony framework, a Symfony\Component\HttpFoundation\Response object can be returned. However, since this is Drupal, you can also return a render array, which Drupal will process and theme automatically. For our simple page, we can return an array with the keys #title and #markup:

<?php

namespace Drupal\hello_world\Controller;

class HelloWorldController {

  public function hello() {
    return array(
      '#title' => 'Hello World!',
      '#markup' => 'Here is some content.',
    );
  }

}

Finishing Up

Before we visit our custom page, we need to clear the cache, specifically the router cache. Otherwise, you will see a 404 Not Found page. You can clear the cache via Drupal Console:

Drupal Router Rebuild

Or through the Admin Control Panel:

Drupal Clear All Caches

Now, if we visit http://www.mydrupalwebsite.com/hello/world, we can see our custom page:

Drupal Hello World

The Full Module

Here is the final directory structure and code of our Hello World module:

Drupal Module Directory Structure Final


That’s it! Creating Drupal modules is as easy as that! Now that you
know how to define module parameters and set up your own Controller to handle specific route requests, you can move on to create more complex Drupal modules using other components such as Forms and implementing Services.


Author: Akshay Kalose


Dec 26 2015
Dec 26

It has been a long time coming. With over 200 new features developed over the last 5 years by more than 2,000 contributors, the most anticipated version of Drupal was finally announced on November 19th, 2015. This release contains more features worth upgrading for than any previous release; the improvements evident in Drupal 8 make it one of the best and largest updates in Drupal history.

User Interface and Experience


Posts and Pages

Drupal 8 makes use of a library called CKEditor in the admin panel. This library allows you to edit posts and pages in a rich WYSIWYG editor, which was not possible out of the box before. Also, thanks to "Quick Edit", content can now be changed right on the front end while you are viewing the post.

Mobile-First

As more and more websites are being accessed from mobile phones, it is necessary that websites are first made to support mobile and then desktop. This is exactly what Drupal 8 is doing. Drupal 8 is responsive both in the front-end and admin panel. Images and the site design will rearrange depending on device width. All themes including the defaults, Bartik and Seven, work seamlessly on mobile devices, tablets, and desktops.

Multilingual

Internationalization is now built into Drupal. This means that no contributed modules are needed for translation. All the modules are included in a fresh install of Drupal 8. Everything is translatable such as Content, Blocks, Menus, Views, Comments, Feeds and more. You can choose which language to use on your Drupal website on the first page of the installation process and the translations will be automatically downloaded.

Developers


Front-end

Drupal 8 has switched its templating engine to Twig from PHPTemplate which has a more friendly syntax for designers. HTML5 Forms are also included which means mobile users can be displayed a variety of keyboards for different types of input. Some of the new fields include Date, Email, Link, Reference, and Telephone.

Object Oriented PHP

Drupal 8 has adopted the latest best practices for PHP. It uses PHP 5.4+, Classes and Interfaces, Namespaces, Traits, Dependency Injection, and conforms to the PSR-4 standards for file paths to autoload classes. Drupal 8 is built on top of several proven Symfony 2 modules and utilizes several other libraries such as PHPUnit, Composer, and jQuery. To create modules, themes, blocks and more you will now need to use classes and new info files in YAML.

Configuration Management

There is now a central location to store and retrieve saved information and configurations. This will improve organization and simplify creating modules for Drupal 8. These configurations can be easily exported and imported into another Drupal 8 website.

Anyone running Drupal 6 or 7 is encouraged to update to Drupal 8, especially if you are running Drupal 6, because it will no longer be supported. Since no more updates will be provided, your website will be vulnerable to security risks in the future. Regardless of security risks, these spectacular features should be enough to convince you to upgrade to Drupal 8!

For more information about Drupal 8, Click Here. This post was created during Google Code-in 2015, an open source competition for high school students.


Author: Akshay Kalose



Dec 02 2015
Dec 02

Now that the release of Drupal 8 is finally here, it is time to adapt our Drupal 7 build process to Drupal 8, while utilizing Docker. This post will take you through how we construct sites on Drupal 8 using dependency managers on top of Docker with Vagrant.

Keep a clean upstream repo

Over the past 3 or 4 years, developing websites has changed dramatically with the increasing popularity of dependency managers such as Composer, Bundler, npm, Bower, etc., amongst other tools. Drupal even has its own system that can handle dependencies, called Drush, albeit it is more than just a dependency manager for Drupal.

With all of these tools at our disposal, it is very easy to include code from other projects in our application while not storing any of that code in the application repository. This concept dramatically changes how you would typically maintain a Drupal site, since the typical way to manage a Drupal codebase is to keep the entire docroot, including all dependencies, in the application repository. Having everything in the docroot is fine, but you gain so much more power using dependency managers. You also lighten the actual application codebase, because your repo only contains code that you wrote. There are tons of advantages to building applications this way, but I digress; this post is about how we utilize these tools to build Drupal sites, not an exhaustive list of why this is a good idea. Leave a comment if you want to discuss the advantages / disadvantages of this approach.

Application Code Repository

We've got a lot going on in this repository. We won't dive too deep into the weeds looking at every single file, but I will give a high level overview of how things are put together.

Installation Automation (begin with the end in mind)

The simplicity in this process is that when a new developer needs to get a local development environment for this project, they only have to execute two commands:

$ vagrant up --no-parallel
$ make install

Within minutes a new development environment is constructed with Virtualbox and Docker on the developer's machine, so that they can immediately start contributing to the project. The first command boots up 3 Docker containers -- a webserver, a mysql server, and a jenkins server. The second command invokes Drush to build the document root within the webserver container and then installs Drupal.

We also keep one more command running in a separate terminal window, to keep files synced from our host machine to the Drupal 8 container.

$ vagrant rsync-auto drupal8

Breaking down the two installation commands

vagrant up --no-parallel

If you've read any of my previous posts, you know I'm a fan of using Vagrant with Docker. I won't go into detail about how the environment is set up; you can read my previous posts on how we use Docker with Vagrant. For completeness, here are the Vagrantfile and Dockerfile that vagrant up reads to set up the environment.

Vagrantfile





require 'fileutils'

MYSQL_ROOT_PASSWORD="root"

unless File.exists?("keys")
  Dir.mkdir("keys")
  ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
  File.open("keys/id_rsa.pub", 'w') { |file| file.write(ssh_pub_key) }
end

unless File.exists?("Dockerfiles/jenkins/keys")
  Dir.mkdir("Dockerfiles/jenkins/keys")
  FileUtils.copy("#{Dir.home}/.ssh/id_rsa", "Dockerfiles/jenkins/keys/id_rsa")
end

Vagrant.configure("2") do |config|

  config.vm.define "mysql" do |v|
    v.vm.provider "docker" do |d|
      d.vagrant_machine = "apa-dockerhost"
      d.vagrant_vagrantfile = "./host/Vagrantfile"
      d.image = "mysql:5.7.9"
      d.env = { :MYSQL_ROOT_PASSWORD => MYSQL_ROOT_PASSWORD }
      d.name = "mysql-container"
      d.remains_running = true
      d.ports = [
        "3306:3306"
      ]
    end
  end

  config.vm.define "jenkins" do |v|
    v.vm.synced_folder ".", "/srv", type: "rsync",
      rsync__exclude: get_ignored_files(),
      rsync__args: ["--verbose", "--archive", "--delete", "--copy-links"]

    v.vm.provider "docker" do |d|
      d.vagrant_machine = "apa-dockerhost"
      d.vagrant_vagrantfile = "./host/Vagrantfile"
      d.build_dir = "./Dockerfiles/jenkins"
      d.name = "jenkins-container"
      d.volumes = [
        "/home/rancher/.composer:/root/.composer",
        "/home/rancher/.drush:/root/.drush"
      ]
      d.remains_running = true
      d.ports = [
        "8080:8080"
      ]
    end
  end

  config.vm.define "drupal8" do |v|
    v.vm.synced_folder ".", "/srv/app", type: "rsync",
      rsync__exclude: get_ignored_files(),
      rsync__args: ["--verbose", "--archive", "--delete", "--copy-links"],
      rsync__chown: false

    v.vm.provider "docker" do |d|
      d.vagrant_machine = "apa-dockerhost"
      d.vagrant_vagrantfile = "./host/Vagrantfile"
      d.build_dir = "."
      d.name = "drupal8-container"
      d.remains_running = true
      d.volumes = [
        "/home/rancher/.composer:/root/.composer",
        "/home/rancher/.drush:/root/.drush"
      ]
      d.ports = [
        "80:80",
        "2222:22"
      ]
      d.link("mysql-container:mysql")
    end
  end

end

def get_ignored_files()
  ignore_file = ".rsyncignore"
  ignore_array = []

  if File.exists? ignore_file and File.readable? ignore_file
    File.read(ignore_file).each_line do |line|
      ignore_array << line.chomp
    end
  end

  ignore_array
end

One cool thing to point out in this Vagrantfile is that we set up a VOLUME for the composer and Drush caches that persists beyond the life of the container. When our application container is rebuilt, we don't want to download 100MB of composer dependencies every time. By utilizing a Docker VOLUME, that folder is mounted from the actual Docker host.
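
For reference, the plain docker run equivalent of the d.volumes setting above would look roughly like this (the image name is hypothetical, since Vagrant builds the image from our Dockerfile):

docker run -d \
  -v /home/rancher/.composer:/root/.composer \
  -v /home/rancher/.drush:/root/.drush \
  --name drupal8-container \
  drupal8-image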

Dockerfile (drupal8-container)

FROM ubuntu:trusty


ENV PROJECT_ROOT /srv/app
ENV DOCUMENT_ROOT /var/www/html
ENV DRUPAL_PROFILE=apa_profile


RUN apt-get update
RUN apt-get install -y \
	vim \
	git \
	apache2 \
	php-apc \
	php5-fpm \
	php5-cli \
	php5-mysql \
	php5-gd \
	php5-curl \
	libapache2-mod-php5 \
	curl \
	mysql-client \
	openssh-server \
	phpmyadmin \
	wget \
	unzip \
	supervisor
RUN apt-get clean


RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/local/bin/composer


RUN mkdir /root/.ssh && chmod 700 /root/.ssh && touch /root/.ssh/authorized_keys && chmod 600 /root/.ssh/authorized_keys
RUN echo 'root:root' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN mkdir /var/run/sshd && chmod 0755 /var/run/sshd
RUN mkdir -p /root/.ssh
COPY keys/id_rsa.pub /root/.ssh/authorized_keys
RUN chmod 600 /root/.ssh/authorized_keys
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd


RUN composer global require drush/drush:8.0.0-rc3
RUN ln -nsf /root/.composer/vendor/bin/drush /usr/local/bin/drush


RUN mv /root/.composer /tmp/


RUN sed -i 's/display_errors = Off/display_errors = On/' /etc/php5/apache2/php.ini
RUN sed -i 's/display_errors = Off/display_errors = On/' /etc/php5/cli/php.ini


RUN sed -i 's/AllowOverride None/AllowOverride All/' /etc/apache2/apache2.conf
RUN a2enmod rewrite


RUN echo '[program:apache2]\ncommand=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"\nautorestart=true\n\n' >> /etc/supervisor/supervisord.conf
RUN echo '[program:sshd]\ncommand=/usr/sbin/sshd -D\n\n' >> /etc/supervisor/supervisord.conf





WORKDIR $PROJECT_ROOT
EXPOSE 80 22
CMD exec supervisord -n

We have xdebug commented out in the Dockerfile, but it can easily be uncommented if you need to step through code. Simply uncomment the two RUN commands and run vagrant reload drupal8.

make install

We utilize a Makefile in all of our projects, whether it be Drupal, Node.js, or Laravel. This is so that we have a similar way to install applications, regardless of the underlying technology. In this case make install executes a drush command. Below is the contents of our Makefile for this project:

all: init install

init:
	vagrant up --no-parallel

install:
	bin/drush @dev install.sh

rebuild:
	bin/drush @dev rebuild.sh

clean:
	vagrant destroy drupal8
	vagrant destroy mysql

mnt:
	sshfs -C -p 2222 root@localhost:/var/www/html docroot

What the install and rebuild targets do is ssh into the drupal8-container, utilizing Drush site aliases and Drush shell aliases.
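
The alias file itself isn't shown in this post, but a sketch of what the @dev record might contain is below. The host, SSH port, and script paths are assumptions based on the port mappings above; the shell-aliases entries are what let drush @dev install.sh run a script on the remote side (the leading "!" tells Drush to execute a shell command):

cat > ~/.drush/dev.aliases.drushrc.php <<'EOF'
<?php
// Hypothetical @dev alias; host, root, and script paths are assumptions.
$aliases['dev'] = array(
  'remote-host' => 'localhost',
  'remote-user' => 'root',
  'ssh-options' => '-p 2222',
  'root' => '/var/www/html',
  'shell-aliases' => array(
    'install.sh' => '!bash /srv/app/bin/install.sh',
    'rebuild.sh' => '!bash /srv/app/bin/rebuild.sh',
  ),
);
EOF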

install.sh

The make install command executes a file, within the drupal8-container, that looks like this:

#!/usr/bin/env bash

echo "Moving the contents of composer cache into place..."
mv /tmp/.composer/* /root/.composer/

PROJECT_ROOT=$PROJECT_ROOT DOCUMENT_ROOT=$DOCUMENT_ROOT $PROJECT_ROOT/bin/rebuild.sh

echo "Installing Drupal..."
cd $DOCUMENT_ROOT && drush si $DRUPAL_PROFILE --account-pass=admin -y
chgrp -R www-data sites/default/files
rm -rf ~/.drush/files && cp -R sites/default/files ~/.drush/

echo "Importing config from sync directory"
drush cim -y

You can see on line 6 of the install.sh file that it executes rebuild.sh to actually build the Drupal document root utilizing Drush Make. The reason for separating the build from the install is so that you can run make rebuild without completely reinstalling the Drupal database. After the document root is built, the drush site-install apa_profile command is run to actually install the site. Notice that we are utilizing installation profiles for Drupal.

We utilize installation profiles so that we can define modules available for the site, as well as specify default configuration to be installed with the site.
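
In Drupal 8 an install profile is declared with a YAML info file, just like a module. A minimal sketch for the profile above might look like this (the description and dependencies are illustrative, not the project's actual values):

cat > drupal/profiles/apa_profile/apa_profile.info.yml <<'EOF'
name: APA Profile
type: profile
description: 'Site-specific install profile.'
core: 8.x
dependencies:
  - node
  - views
EOF

Default configuration shipped with the profile lives under its config/install directory, which is what gets installed along with the site.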

We work hard to achieve the ability to have Drupal install with all the necessary configuration in place out of the gate. We don't want to be passing around a database to get up and running with a new site.

We utilize the Devel Generate module to create the initial content for sites while developing.

rebuild.sh

The rebuild.sh file is responsible for building the Drupal docroot:

#!/usr/bin/env bash

if [ -d "$DOCUMENT_ROOT/sites/default/files" ]
then
    echo "Moving files to ~/.drush/..."
    mv $DOCUMENT_ROOT/sites/default/files /root/.drush/
fi

echo "Deleting Drupal and rebuilding..."
rm -rf $DOCUMENT_ROOT

echo "Downloading contributed modules..."
drush make -y $PROJECT_ROOT/drupal/make/dev.make $DOCUMENT_ROOT

echo "Symlink profile..."
ln -nsf $PROJECT_ROOT/drupal/profiles/apa_profile $DOCUMENT_ROOT/profiles/apa_profile

echo "Downloading Composer Dependencies..."
cd $DOCUMENT_ROOT && php $DOCUMENT_ROOT/modules/contrib/composer_manager/scripts/init.php && composer drupal-update

echo "Moving settings.php file to $DOCUMENT_ROOT/sites/default/..."
rm -f $DOCUMENT_ROOT/sites/default/settings*
cp $PROJECT_ROOT/drupal/config/settings.php $DOCUMENT_ROOT/sites/default/
cp $PROJECT_ROOT/drupal/config/settings.local.php $DOCUMENT_ROOT/sites/default/
ln -nsf $PROJECT_ROOT/drupal/config/sync $DOCUMENT_ROOT/sites/default/config
chown -R www-data $PROJECT_ROOT/drupal/config/sync

if [ -d "/root/.drush/files" ]
then
    cp -Rf /root/.drush/files $DOCUMENT_ROOT/sites/default/
    chmod -R g+w $DOCUMENT_ROOT/sites/default/files
    chgrp -R www-data sites/default/files
fi

This file essentially downloads Drupal using the dev.make drush make file. It then runs composer drupal-update to download any composer dependencies in any of the modules. We use the composer manager module to help with composer dependencies within the Drupal application.

Running drush make with dev.make includes two other Drush Make files: apa-cms.make (the application make file) and drupal-core.make. Only dev dependencies should go in dev.make. Application dependencies go into apa-cms.make. Any core patches that need to be applied go into drupal-core.make.

Our Jenkins server builds the prod.make file instead of dev.make. Any production-specific modules would go in the prod.make file.
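
Presumably the Jenkins job runs the same Drush Make step as rebuild.sh, just pointed at prod.make (the exact job configuration is assumed here):

drush make -y $PROJECT_ROOT/drupal/make/prod.make $DOCUMENT_ROOT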

Our make files for this project look like this so far:

dev.make

core: "8.x"

api: 2

defaults:
  projects:
    subdir: "contrib"

includes:
  - "apa-cms.make"

projects:
  devel:
    version: "1.x-dev"

apa-cms.make

core: "8.x"

api: 2

defaults:
  projects:
    subdir: "contrib"

includes:
  - "drupal-core.make"

projects:
  address:
    version: "1.0-beta2"
  composer_manager:
    version: "1.0-rc1"
  config_update:
    version: "1.x-dev"
  ctools:
    version: "3.0-alpha17"
  draggableviews:
    version: "1.x-dev"
  ds:
    version: "2.0"
  features:
    version: "3.0-alpha4"
  field_collection:
    version: "1.x-dev"
  field_group:
    version: "1.0-rc3"
  juicebox:
    version: "2.0-beta1"
  layout_plugin:
    version: "1.0-alpha19"
  libraries:
    version: "3.x-dev"
  menu_link_attributes:
    version: "1.0-beta1"
  page_manager:
    version: "1.0-alpha19"
  pathauto:
    type: "module"
    download:
      branch: "8.x-1.x"
      type: "git"
      url: "http://github.com/md-systems/pathauto.git"
  panels:
    version: "3.0-alpha19"
  token:
    version: "1.x-dev"
  zurb_foundation:
    version: "5.0-beta1"
    type: "theme"

libraries:
  juicebox:
    download:
      type: "file"
      url: "https://www.dropbox.com/s/hrthl8t1r9cei5k/juicebox.zip?dl=1"

(once this project goes live, we will pin the version numbers)

drupal-core.make

core: "8.x"

api: 2

projects:
  drupal:
    version: 8.0.0
    patch:
      - https://www.drupal.org/files/issues/2611758-2.patch

prod.make

core: "8.x"

api: 2

includes:
  - "apa-cms.make"

projects:
  apa_profile:
    type: "profile"
    subdir: "."
    download:
      type: "copy"
      url: "file://./drupal/profiles/apa_profile"

At the root of our project we also have a Gemfile, specifically to install the compass compiler along with various Sass libraries. We install these tools on the host machine and "watch" the theme directories from the host; vagrant rsync-auto picks up any changed files and rsyncs them to the drupal8-container.

bundler

From the project root, installing these dependencies and running a compass watch is simple:

$ bundle
$ bundle exec compass watch path/to/theme
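
The Gemfile itself can be as small as the following sketch (the exact gems and versions your theme needs may differ):

cat > Gemfile <<'EOF'
source 'https://rubygems.org'

gem 'sass'
gem 'compass'
EOF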

bower

We pull in any 3rd party front-end libraries such as Foundation, Font Awesome, etc... using Bower. From within the theme directory:

$ bower install

There are a few things we do not commit to the application repo as a result of the above commands:

  • The CSS directory
  • Bower Components directory

Deploy process

As I stated earlier, we utilize Jenkins CI to build an artifact that we can deploy. Within the Jenkins job that handles deploys, each of the above steps is executed to create a deployable document root. Projects that we build to run on Acquia or Pantheon have an additional build step that pushes the updated artifact to the respective repositories at those hosts, to take advantage of the automation that Pantheon and Acquia provide.

Conclusion

Although this wasn't an exhaustive walk-through of how we structure and build sites using Drupal, it should give you a general idea of how we do it. If you have specific questions as to why we go through this entire build process just to set up Drupal, please leave a comment. I would love to continue the conversation.

Look out for a video on this topic in the coming weeks. I covered a lot in this post without going into much detail; the intent was to give a 10,000-foot view of the process. The upcoming video will get much closer to the tarmac!

As an aside, one caveat that we ran into with setting up default configuration in our installation profile was with Configuration Management UUIDs. You can only sync configuration between sites that are clones. We have overcome this limitation with a workaround in our installation profile; I'll leave that topic for my next blog post in a few weeks.

Nov 14 2015
Nov 14

Shoov.io is a nifty website testing tool created by Gizra; we at ActiveLAMP were first introduced to it at DrupalCon LA. Shoov.io is built on, you guessed it, Drupal 7, and it is an open source visual regression toolkit.

Shoov.io uses webdrivercss, graphicsmagick, and a few other libraries to compare images. Once the images are compared, you can visually inspect the changes in the Shoov.io online app. When installing Shoov you can choose to install it directly into your project directory/repository, or use a separate directory/repository to house all of your tests and screenshots. When we first tested Shoov we kept it in a separate directory, but with our most recent project we opted to install it directly into the project, with the hope of having it run on a commit or pull-request basis using Travis CI and SauceLabs.


Installation

To get Shoov installed, navigate into your project using the terminal. For this walkthrough I will assume that you want to install it directly into your project.

Install the Yeoman Shoov generator globally (may have to sudo)

npm install -g mocha yo generator-shoov

Make sure you have Composer installed globally

curl -sS https://getcomposer.org/installer | php
sudo mv composer.phar /usr/local/bin/composer

Make sure you have Brew installed (MacOSX)

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Install GraphicsMagick (MacOSX)

brew install graphicsmagick

Next you will need to install dependencies if you don't have them already.

npm install -g yo
npm install -g file-type

Now we can build the test suite using the Yeoman generator*

yo shoov --base-url=http://activelamp.com

*running this command may give you more dependencies you need to install first.

This generator will scaffold all of the directories that you need to start writing tests for your project. It will also give you some that you may not need at the moment, such as Behat. You will find the example test, named test.js, within the test/visual-monitor directory. We like to split our tests into multiple files, so we might rename our test homepage.js. Here is what the homepage.js file looks like when you first open it:

'use strict'

var shoovWebdrivercss = require('shoov-webdrivercss')






var capsConfig = {
  chrome: {
    browser: 'Chrome',
    browser_version: '42.0',
    os: 'OS X',
    os_version: 'Yosemite',
    resolution: '1024x768',
  },
  ie11: {
    browser: 'IE',
    browser_version: '11.0',
    os: 'Windows',
    os_version: '7',
    resolution: '1024x768',
  },
  iphone5: {
    browser: 'Chrome',
    browser_version: '42.0',
    os: 'OS X',
    os_version: 'Yosemite',
    chromeOptions: {
      mobileEmulation: {
        deviceName: 'Apple iPhone 5',
      },
    },
  },
}

var selectedCaps = process.env.SELECTED_CAPS || undefined
var caps = selectedCaps ? capsConfig[selectedCaps] : undefined

var providerPrefix = process.env.PROVIDER_PREFIX
  ? process.env.PROVIDER_PREFIX + '-'
  : ''
var testName = selectedCaps
  ? providerPrefix + selectedCaps
  : providerPrefix + 'default'

var baseUrl = process.env.BASE_URL
  ? process.env.BASE_URL
  : 'http://activelamp.com'

var resultsCallback = process.env.DEBUG
  ? console.log
  : shoovWebdrivercss.processResults

describe('Visual monitor testing', function() {
  this.timeout(99999999)
  var client = {}

  before(function(done) {
    client = shoovWebdrivercss.before(done, caps)
  })

  after(function(done) {
    shoovWebdrivercss.after(done)
  })

  it('should show the home page', function(done) {
    client
      .url(baseUrl)
      .webdrivercss(
        testName + '.homepage',
        {
          name: '1',
          exclude: [],
          remove: [],
          hide: [],
          screenWidth: selectedCaps == 'chrome' ? [640, 960, 1200] : undefined,
        },
        resultsCallback
      )
      .call(done)
  })
})

Modifications

We prefer not to repeat configuration in our projects, so we move the configuration setup to a file outside of the test folder and require it. We create this file by copying the config out of the test above and adding module.exports for each of the variables. Our config file looks like this:

var shoovWebdrivercss = require('shoov-webdrivercss')






var capsConfig = {
  chrome: {
    browser: 'Chrome',
    browser_version: '42.0',
    os: 'OS X',
    os_version: 'Yosemite',
    resolution: '1024x768',
  },
  ie11: {
    browser: 'IE',
    browser_version: '11.0',
    os: 'Windows',
    os_version: '7',
    resolution: '1024x768',
  },
  iphone5: {
    browser: 'Chrome',
    browser_version: '42.0',
    os: 'OS X',
    os_version: 'Yosemite',
    chromeOptions: {
      mobileEmulation: {
        deviceName: 'Apple iPhone 5',
      },
    },
  },
}

var selectedCaps = process.env.SELECTED_CAPS || undefined
var caps = selectedCaps ? capsConfig[selectedCaps] : undefined

var providerPrefix = process.env.PROVIDER_PREFIX
  ? process.env.PROVIDER_PREFIX + '-'
  : ''
var testName = selectedCaps
  ? providerPrefix + selectedCaps
  : providerPrefix + 'default'

var baseUrl = process.env.BASE_URL
  ? process.env.BASE_URL
  : 'http://activelamp.com'

var resultsCallback = process.env.DEBUG
  ? console.log
  : shoovWebdrivercss.processResults

module.exports = {
  caps: caps,
  selectedCaps: selectedCaps,
  testName: testName,
  baseUrl: baseUrl,
  resultsCallback: resultsCallback,
}

Once we have this set up, we need to require it in our test and rewrite the test's variables to work with the new configuration file. That file now looks like this:

'use strict'

var shoovWebdrivercss = require('shoov-webdrivercss')
var config = require('../configuration.js')

describe('Visual monitor testing', function() {
  this.timeout(99999999)
  var client = {}

  before(function(done) {
    client = shoovWebdrivercss.before(done, config.caps)
  })

  after(function(done) {
    shoovWebdrivercss.after(done)
  })

  it('should show the home page', function(done) {
    client
      .url(config.baseUrl)
      .webdrivercss(
        config.testName + '.homepage',
        {
          name: '1',
          exclude: [],
          remove: [],
          hide: [],
          screenWidth:
            config.selectedCaps == 'chrome' ? [640, 960, 1200] : undefined,
        },
        config.resultsCallback
      )
      .call(done)
  })
})

Running the test

Now we can run our test. For initial testing, if you don't have a BrowserStack or SauceLabs account, you can test using PhantomJS.

Note: You must have a repository or the test will fail.

In another terminal window run:

phantomjs --webdriver=4444

Return to the original terminal window and run:

SELECTED_CAPS=chrome mocha

This will run the tests using the "chrome" capabilities specified in the configuration file, along with the screenWidths specified within each test.
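
The other environment variables read by configuration.js compose the same way. For example, to emulate the iPhone 5 capabilities against a local build (the URL here is hypothetical) and dump raw results instead of posting them to Shoov:

BASE_URL=http://localhost:4000 SELECTED_CAPS=iphone5 DEBUG=1 mocha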

Once the test runs you should see that it has passed. Of course, our test passed because we didn't have anything to compare it to; this first run creates your initial baseline images. You will want to review these images in the webdrivercss directory and decide if you need to fix your site, your tests, or both. You may have to remove, exclude, or hide elements from your tests:

  • Removing an element completely rips it from the DOM for the test and will shift your site around.
  • Excluding creates a black box over the content you don't want to show up; this is great for areas where you want to keep a consistent layout and the item is a fixed size.
  • Hiding an element hides it from view; it works similar to remove but works better with child elements outside of the parent.

Once you review the baseline images, you may want to take the time to commit and push the new images to GitHub (this commit will be the one that appears in the interface later).

Comparing the Regressions

Once you modify your test or site, you can test it against the existing baseline. Now that you probably have a regression, go to the Shoov interface and select Visual Regression. The commit from your project will appear in a list; click it to view the regressions, take action on any issues, or save a new baseline. Only images with a regression show up in the interface, and only tests with regressions show up in the list.

What's Next

You can view the standalone GitHub repository here.

This is just the tip of the iceberg for us with Visual Regression testing. We hope to share more about our testing process and how we are using Shoov for our projects. Don't forget to share or comment if you like this post.

Oct 17 2015
Oct 17

A little over a year ago the ActiveLAMP website underwent a major change -- we made the huge decision of moving away from using Drupal to manage its content in favor of building it as a static HTML site using Jekyll, hosted on Amazon S3. Not only did this dramatically simplify our development stack, it also trimmed our server requirements down to the bare minimum. Now we are just hosting everything on a file storage server like it's 1993.

A few months ago we identified the need to restructure our URL schemes as part of an ongoing SEO campaign. As easy as that sounds, it necessitates implementing 301 redirects from the old URL scheme to the newer, more SEO-friendly version.

I'm gonna detail how I managed to (1) implement these redirects quite easily using an nginx service acting as a proxy, and (2) achieve parity between our local and production environments while keeping everything lightweight with the help of Docker.

Nginx vs Amazon S3 Redirects

S3 is a file storage service offered by AWS that not only allows you to store files but also allows you to host static websites in conjunction with Route 53. Although S3 gives you the ability to specify redirects, you'll need to use S3-specific configuration and routines. This alone wouldn't be ideal because not only would it tie us to S3 by the hips, but it is not a methodology that we could apply to any other environment (i.e. testing and dev environments on our local network and machines). For these reasons, I opted to use nginx as a very thin reverse proxy to accomplish the job.

Configuring Nginx

Rather than compiling the list of redirects manually, I wrote a tiny Jekyll plugin that can do it faster and more reliably. The plugin allows me to specify certain things within the main Jekyll configuration file and it will generate the proxy.conf file for me:



nginx:
    proxy_host: <S3 bucket URL>
    proxy_port: 80
    from_format: "/:year/:month/:day/:title/"

    redirects:
        - { from: "^/splash(.*)", to: "/$1", type: redirect }

With this in place, I am able to generate the proxy.conf by simply issuing this command:

$ jekyll nginx_config > proxy.conf

This command will produce a proxy.conf file which will look like this:



rewrite ^/2008/09/21/drupalcampla\-revision\-control\-presentation/?(\?.*)?$ /blog/drupal/drupalcampla-revision-control-presentation$1 permanent;




rewrite ^/blog/development/aysnchronous\-php\-with\-message\-queues/?(\?.*)?$ /blog/development/asynchronous-php-with-message-queues/$1 permanent;


location / {
	proxy_set_header Host <S3 bucket URL>;
	proxy_set_header X-Real-IP $remote_addr;
	proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
	proxy_pass http://<S3 bucket URL>:80;
}

You probably noticed that this is not a complete nginx configuration. However, all that needs to be done is to define a server directive which imports this file:



server {
    listen 80;
    include /etc/nginx/proxy.conf;
}

Here we have an nginx proxy listening on port 80 that knows how to redirect old URLs to the new ones and pass on any request to the S3 bucket on AWS.

From this point, I was able to change the DNS records on our domain to point to the nginx proxy service instead of pointing directly at S3.

Check out the documentation for more ways of specifying redirects and more advanced usage.

Docker

Spinning up the proxy locally is a breeze with the help of Docker. Doing this with Vagrant and a provisioner would require a lot of boilerplate and code. With Docker, everything (with the exception of the config file that is automatically generated) is under 10 lines!

The Dockerfile

Here I am using the official nginx image straight from Docker Hub, with some minor modifications:



FROM nginx


RUN rm /etc/nginx/conf.d/default.conf


ADD server.conf /etc/nginx/conf.d/
ADD proxy.conf /etc/nginx/

The build/nginx directory will contain everything the nginx proxy will need: the server.conf that you saw from the previous section, and the proxy.conf file which was generated by the jekyll nginx_config command.

Automating it with Grunt

Since we are using generator-jekyllrb, a Yeoman generator for Jekyll sites which uses Grunt to run a gamut of various tasks, I just had to write a grunt proxy task which does all the needed work when invoked:



...

grunt.initConfig({
    ...
    local_ip: process.env.LOCAL_IP,
    shell: {
        nginx_config: {
            command: 'jekyll nginx_config --proxy_host=192.168.0.3 --proxy_port=8080 --config=_config.yml,_config.build.yml --source=app > build/nginx/proxy.conf'
        },
        docker_build: {
            command: 'docker build -t jekyll-proxy build/nginx'
        },
        docker_run: {
            command: 'docker run -d -p 80:80 jekyll-proxy'
        }
    },
    ...
});

...

grunt.registerTask('proxy', [
    'shell:nginx_config',
    'shell:docker_build',
    'shell:docker_run'
]);

This requires the grunt-shell plugin.

With this in place, running grunt proxy will prepare the configuration, build the image, and run the proxy on http://192.168.99.100, where 192.168.99.100 is the address of the Docker host VM on my machine.

Note that this is a very simplified version of the actual Grunt task config that we actually use which just serves to illustrate the meager list of commands that is required to get the proxy configured and running.

I have set up a GitHub repository that replicates this set-up plus the actual Grunt task configuration we use that adds more logic around things like an auto-rebuilding the Docker image, cleaning up of stale Docker processes, configuration for different build parameters for use in production, etc. You can find it here: bezhermoso/jekyll-nginx-proxy-with-docker-demo.

Oct 10 2015
Oct 10

Running headless Drupal with a separate JavaScript framework on the front-end can provide amazing user experiences and easy theming. However, working with content editors under this separation can prove tricky.

Problem

User story: As part of the content team, I need to be able to see a preview of the article on a separate front-end (Ember) application without ever saving the node.

As I started reading this user story I was tasked with, the wheels started turning as to how I would complete it. I had to get information that isn't saved anywhere to an Ember front-end application with no POST abilities. I love challenges that take a lot of thought before jumping in. I talked with some of the other developers here and came up with a pretty nifty solution to the problem, which, come to find out, was just a mis-communication on the story... and when I was done the client was still excited at the possibilities it could open up. Keep in mind this was still in its early stages and there are still a lot of holes to fill.

Solution

The first thing I did was add a Preview link and attach a JavaScript file to the node (article type) edit page via a simple hook_form_FORM_ID_alter().



function mymodule_form_article_node_form_alter(&$form, &$form_state, $form_id) {
  $form['#attached']['js'][] = drupal_get_path('module', 'mymodule') . '/js/preview.js';
  $form['actions']['preview_new'] = [
    '#weight' => '40',
    // The anchor's id is what preview.js binds to ('#preview-new' below).
    '#markup' => '<a href="#" id="preview-new">Preview on front-end</a>',
  ];
}

Pretty simple. Since it was just the beginning stages, I just threw the link in an anchor tag to get it on the page, so it isn't pretty. Now to start gathering the node's data, since it won't be saved anywhere.

Next step: Gather node info in js

So obviously the next step was to gather up the node's information from the node edit screen to prepare for shipping it to the front-end application. Most of the time spent here was just trying to mimic the API that the Ember front-end was already expecting.


(function ($, Drupal) {
  Drupal.behaviors.openNewSite = {
    attach : function(context, settings) {
      viewInNewSite.init();
    }
  }
})(jQuery, Drupal);

var viewInNewSite = (function($) {

  
  var getMedia = function() {...};

  
  var getMessage = function() {...};

 
  var attachPreviewClick = function(link) {
    var $link = $(link);
    $link.on('click', function() {
      var newWindow = window.open('http://the-front-end.com/articles/preview/article-preview');
      var message = getMessage();
      
      setTimeout(function() {
        newWindow.postMessage(JSON.stringify(message), 'http://the-front-end.com');
      }, 1000);
    });
  };

  
  var init = function() {
    attachPreviewClick('#preview-new');
  };

  return {
    init: init
  }
})(jQuery);

Using window.open() to open what will be the route in the Ember application returns the newWindow for us to post a message to. (I used a setTimeout() here because the opening application took a while to get started and the message would get missed... this held me up for a while since I knew the message should be getting sent.) Then we use postMessage() on newWindow to ship off a message (our JSON object) to the opened window, regardless of whether it is on another domain. Insert security concerns here... but now we're ready to set up the front-end Ember application route.

To Ember!

The next step was to set up the Ember front-end application to listen for the message from the original window. Set up the basic route:


this.resource('articles', function() {
  this.route('preview', { path: '/preview/article-preview' })
  this.route('article', { path: '/:article_id' })
})

The application that I was previewing articles into already had a way to view articles by id, as you see in the above code, so I didn't want to duplicate anything. I wanted to use the same template and model for articles that were already developed. Since that was taken care of for me, it was just time to create a model for this page and make sure that I use the correct template. Start by creating the ArticlesPreviewRoute:

App.ArticlesPreviewRoute = (function() {

  /**
   * Adds event listener to the window for incoming messages.
   *
   * @return {Promise}
   */
  var getArticle = function() {
    return new Promise(function(resolve, reject) {
      window.addEventListener('message', function(event) {
        if (event.origin.indexOf('the-back-end.com') !== -1) {
          var data = JSON.parse(event.data);
          resolve(data.articles[0]);
        }
      }, false);
    });
  };

  return Ember.Route.extend({
    renderTemplate: function() {
      this.render('articles.article');
    },
    controllerName: 'articles.article',

    setupController: function(controller, model) {
      controller.set('model', model);
    },
    model: function(params) {
      return getArticle();
    }
  });
})();

The getArticle() function above returns a new Promise and adds an event listener that verifies that the message is coming from the correct origin. Clicking the link from Drupal now takes the content to a new tab and loads up the article. There are still some concerns to resolve, such as security measures and what happens if a user visits the path directly.

To cover the latter concern, a second promise either resolves or rejects if a set amount of time passes without a message coming from the origin window.

App.ArticlesPreviewRoute = (function() {
  var getArticle = function() {
    return new Promise(function(resolve, reject) {...});
  };

  // Rejects if the given number of seconds passes before the promise resolves.
  var previewWait = function(seconds, promise) {
    return new Promise(function(resolve, reject) {
      setTimeout(function() {
        reject({'Bad': 'No data Found'});
      }, seconds * 1000);
      promise.then(function(data) {
        resolve(data);
      }).catch(reject);
    });
  };

  return Ember.Route.extend({
    ...
    model: function(params) {
      var self = this;
      var article = previewWait(10, getArticle()).catch(function() {
        self.transitionTo('not-found');
      });
      return article;
    }
  });
})();

There you have it! A way to preview an article from a Drupal backend on an Ember front-end application without ever saving the article. A similar approach could be used with any of your favorite JavaScript frameworks. Plus, this could be advanced even further into an "almost live" updating front-end that constantly checks the state of the fields on the Drupal backend. There have been thoughts of turning this into a Drupal module with some extra bells and whistles for configuring the way the JSON object is structured to fit any front-end framework... Is there a need for this? Let us know in the comments below or tweet us! Now, go watch the video!

Sep 23 2015
Sep 23

This post is part 3 in the series "Hashing out a docker workflow". For background, checkout my previous posts.

Now that I've laid the groundwork for the approach that I want to take with local environment development with Docker, it's time to explore how to make the local environment "workable". In this post we will build on top of what we did in my last post, Docker and Vagrant, and create a working local copy that automatically updates the code inside the container running Drupal.

Requirements of a local dev environment

Before we get started, it is always a good idea to define what we expect to get out of our local development environment and define some requirements. You can define these requirements however you like, but since ActiveLAMP is an agile shop, I'll define our requirements as user stories.

User Stories

As a developer, I want my local development environment setup to be easy and automatic, so that I don't have to spend the entire day following a list of instructions. The fewer the commands, the better.

As a developer, my local development environment should run the exact same OS configuration as the stage and prod environments, so that we don't run into "works on my machine" scenarios.

As a developer, I want the ability to log into the local dev server / container, so that I can debug things if necessary.

As a developer, I want to work on files local to my host filesystem, so that the IDE I am working in is as fast as possible.

As a developer, I want the files that I change on my localhost to automatically sync to the guest filesystem that is running my development environment, so that I do not have to manually push or pull files to the local server.

Now that we know what done looks like, let's start fulfilling these user stories.

Things we get for free with Vagrant

We have all worked on projects that have a README file with a long list of steps just to set up a working local copy. To fulfill the first user story, we need to encapsulate all steps, as much as possible, into one command:

$ vagrant up

We got a good start on our one command setup in my last blog post. If you haven't read that post yet, go check it out now. We are going to be building on that in this post. My last post essentially resolves the first three stories in our user story list. This is the essence of using Vagrant, to aid in setting up virtual environments with very little effort, and dispose them when no longer needed with vagrant up and vagrant destroy, respectively.

Since we will be defining Docker images and/or using existing docker containers from DockerHub, user story #2 is fulfilled as well.

For user story #3, it's not as straightforward to log into your Docker host. Typically with Vagrant you would type vagrant ssh to get into the virtual machine, but since our host machine's Vagrantfile is in a subdirectory called /host, you have to change into that directory first.

$ cd host
$ vagrant ssh

Another way you can do this is by using the vagrant global-status command. You can execute that command from anywhere and it will provide a list of all known virtual machines with a short hash in the first column. To ssh into any of these machines just type:

$ vagrant ssh <short-hash>

Replace <short-hash> with the actual hash of the machine.

Connecting into a container

Most containers run a single process and may not have an SSH daemon running. You can use the docker attach command to connect to any running container, but beware: if you didn't start the container with STDIN and STDOUT attached, you won't get very far.

Another option you have for connecting is using docker exec to start an interactive process inside the container. For example, to connect to the drupal-container that we created in my last post, you can start an interactive shell using the following command:

$ sudo docker exec -t -i drupal-container /bin/bash

This will return an interactive shell on the drupal-container that you will be able to poke around on. Once you disconnect from that shell, the process will end inside the container.

Getting files from host to app container

Our next two user stories have to do with working on files native to the localhost. When developing our application, we don't want to bake the source code into a docker image. Code is always changing and we don't want to rebuild the image every time we make a change during the development process. For production, we do want to bake the source code into the image, to achieve the immutable server pattern. However in development, we need a way to share files between our host development machine and the container running the code.

We've probably tried every approach available to us when it comes to working with shared files in Vagrant. VirtualBox shared folders are just way too slow. NFS shared folders were a little faster, but still really slow. We've used sshfs to connect the remote filesystem directly to the localhost, which created a huge performance increase in how the app responded, but it was a pain in the neck in terms of how we used VCS, and it caused performance issues with the IDE; PhpStorm had to index files over a network connection, albeit a local network connection, which was still noticeably slower when working on large codebases like Drupal.

The solution that we use to date is rsync, specifically vagrant-gatling-rsync. You can check out the vagrant-gatling-rsync plugin on GitHub, or just install it by typing:

$ vagrant plugin install vagrant-gatling-rsync

Syncing files from host to container

To achieve getting files from our localhost to the container we must first get our working files to the docker host. Using the host Vagrantfile that we built in my last blog post, this can be achieved by adding one line:

config.vm.synced_folder '../drupal/profiles/myprofile', '/srv/myprofile', type: 'rsync'

Your Vagrantfile within the /host directory should now look like this:





Vagrant.configure(2) do |config|
config.vm.box = "ubuntu/trusty64"
config.vm.hostname = "docker-host"
config.vm.provision "docker"
config.vm.network :forwarded_port, guest: 80, host: 4567
config.vm.synced_folder '../drupal/profiles/myprofile', '/srv/myprofile', type: 'rsync'
end

We are syncing a Drupal profile from within the drupal directory off the project root to the /srv/myprofile directory within the Docker host.

Now it's time to add an argument to run when docker run is executed by Vagrant. To do this we can specify the create_args parameter in the container Vagrant file. Add the following line into the container Vagrantfile:

docker.create_args = ['--volume="/srv/myprofile:/var/www/html/profiles/myprofile"']

This file should now look like:





Vagrant.configure(2) do |config|
config.vm.provider "docker" do |docker|
docker.vagrant_vagrantfile = "host/Vagrantfile"
docker.image = "drupal"
docker.create_args = ['--volume="/srv/myprofile:/var/www/html/profiles/myprofile"']
docker.ports = ['80:80']
docker.name = 'drupal-container'
end
end

This parameter maps the directory we are rsyncing to on the Docker host to the profiles directory within the Drupal installation that ships with the Drupal Docker image from Docker Hub.
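
To verify the mapping, the docker exec trick from earlier lets you list the mounted profile from inside the running container:

$ sudo docker exec -t -i drupal-container ls /var/www/html/profiles/myprofile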

Create the installation profile

This blog post doesn't intend to go into how to create a Drupal install profile, but if you aren't using profiles for building Drupal sites, you should definitely have a look. If you have questions regarding why using Drupal profiles are a good idea, leave a comment.

Lets create our simple profile. Drupal requires two files to create a profile. From the project root, type the following:

$ mkdir -p drupal/profiles/myprofile
$ touch drupal/profiles/myprofile/{myprofile.info,myprofile.profile}

Now edit each file that you just created with the minimum information that you need.

myprofile.info

name = Custom Profile
description = My custom profile
core = 7.x

myprofile.profile


<?php

function myprofile_install_tasks() {

}

Start everything up

We now have everything we need in place to just type vagrant up and also have a working copy. Go to the project root and run:

$ vagrant up

This will build your docker host as well as create your drupal container. As I mentioned in a previous post, starting up the container sometimes requires me to run vagrant up a second time. I'm still not sure what's going on there.

After everything is up and running, you will want to run the rsync-auto command for the docker host, so that any changes you make locally traverses down to the docker host and then to the container. The easiest way to do this is:

$ cd host
$ vagrant gatling-rsync-auto

Now visit the URL to your running container at http://localhost:4567 and you should see the new profile that you've added.

Custom Install Profile

Conclusion

We covered a lot of ground in this blog post. We were able to accomplish all of the stated requirements above with just a little tweaking of a couple Vagrantfiles. We now have files that are shared from the host machine all the way down to the container that our app runs on, utilizing features built into Vagrant and Docker. Any files we change in our installation profile on our host immediately sync to the drupal-container on the Docker host.

At ActiveLAMP, we use a much more robust approach to build out installation profiles, utilizing Drush Make, which is out of scope for this post. This blog post simply lays out the general idea of how to accomplish getting a working copy of your code downstream using Vagrant and Docker.

In my next post, I'll continue to build on what I did here today, and introduce automation to automatically bake a Docker image with our source code baked in, utilizing Jenkins. This will allow us to release any number of containers easily to scale our app out as needed.

Sep 22 2015
Sep 22

RNG has tagged numerous alpha releases and maintained stability with Drupal core. Because of this stability I will be accelerating to 1.0 by skipping beta and going straight to RC. However, RC will not occur until the following requirements are satisfied:

  • Drupal 8 must tag its first release candidate.
Aug 14 2015
Aug 14

I recently had time to install and take a look at Drupal 8. I am going to share my first take on Drupal 8 and some of the hang-ups that I came across. I read a few other blog posts that mentioned not to rely too heavily on one source for D8 documentation; with the rapid pace of change in D8, information has become outdated rather quickly.

Getting Started

My first instinct was to run over to drupal.org, grab a copy of the code base, and set it up on MAMP. Then I saw an advertisement for running Drupal 8 on Acquia Cloud Free and decided that would probably be a great starting point. Running through the setup for Acquia took only about eight minutes. This was great; having used Acquia and Pantheon before, this was an easy way to get started.

Next, I decided to pull down the code and get it running locally so that I could start testing out adding my own theme. Well... what took eight minutes on Acquia took considerably longer for me.

Troubleshooting and Upgrading

The first roadblock that I ran into was that my MAMP was not running the required version of PHP (5.5.9 or higher), so I decided to upgrade to MAMP 3 to make life a little bit nicer. After setting up MAMP from scratch and making sure the other projects that I had installed with MAMP still worked correctly, I was able to continue with the site install.

The second roadblock that I came across was not having Drush 7+ installed. The command line doesn't come out and say that you need to upgrade Drush (the docs on drupal.org do, and if you search the error it is one of the first results); it just spits out this error:

Fatal error: Class 'Drupal\Core\Session\AccountInterface' not found in .../docroot/core/includes/bootstrap.inc on line 64
Drush command terminated abnormally due to an unrecoverable error.

The next roadblock was that I was trying to clear cache with Drush and didn't bother to read the documentation on drupal.org outlining that drush cc all no longer exists and is replaced by drush cr; Drush now uses the cache-rebuild command. However, this is not exactly clear, given that running drush cc all produces the same exact error as the one above.
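
In short:

drush cc all   # Drupal 7 era: clear all caches (gone for Drupal 8 sites)
drush cr       # Drupal 8: rebuild caches, short for cache-rebuild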

Finally everything was set up and working properly. I decided to look around for a good guide to jumpstart getting the theme set up and landed on this Lullabot article. For the most part, the article was straightforward. Some parts didn't work and I skipped them; others didn't work and I tried to figure out why. Here is the list of things that I couldn't figure out:

  • Drush wouldn't change the default theme (complained about bootstrap level even though I was in my docroot)
  • Stylesheets-remove didn't work inside of my theme.info.yml file
  • Specifying my CSS in my theme.libraries.yml file seemed to be problematic, but I got it working after some time (probably user error); see the sketch below.
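
For reference, a working libraries entry can be as small as the following sketch (the library name, theme name, and CSS path are placeholders; the library still has to be attached, for example via a libraries entry in the theme's .info.yml file):

cat > mytheme.libraries.yml <<'EOF'
global-styling:
  css:
    theme:
      css/style.css: {}
EOF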

Conclusion

Drupal 8 looks clean and feels sleek and slimmed down. I'm really excited about the direction Drupal is headed. Overall, the interface within Drupal hasn't changed too drastically (aside from some naming conventions, e.g. Extend instead of Modules). It looks like one of our current sites running Panopoly, which has a great look and feel compared to out-of-the-box D7. I really like the simplicity of separating out YAML files for specific needs when setting up a theme.

I look forward to writing more blog posts about Drupal 8 and maybe some tutorials and insights. Let us know your thoughts and ideas in the comments section below.

Jul 19 2015
Jul 19

This post is part 2 in a series of Docker posts hashing out a new docker workflow for our team. To gain background of what I want to accomplish with docker, checkout my previous post hashing out a docker workflow.

In this post, we will venture into setting up docker locally, in the same repeatable way from developer to developer, by using Vagrant. By the end of this post, we'll have Drupal running in a container, using Docker. This post is focused on hashing out a Docker workflow with Vagrant, less about Drupal itself. I want to give a shout out to the folks that maintain the Drupal Docker image in the Docker Registry. It's definitely worth checking out, and using that as a base to build FROM for your custom images.

Running Docker Locally

There are several ways to go about setting up Docker locally. I don't intend to walk you through how to install Docker; you can find step-by-step installation instructions based on your OS in the Docker documentation. However, I am leaning toward taking an unconventional approach to installing Docker locally, one not mentioned in the Docker documentation. Let me tell you why.

Running a Docker host

For background, we are specifically an all Mac OS X shop at ActiveLAMP, so I'll be speaking from this context.

Unfortunately you can't run Docker natively on OS X, Docker needs to run in a Virtual Machine with an operating system such as Linux. If you read the Docker OS X installation docs, you see there are two options for running Docker on Mac OS X, Boot2Docker or Kitematic.

Running either of these two packages looks appealing to get Docker up locally very quickly (and you should use one of these packages if you're just trying out Docker), but thinking big picture and how we plan to use Docker in production, it seems that we should take a different approach locally. Let me tell you why I think you shouldn't use Boot2Docker or Kitematic locally, but first a rabbit trail.

Thinking ahead (the rabbit trail)

My opinion may change after gaining more real world experience with Docker in production, but the mindset that I'm coming from is that in production our Docker hosts will be managed via Chef.

Our team has extensive experience using Chef to manage infrastructure at scale. It doesn't seem quite right to completely abandon Chef yet, since Docker still needs a machine to run the Docker host. Chef is great for machine level provisioning.

My thought is that we would use Chef to manage the various Docker hosts that we deploy containers to and use the Dockerfile with Docker Compose to manage the actual app container configuration. Chef would be used in a much more limited capacity, only managing configuration on a system level not an application level. One thing to mention is that we have yet to dive into the Docker specific hosts such as AWS ECS, dotCloud, or Tutum. If we end up adopting a service like one of these, we may end up dropping Chef all together, but we're not ready to let go of those reigns yet.

One step at a time for us. The initial goal is to get application infrastructure into immutable containers managed by Docker. Not ready to make a decision on what is managing Docker or where we are hosting Docker, that comes next.

Manage your own Docker Host

The main reason I was turned off from using Boot2Docker or Kitematic is that it creates a Virtual Machine in Virtualbox or VMWare from a default box / image that you can't easily manage with configuration management. I want control of the host machine that Docker is run on, locally and in production. This is where Chef comes into play in conjunction with Vagrant.

Local Docker Host in Vagrant

As I mentioned in my last post, we are no stranger to Vagrant. Vagrant is great for managing virtual machines. If Boot2Docker or Kitematic are going to spin up a virtual machine behind the scenes in order to use Docker, then why not spin up a virtual machine with Vagrant? This way I can manage the configuration with a provisioner, such as Chef. This is the reason I've decided to go down the Vagrant with Docker route, instead of Boot2Docker or Kitematic.

The latest version of Vagrant ships with a Docker provider built-in, so that you can manage Docker containers via the Vagrantfile. The Vagrant Docker integration was a turn off to me initially because it didn't seem it was very Docker-esque. It seemed Vagrant was just abstracting established Docker workflows (specifically Docker Compose), but in a Vagrant syntax. However within the container Vagrantfile, I saw you can also build images from a Dockerfile, and launch those images into a container. It didn't feel so distant from Docker any more.

It seems that there might be a little overlap between what Vagrant and Docker do, but at the end of the day it's a matter of figuring out the right combination of using the tools together. The boundary being that Vagrant should be used for "orchestration" and Docker for application infrastructure.

When all is setup we will have two Vagrantfiles to manage, one to define containers and one to define the host machine.

Setting up the Docker Host with Vagrant

The first thing to do is to define the Vagrantfile for your host machine. We will be referencing this Vagrantfile from the container Vagrantfile. The easiest way to do this is to just type the following in an empty directory (your project root):

$ vagrant init ubuntu/trusty64

You can configure that Vagrantfile however you like. Typically you would also use a tool like Chef solo, Puppet, or Ansible to provision the machine as well. For now, just to get Docker installed on the box we'll add to the Vagrantfile a provision statement. We will also give the Docker host a hostname and a port mapping too, since we know we'll be creating a Drupal container that should EXPOSE port 80. Open up your Vagrantfile and add the following:

config.vm.hostname = "docker-host"
config.vm.provision "docker"
config.vm.network :forwarded_port, guest: 80, host: 4567

This ensures that Docker is installed on the host when you run vagrant up, as well as maps port 4567 on your local machine to port 80 on the Docker host (guest machine). Your Vagrantfile should look something like this (with all the comments removed):





Vagrant.configure(2) do |config|
config.vm.box = "ubuntu/trusty64"
config.vm.hostname = "docker-host"
config.vm.provision "docker"
config.vm.network :forwarded_port, guest: 80, host: 4567
end

Note: This post is not intended to walk through the fundamentals of Vagrant, for further resources on how to configure the Vagrantfile check out the docs.

As I mentioned earlier, we are going to end up with two Vagrantfiles in our setup. I also mentioned the container Vagrantfile will reference the host Vagrantfile. This means the container Vagrantfile is the configuration file we want used when vagrant up is run. We need to move the host Vagrantfile to another directory within the project, out of the project root directory. Create a host directory and move the file there:

$ mkdir host
$ mv Vagrantfile !$

Bonus Tip: The !$ destination that I used when moving the file is a shell shortcut to use the last argument from the previous command.

Define the containers

Now that we have the host Vagrantfile defined, lets create the container Vagrantfile. Create a Vagrantfile in the project root directory with the following contents:





Vagrant.configure(2) do |config|
config.vm.provider "docker" do |docker|
docker.vagrant_vagrantfile = "host/Vagrantfile"
docker.image = "drupal"
docker.ports = ['80:80']
docker.name = 'drupal-container'
end
end

To summarize the configuration above, we are using the Vagrant Docker provider, we have specified the path to the Docker host Vagrant configuration that we setup earlier, and we defined a container using the Drupal image from the Docker Registry along with exposing some ports on the Docker host.

Start containers on vagrant up

Now it's time to start up the container. It should be as easy as going to your project root directory and typing vagrant up. It's almost that easy. For some reason after running vagrant up I get the following error:

A Docker command executed by Vagrant didn't complete successfully!
The command run along with the output from the command is shown
below.

Command: "docker" "ps" "-a" "-q" "--no-trunc"

Stderr: Get http:///var/run/docker.sock/v1.19/containers/json?all=1: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?

Stdout:

I've gotten around this by just running vagrant up again. If anyone has ideas about what is causing that error, please feel free to leave a comment.

Drupal in a Docker Container

You should now be able to navigate to http://localhost:4567 to see the installation screen of Drupal. Go ahead and install Drupal using an SQLite database (we didn't set up a MySQL container) to verify that everything is working. Pretty cool stuff!
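
Alternatively, if you would rather script the install and have Drush available, something along these lines should work (a sketch; the SQLite database path and account details here are assumptions):

$ drush site-install standard -y \
    --db-url=sqlite://sites/default/files/.ht.sqlite \
    --account-name=admin --account-pass=admin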

Development Environment

There are other things I want to accomplish with our local Vagrant environment to make it easy to develop on, such as setting up synced folders and using the vagrant rsync-auto tool. I also want to customize our Drupal builds with Drush Make, to make developing on Drupal much more efficient when adding modules, updating core, etc. I'll leave those details for another post, as this one has already grown long.

Conclusion

As you can see, you don't have to use Boot2Docker or Kitematic to run Docker locally. I would advise that if you just want to figure out how Docker works, then you should use one of these packages. Thinking longer term, your local Docker Host should be managed the same way your production Docker Host(s) are managed. Using Vagrant, instead of Boot2Docker or Kitematic, allows me to manage my local Docker Host similar to how I would manage production Docker Hosts using tools such as Chef, Puppet, or Ansible.

In my next post, I'll build on what we did here today, and get our Vagrant environment into a working local development environment.

Jul 15 2015
Jul 15

Regardless of industry, staff size, and budget, many of today’s organizations have one thing in common: they’re demanding the best content management systems (CMS) to build their websites on. With requirement lists that can range from 10 to 100 features, an already short list of “best CMS options” shrinks even further once “user-friendly”, “rapidly-deployable”, and “cost-effective” are added to the list.

There is one CMS, though, that not only meets the core criteria of ease-of-use, reasonable pricing, and flexibility, but a long list of other valuable features, too: Drupal.

With Drupal, both developers and non-developer admins can deploy a long list of robust functionalities right out-of-the-box. This powerful, open source CMS allows for easy content creation and editing, as well as seamless integration with numerous 3rd party platforms (including social media and e-commerce). Drupal is highly scalable, cloud-friendly, and highly intuitive. Did we mention it’s effectively-priced, too?

In our “Why Drupal?” 3-part series, we’ll highlight some features (many which you know you need, and others which you may not have even considered) that make Drupal a clear front-runner in the CMS market.

For a personalized synopsis of how your organization’s site can be built on or migrated to Drupal with amazing results, grab a free ticket to Drupal GovCon 2015 where you can speak with one of our site migration experts for free, or contact us through our website.

_______________________________

SEO + Social Networking:

Unlike other content software, Drupal does not get in the way of SEO or social networking. By using a properly built theme–as well as add-on modules–a highly optimized site can be created. There are even modules that will provide an SEO checklist and monitor the site’s SEO performance. The Metatag module ensures continued support for the latest meta tags used by various social networking sites when content is shared from Drupal.


E-Commerce:

Drupal Commerce is an excellent e-commerce platform that uses Drupal’s native information architecture features. One can easily add desired fields to products and orders without having to write any code. There are numerous add-on modules for reports, order workflows, shipping calculators, payment processors, and other commerce-based tools.


Search:

Drupal’s native search functionality is strong. There is also a Search API module that allows site managers to build custom search widgets with layered search capabilities. Additionally, there are modules that enable integration of third-party search engines, such as Google Search Appliance and Apache Solr.

Third-Party Integration:

Drupal not only allows for the integration of search engines, but a long list of other tools, too. The Feeds module allows Drupal to consume structured data (for example, .xml and .json) from various sources. The consumed content can be manipulated and presented just like content that is created natively in Drupal. Content can also be exposed through a RESTful API using the Services module. The format and structure of the exposed content is also highly configurable, and requires no programming.
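
For example, once a Services REST endpoint is configured, a node becomes available to any HTTP client as structured data (a sketch; the /api path below is whatever you configure for your endpoint):

$ curl https://example.com/api/node/123.json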

Taxonomy + Tagging:

Taxonomy and tagging are core Drupal features. The ability to create categories (dubbed “vocabularies” by Drupal) and then create unlimited terms within that vocabulary is connected to the platform’s robust information architecture. To make taxonomy even easier, Drupal even provides a drag-n-drop interface to organize the terms into a hierarchy, if needed. Content managers are able to use vocabularies for various functions, eliminating the need to replicate efforts. For example, a vocabulary could be used for both content tagging and making complex drop-down lists and user groups, or even building a menu structure.


Workflows:

There are a few contributed modules that provide workflow functionality in Drupal. They all provide common functionality, along with unique features for various use cases. The most popular options are Maestro and Workbench.

Security:

Drupal has a dedicated security team that is very quick to react to vulnerabilities that are found in Drupal core as well as contributed modules. If a security issue is found within a contrib module, the security team will notify the module maintainer and give them a deadline to fix it. If the module does not get fixed by the deadline, the security team will issue an advisory recommending that the module be disabled, and will also classify the module as unsupported.

Cloud, Scalability, and Performance:

Drupal’s architecture makes it incredibly “cloud friendly”. It is easy to create a Drupal site that can be set up to auto-scale (i.e., add more servers during peak traffic times and shut them down when not needed). Some modules integrate with cloud storage such as Amazon S3. Further, Drupal is built for caching. By default, Drupal caches content in the database for quick delivery; support for other caching mechanisms (such as Memcache) can be added to make the caching lightning fast.
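
As a small sketch of that last point, wiring a Drupal 7 site to the contributed Memcache module typically takes just a few lines in settings.php (the paths assume the module lives in sites/all/modules):

$conf['cache_backends'][] = 'sites/all/modules/memcache/memcache.inc';
$conf['cache_default_class'] = 'MemCacheDrupal';
// Keep the form cache in the database, per the module's documentation.
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';
$conf['memcache_servers'] = array('127.0.0.1:11211' => 'default');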


Multi-Site Deployments:

Drupal is architected to allow for multiple sites to share a single codebase. This feature is built-in and, unlike WordPress, it does not require any cumbersome add-ons. This can be a tremendous benefit for customers who want to have multiple sites that share similar functionality. There are few–if any–limitations to a multi-site configuration. Each site can have its own modules and themes that are completely separate from the customer’s other sites.
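
To illustrate, a multi-site checkout commonly looks something like the sketch below (the domain names are hypothetical); Drupal matches the incoming hostname against the directories under sites/ to decide which settings.php, modules, and themes to load:

sites/
  default/settings.php          # the original site
  example.com/settings.php      # a second site with its own database
  store.example.com/            # a third site with its own extensions
    settings.php
    modules/
    themes/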

Want to know other amazing functionalities that Drupal has to offer? Stay tuned for the final installment of our 3-part “Why Drupal?” series!

Jul 08 2015
Jul 08


Regardless of industry, staff size, and budget, many of today’s organizations have one thing in common: they’re demanding the best content management systems (CMS) to build their websites on. With requirement lists that can range from 10 to 100 features, an already short list of “best CMS options” shrinks even further once “user-friendly”, “rapidly-deployable”, and “cost-effective” are added to the list.

There is one CMS, though, that not only meets the core criteria of ease-of-use, reasonable pricing, and flexibility, but a long list of other valuable features, too: Drupal.

With Drupal, both developers and non-developer admins can deploy a long list of robust functionalities right out-of-the-box. This powerful, open source CMS allows for easy content creation and editing, as well as seamless integration with numerous 3rd party platforms (including social media and e-commerce). Drupal is highly scalable, cloud-friendly, and highly intuitive. Did we mention it’s effectively-priced, too?

In our “Why Drupal?” 3-part series, we’ll highlight some features (many which you know you need, and others which you may not have even considered) that make Drupal a clear front-runner in the CMS market.

For a personalized synopsis of how your organization’s site can be built on or migrated to Drupal with amazing results, grab a free ticket to Drupal GovCon 2015 where you can speak with one of our site migration experts for free, or contact us through our website.

______

Drupal in Numbers (as of June 2014):

  • Market Presence: 1.5M sites
  • Global Adoption: 228 countries
  • Capabilities: 22,000 modules
  • Community: 80,000 members on Drupal.org
  • Development: 20,000 developers

Open Source:


The benefits of open source are exhaustively detailed all over the Internet. Drupal itself has been open source since its initial release on January 15, 2001. With thousands of developers reviewing and contributing code for nearly 15 years, Drupal has become exceptionally mature. All of the features and functionality outlined in our “Why Drupal?” series can be implemented with open source code.

Startup Velocity:

Similar to WordPress, deploying a Drupal site takes mere minutes, and the amount of out-of-the-box functionality is substantial. While there is a bit of a learning curve with Drupal, an experienced admin (non-developer) can have a small site deployed in a matter of days.


Information Architecture:

The ability to create new content types and add unlimited fields of varying types is a core Drupal feature. Imagine you are building a site that hosts events, and an “Event” content type is needed as part of the information architecture. With out-of-the-box Drupal, you can create the content type with just a few clicks–absolutely no programming required. Further, you can add additional fields such as event title, event date, event location, and keynote speaker. Each field has a structured data type, which means they aren’t just open text fields. Through contrib modules, there are dozens of other field types such as mailing address, email address, drop-down list, and more. Worth repeating: no programming is required to create new content types, nor to create new fields and add them to a new content type.


Asset Management:

There are a number of asset management libraries for Drupal, ensuring that users have the flexibility to choose the one that best suits their needs. One newer and increasingly popular asset management module in particular is SCALD (https://www.drupal.org/project/scald). One of the most important differences between SCALD and other asset management tools is that assets are not just files. In fact, files are just one type of asset. Other asset types include YouTube videos, Flickr galleries, tweets, maps, iFrames–even HTML snippets. SCALD also provides a framework for creating new types of assets (called providers). For more information on SCALD, please visit: https://www.drupal.org/node/2101855 and https://www.drupal.org/node/1895554


Curious about the other functionalities Drupal has to offer? Stay tuned for Part 2 of our “Why Drupal?” series!

Jul 03 2015
Jul 03

The Picture module is a backport of the Drupal 8 Responsive Image module. It allows you to select different images to be loaded for different devices and resolutions using media queries and Drupal’s image styles. You can also use the Image Replace module to specify a different image to load at certain breakpoints.

The Picture module gives you the correct formatting for an HTML5 <picture> element and includes a polyfill library for backwards compatibility with browsers that do not support it. Unfortunately, that list includes IE, Edge, Safari, iOS Safari, and Opera Mini; for details, see Can I Use.

The Picture module works great with Views. It supports WYSIWYG editing (including CKEditor), and it can be implemented directly in code if necessary.

Installation

Picture has two important dependencies:

  1. Breakpoints module
  2. Ctools module

You can install the Picture module using Drush.

You will also want the Media module and Image Replace module.
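
If you use Drush, downloading and enabling the whole stack can be done in two commands (the module machine names below are our best guess; verify them against each project page):

$ drush dl picture breakpoints ctools media image_replace
$ drush en picture breakpoints ctools media image_replace -y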

Setting Up

Once you have the modules installed and enabled you can start to configure your theme.

1. Set up your breakpoints

You can add breakpoint settings directly in Drupal under Configuration > Media > Breakpoints. Or, if you prefer, you can add them to the .info file of your theme. We added ours to our theme like this:

breakpoints[xxlarge] = (min-width: 120.063em)
breakpoints[xlarge]  = (min-width: 90.063em) and (max-width: 120em)
breakpoints[large]   = (min-width: 64.063em) and (max-width: 90em)
breakpoints[medium]  = (min-width: 40.063em) and (max-width: 64em)
breakpoints[small]   = (min-width: 0em) and (max-width: 40em)

2. Add Responsive Styles

From within the Drupal UI, you will want to go to Configuration > Media > Breakpoints:

  1. Click “Add Responsive Style”.
  2. Select which breakpoints you want to use for this specific style.
  3. Then you will choose an existing image style from the drop-down list that this style will clone.
  4. Finally you give it a base name for your new image styles. I would recommend naming it something logical and ending it with “_”.

You can now edit your image styles: do your normal scale-and-crop for different sizes, or set up Image Replace.

3. Setting up Picture Mappings

Once you have your image styles, you have to create a mapping that associates them with your breakpoints. Go to Configuration > Media > Picture Mappings:

  1. Click Add to add a new picture mapping
  2. Select your breakpoint group.
  3. Give your mapping an Administrative title (something you can pick out of a select list)
  4. Then you will select "use image style" and select each of the corresponding image styles that you set up from the select list that appears.

Once you have created the mapping, you can select it and use it in different areas of your site. For example, you can use it in the display of a node via the Manage Display section, formatting the field as Picture. You can also use it in Views by selecting the Picture format and choosing your picture mapping.

4. Setting up Image Replace

Click edit on your desired image style, and add the Replace Image effect. Replace can be combined with other effects as well, so if you need to scale and crop you can do that too.

Once you have your style set up, you need to specify a field whose image will replace the original at specific dimensions. We added a secondary image field to our content type, naming the two fields "horizontal" and "vertical"; that made sense for us because we only use the vertical field at smaller widths, where the content stretches vertically. You can use whatever naming convention works best for you.

On the main image field that you added to your view or display, edit the field settings; there is a section called Image Replace. Find the image style you set the Replace Image effect on, and select the field whose image should replace the current one.

Finished Product

Once that is done, you are all set. If you have any questions or comments, be sure to leave them in the comments section below.

Here is an example of how we used image replace.

Jun 16 2015
Jun 16

After 5 months of development, RNG is ready for its first alpha release: a milestone where there are no major issues, and the schema is not anticipated to change between now and v1.0.

RNG is a Drupal 8 module implementing a core toolset for allowing users to register for events.

Jun 02 2015
Jun 02

In April 2015, NASA unveiled a brand new look and user experience for NASA.gov. This release revealed a site modernized to 1) work across all devices and screen sizes (responsive web design), 2) eliminate visual clutter, and 3) highlight the continuous flow of news updates, images, and videos.

With its latest site version, NASA—already an established leader in the digital space—has reached even higher heights by being one of the first federal sites to use a “headless” Drupal approach. Though this model was used when the site was initially migrated to Drupal in 2013, this most recent deployment rounded out the endeavor by using the Services module to provide a REST interface, and ember.js for the client-side, front-end framework.

Implementing a “headless” Drupal approach prepares NASA for the future of content management systems (CMS) by:

  1. Leveraging the strength and flexibility of Drupal’s back-end to easily architect content models and ingest content from other sources. As examples:

  • Our team created the concept of an “ubernode”, a content type which homogenizes fields across historically varied content types (e.g., features, images, press releases, etc.). Implementing an “ubernode” enables easy integration of content in web services feeds, allowing developers to seamlessly pull multiple content types into a single, “latest news” feed. This approach also provides a foundation for the agency to truly embrace the “Create Once, Publish Everywhere” philosophy of content development and syndication to multiple channels, including mobile applications, GovDelivery, iTunes, and other third party applications.

  • Additionally, the team harnessed Drupal’s power to integrate with other content stores and applications, successfully ingesting content from blogs.nasa.gov, svs.gsfc.nasa.gov, earthobservatory.nasa.gov, www.spc.noaa.gov, etc., and aggregating the sourced content for publication.

  2. Optimizing the front-end by building with a client-side, front-end framework, as opposed to a theme. For this task, our team chose ember.js, distinguished by both its maturity as a framework and its emphasis of convention over configuration. Ember embraces model-view-controller (MVC), and also excels at performance by batching updates to the document object model (DOM) and bindings.

In another stride toward maximizing “Headless” Drupal’s massive potential, we configured the site so that JSON feed records are published to an Amazon S3 bucket as an origin for a content delivery network (CDN), ultimately allowing for a high-security, high-performance, and highly available site.

Below is an example of how the technology stack which we implemented works:

Using ember.js, the NASA.gov home page requests a list of nodes of the latest content to display; Drupal provides this list as a JSON feed of nodes.

Ember then retrieves the specific content for each node. Again, Drupal provides this content as a JSON response, stored on Amazon S3.

Finally, Ember distributes these results into the individual items on the home page.

The result? A NASA.gov architected for the future. It is worth noting that upgrading to Drupal 8 can be done without reconfiguring the ember front-end. Further, migrating to another front-end framework (such as Angular or Backbone) does not require modification of the Drupal CMS.

May 14 2015
May 14

As Drupal has evolved, it has become more than just a CMS. It is now a fully fledged Web Development Platform, enabling not just sophisticated content management and digital marketing capabilities but also any number of use cases involving data modelling and integration with an endless variety of applications and services. In fact, if you need to build something which responds to an HTTP request, then you can pretty much find a way to do it in Drupal.

“Just because you can, doesn’t mean you should.”

However, the old adage is true. Just because you can use a sledgehammer to crack a nut, that doesn’t mean you’re going to get the optimal nut-consumption experience at the end of it.

Drupal’s flexibility can lead to a number of different integration approaches, all of which will “work”, but some will give better experiences than others.

On the well trodden development path of Drupal 8, giant steps have been taken in making the best of what is outside of the Drupal community and “getting off the island”, and exciting things are happening in making Drupal less of a sledgehammer, and more of a finely tuned nutcracker capable of cracking a variety of different nuts with ease.

In this post, I want to explore ways in which Drupal can create complex systems, and some general patterns for doing so. You’ll see a general progression in line with that of the Drupal community in general. We’ll go from doing everything in Drupal, to making the most of external services. No option is more “right” than others, but considering all the options can help make sure you pick the approach that is right for you and your use case.

Build it in Drupal

One option, and probably the first that occurs to many developers, is to implement the business logic, data structures, and administration of a new application or service using Drupal and its APIs. After all, the Entity API and the schema system give us the ability to model custom objects and store them in the Drupal database; Views gives us the means to retrieve that data and display it in a myriad of ways. Modules like Rules, Features, and CTools provide extensive options for implementing the specific business rules needed to model your domain-specific data and application needs.

This is all well and good, and uses the strengths of Drupal core and the wide range of community contributed modules to enable the construction of complex sites with limited amounts of coding required, and little need to look outside Drupal. The downside can come when you need to scale the solution. Depending on how the functionality has been implemented you could run into performance problems caused by large numbers of modules, sub-optimal queries, or simply the amount of traffic heading to your database - which despite caching strategies, tuning and clustering is always likely to end up being the performance bottleneck of your Drupal site.

It also means your implementation is tightly coupled to Drupal - and worse, most probably the specific version of Drupal you’ve implemented. With Drupal 8 imminent this means you’re most likely increasing the amount of re-work required when you come to upgrade or migrate between versions.

It’s all PHP

Drupal sites can benefit hugely from being part of the larger PHP ecosystem. With Drush make, the Libraries API, Composer Manager, and others providing the means of pulling external, non-Drupal PHP libraries into a Drupal site, there are huge opportunities for building complexity in your Drupal solution without tying yourself to specific Drupal versions, or even to Drupal at all. This could become particularly valuable as we enter the transition period between Drupal 7 and 8.

In this scenario, custom business logic can be provided in a framework agnostic PHP library and a Naked Module approach can be used to provide the glue between that library and Drupal - utilising Composer to download and install dependencies.
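
As a minimal sketch of that glue (every name here is hypothetical), a Drupal 7 module might do little more than route a path to the library:

<?php
// mymodule.module: hypothetical glue between Drupal and a
// framework-agnostic library installed via Composer.

/**
 * Implements hook_menu().
 */
function mymodule_menu() {
  $items['quote'] = array(
    'title' => 'Quote of the day',
    'page callback' => 'mymodule_quote_page',
    'access arguments' => array('access content'),
  );
  return $items;
}

/**
 * Page callback: all business logic lives in the external library.
 */
function mymodule_quote_page() {
  // Composer Manager (or a manual require of vendor/autoload.php)
  // makes the library's classes autoloadable.
  $generator = new \Acme\Quotes\QuoteGenerator();
  return check_plain($generator->random());
}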

This approach is becoming more and more widespread in the Drupal community with Commerce Guys (among others) taking a libraries first approach to many components of Commerce 2.x which will have generic application outside of Drupal Commerce.

The major advantage of building framework agnostic libraries is that if you ever come to re-implement something in another framework, or a new version of Drupal, the effort of migrating should be much lower.

Integrate

Building on the previous two patterns, one of Drupal’s great strengths is how easy it is to integrate with other platforms and technologies. This gives us great opportunity to implement functionality in the most appropriate technology and then simply connect to it via web services or other means.

This can be particularly useful when integrating with “internal” services - services that you don’t intend to expose to the general public (but may still be external in the sense of being SaaS platforms or other partners in a multi-supplier ecosystem). It is also a useful way to start using Drupal as a new part of your ecosystem, consuming existing services and presenting them through Drupal to minimise the amount of architectural change taking place at one time.

Building a solution in this componentised and integrated manner gives several advantages:

  • Separation of concerns - the development, deployment and management of the service can be run by a completely separate team working in a different bounded context. It also ensures logic is nicely encapsulated and can be changed without requiring multiple front-end changes.
  • Horizontal scalability - implementing services in alternate technologies lets us pick the most appropriate for scalability and resilience.
  • Reduce complex computation taking place in the web tier and let Drupal focus on delivering top quality web experience to users. For example, rather than having Drupal publish and transform data to an external platform, push the raw data into a queue which can be consumed by “non-Drupal” processes to do the transform and send.
  • Enable re-use of business logic outside of the web tier, on other platforms or with alternative front ends.

Nearly-Headless Drupal

Headless Drupal is a phrase that has gained a lot of momentum in the Drupal community. The basic concept is that Drupal responds purely with RESTful endpoints, while completely independent front-end code, built with frameworks such as Angular.js, renders the data, fully separating content from presentation.

Personally, I prefer to think of a “nearly headless” approach - where Drupal is still responsible for the initial instantiation of the page, and a framework like Angular is used to control the dynamic portion of the page. This lets Drupal manage the things it’s good at, like menus, page layout and content management, whilst the “app” part is dropped into the page as another re-usable component and only takes over a part of the page.

For an example use case, you may have business requirements to provide data from a service which is also exposed as an API for consumption by external parties or mobile apps. Rather than building this service in Drupal (possible, but not necessarily optimal for performance and management), it could be implemented as a standalone service that Drupal calls as just another consumer of the API.

From an Angular.js (or insert frontend framework of choice) app, you would then talk directly to the API, rendering the responses dynamically on the front end, but still use Drupal to build everything and render the remaining elements of the page.

Summing up

As we’ve seen, Drupal is an incredibly powerful solution, providing the capability for highly-consolidated architectures encapsulated in a single tool, a perfect enabler for projects with low resources and rapid development timescales. It’s also able to take its place as a mature part of an enterprise architecture, with integration capabilities and rich programming APIs able to make it the hub of a Microservices or Service Oriented Architecture.

Each pattern has pros and cons, and what is “right” will vary from project to project. What is certain though, is that Drupal’s true strength is in its ability to play well with others and do so to deliver first class digital experiences.

New features in Drupal 8 will only continue to make this the case, with more tools in core to provide the ability to build rich applications, RESTful APIs for entities out of the box allowing consumption of that data on other platforms (or in a headless front-end), improved HTTP request handling with Guzzle improving options for consuming services outside of Drupal, and much more.
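
As a small taste of that last point, fetching an external service from Drupal 8 with the bundled Guzzle client looks roughly like this (the URL is hypothetical):

$client = \Drupal::httpClient();
$response = $client->get('https://api.example.com/v1/articles');
$articles = json_decode((string) $response->getBody(), TRUE);

From there, mapping the decoded data into render arrays or entities is ordinary Drupal work.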
