Local Drupal development using Docker

One of the biggest arguments for using Docker to develop your app is having isolated environments for different setups (the classic case of two projects needing two different versions of PHP). Even that is sometimes not convincing enough. Where I find Docker really useful is in building production parity locally (think reproducing a production bug on your own machine). If you've faced this problem and want to solve it, read on.

I wanted to switch over to Docker when I first heard about it. It sounded like Docker would address all the problems posed by Vagrant without compromising the benefits. I adopted Docker4Drupal, a comprehensive Docker-based setup open sourced by the nice folks at Wodby. I even use Lando these days; it's quite handy if you're working on non-Drupal projects as well. So why would I spin my own Docker setup with all these wonderful tools around?

With the rationale for "reinventing the wheel" out of the way, let's build our custom setup.

First, a humble docker-compose file to get the setup up and running. As an aside, a docker-compose file is a declarative configuration (in YAML) of how your web stack should roll out as Docker containers. We shall use the v2 compose file format, as it's more widespread. I'll update this post for a v3 version sometime in the future.

We're building a LEMP stack, which involves a web app (PHP + Nginx) talking to a DB service (MariaDB). Docker Compose serves as a single source-of-truth specification for the stack. It also puts all the associated containers on the same Docker network, so it's easier for the services to discover each other.

As we are particular about having the same configuration in both local and production, we use docker-compose inheritance to keep the commonalities in one place and add environment-specific configuration in a respective file. Our setup consists of 3 containers: PHP, Nginx and MariaDB.

Here's how the base compose file looks.
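As a reference point, here's a minimal sketch of what a base docker-compose.yml for this stack could look like; the build path and image tags are my assumptions, not prescriptions:

```yaml
version: '2'

services:
  php:
    build: ./deploy/php        # assumed path to the PHP Dockerfile shown below
    depends_on:
      - mariadb

  nginx:
    image: nginx:1.13-alpine   # any recent official Nginx image works
    depends_on:
      - php

  mariadb:
    image: mariadb:10.1
```

Everything environment-specific (ports, volumes, credentials) is deliberately left out here; it lives in the local and production override files we build next.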

We can add more optional containers, like PHPMyAdmin, if needed.
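For instance, a hypothetical phpmyadmin service (using the official phpmyadmin/phpmyadmin image) could be appended to the local compose file like so:

```yaml
services:
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    environment:
      PMA_HOST: mariadb   # the service name doubles as a hostname on the compose network
    ports:
      - '8080:80'
```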

We have the option to use a pre-built Docker image or build our own. This is based on a few considerations:

  • PHP

    We start with the `php:7.1-fpm` image as the base. This strikes a good balance between bleeding edge and stability. Apart from installing the basic Drupal-related dependencies, we install some binaries like Git, Wget and Composer. We also configure a working directory, /code, where the code gets injected. There is room for improvement, like running the main processes (Nginx and PHP-FPM) as non-root users, but that's the topic of a later blog post.

    Our PHP Docker image looks like this,

    FROM php:7.1-fpm
    
    # zlib1g-dev is required to compile the zip extension
    RUN apt-get update \
        && apt-get install -y libfreetype6-dev libjpeg62-turbo-dev libpng-dev zlib1g-dev wget mysql-client git
    
    RUN docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
        && docker-php-ext-install gd pdo pdo_mysql opcache zip
    
    # Install Composer
    RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
    
    WORKDIR /code
    
    

    Also, I'd prefer that search engines don't crawl my non-production sites. I could go with a .htaccess-based approach for this, but it apparently incurs a performance penalty. Instead, I write some extra stuff in my Dockerfile to address this.

    # NON_PROD must be declared as a build argument to be visible during the build
    ARG NON_PROD
    RUN if [ -n "$NON_PROD" ] ; then printf "User-agent: *\nDisallow: /\nNoindex: /\n" > ./web/robots.txt; fi
    
    
  • MariaDB

    There are 2 things to take care of when building the database containers.

    • Injecting database credentials and exposing them to the other containers that need them,
    • Persisting the database even if containers are killed and restarted.

    For the former, we use a .env file, which Docker Compose reads automatically. This supplies all the environment variables our containers need. Here's how our .env will look:

    MYSQL_ROOT_PASSWORD=supersecretroot123
    MYSQL_DATABASE=drupaldb
    MYSQL_USER=drupaluser
    MYSQL_PASSWORD=drupaluserpassword123
    
    

    Let's tweak our local docker compose file to pick up these variables.

    services:
      mariadb:
        extends:
          file: docker-compose.yml
          service: mariadb
        environment:
          MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
          MYSQL_DATABASE: ${MYSQL_DATABASE}
          MYSQL_USER: ${MYSQL_USER}
          MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    
    

    Note the use of the extends construct, which picks up everything else about the mariadb container from the base compose file and "extends" it. While we are at it, let's give our database container persistence by mounting volumes. We map 2 volumes: one for storing the actual database data (which resides at /var/lib/mysql inside the container), and another to supply init scripts to MySQL. The official MariaDB image ships with a way to initialize the database with data: any scripts placed at /docker-entrypoint-initdb.d are executed on first boot. We will see that this is pretty useful for building production replicas of our Drupal site. Let's add those volumes to the local compose file.
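    To see the init hook in action, you can drop any .sql (or .sql.gz) file into mariadb-init before the first boot; MariaDB runs the scripts in alphabetical order when its data directory is empty. A small sketch, with an illustrative file name:

    ```shell
    # Create the init directory next to the compose file and seed it.
    # On first boot, the MariaDB entrypoint runs every .sql/.sql.gz file
    # found under /docker-entrypoint-initdb.d inside the container.
    mkdir -p mariadb-init
    printf 'CREATE TABLE IF NOT EXISTS sanity_check (id INT);\n' > mariadb-init/00-sample.sql
    ```

    In practice, this is where a (sanitized) production database dump would go.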

    services:
      mariadb:
        extends:
          file: docker-compose.yml
          service: mariadb
        environment:
          # ...
        volumes:
          - ./mariadb-init:/docker-entrypoint-initdb.d
          - ./mariadb-data:/var/lib/mysql
    
    
  • Nginx config

    The Nginx container requires 2 things,

    • where the code resides inside the container.
    • the nginx configuration for running our Drupal site.

    Both these inputs can be supplied by mounting volumes.

    I picked up the Nginx configuration from here. I separated it into 2 files (a generic one, and a Drupal-specific file included by it) for a cleaner, more modular setup. This could be a single file as well.
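    As an illustration only (the real configuration came from the linked source), the split could look like this, with paths matching the volume mounts in the compose file:

    ```nginx
    # deploy/nginx/config/local -- generic server config
    server {
        listen 80;
        server_name localhost;
        root /code/web;
        index index.php;

        # Drupal-specific location rules live in their own, reusable file
        include /etc/nginx/include/drupal;
    }
    ```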

    Here's how my Nginx container spec looks in the local compose file,

    nginx:
      extends:
        file: docker-compose.yml
        service: nginx
      volumes:
        - ./:/code
        - ./deploy/nginx/config/local:/etc/nginx/conf.d/default.conf
        - ./deploy/nginx/config/drupal:/etc/nginx/include/drupal
      ports:
        - '8000:80'
    
    

    Finally, I map port 8000 on the host to port 80, where Nginx listens inside the container. Feel free to change 8000 to any other port you find appropriate.

  • First spin at local

    Before we boot our containers, we have to make a few small tweaks to our PHP setup. We mount the code directory inside the PHP container because the PHP-FPM process requires it. Also, as a 12-factor app best practice, we expose DB-specific details as environment variables.

    Here's how our PHP container spec looks in the local compose file.

    php:
      extends:
        file: docker-compose.yml
        service: php
      volumes:
        - ./:/code
      environment:
        PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
        PHP_FPM_CLEAR_ENV: "no"
        DB_HOST: mariadb
        DB_USER: ${MYSQL_USER}
        DB_PASSWORD: ${MYSQL_PASSWORD}
        DB_NAME: ${MYSQL_DATABASE}
        DB_DRIVER: mysql
        NON_PROD: 1
    
    

    Let's update our settings.php to read DB credentials from environment variables.

    $databases['default']['default'] = array (
        'database' => getenv('DB_NAME'),
        'username' => getenv('DB_USER'),
        'password' => getenv('DB_PASSWORD'),
        'prefix' => '',
        'host' => getenv('DB_HOST'),
        'port' => '3306',
        'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
        'driver' => 'mysql',
    );
    
    

    NOTE: the settings.php file is, by deliberate design, blacklisted by Git from being checked in, as it might contain sensitive information about your environment like passwords or API keys. For this setup to work, you will have to check in the settings.php file and doubly ensure that it contains no sensitive information; anything sensitive should be injected into your app via environment variables, as we did for the DB credentials above.
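    For example, a hypothetical API key (the SOME_API_KEY name is illustrative) would follow the same pattern: define it in the uncommitted .env file, pass it through the compose file, and read it with getenv() in settings.php.

    ```yaml
    services:
      php:
        environment:
          SOME_API_KEY: ${SOME_API_KEY}   # value lives only in the uncommitted .env file
    ```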

  • Booting our app

    Let's boot our full docker stack.

    $ docker-compose -f local.yml up --build -d
    
    

    To check the logs, run,

    $ docker-compose -f local.yml logs <container-name>
    
    

    The app can be accessed at localhost:8000, or whatever port you supplied for Nginx in the local compose file. Make sure you run composer install before setting up Drupal.

    $ docker-compose -f local.yml run php composer install
    
    

    To run drush, you have to supply the full path of the drush executable and the root directory where Drupal is installed.

    $ docker-compose -f local.yml run php ./vendor/bin/drush --root=/code/web pm-list
    
    

    If you add /code/vendor/bin to PATH when building the container and create a drush alias with /code/web as the root directory, you can run drush in a more elegant manner, but that's totally optional.
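    If you want that convenience, the PATH tweak is a one-liner in the PHP Dockerfile (a sketch, assuming the /code working directory from above):

    ```dockerfile
    # Put project-local binaries such as drush on the PATH
    ENV PATH="/code/vendor/bin:${PATH}"
    ```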

    Finally, to stop the setup, we run

    $ docker-compose -f local.yml down
    
    

    That's pretty much how you run Drupal 8 on Docker locally. We shall see how to translate this into a production setup in the next installment.
