Feb 21 2017

My last post talked about how Docker microcontainers speed up the software development workflow. Now it’s time to dive into how all this applies to Drupal.

I created a collection of Docker configuration files and scripts to make it easy to run Drupal. If you want to try it out, follow the steps in the README file.

The repository is designed around the microcontainers concept, so each Drupal site ends up with 3 containers of its own (Apache, MySQL, and Drush), which are linked together to run the application. If you want to serve a new site, you create a separate set of containers.

In theory you could re-use containers for different web applications. In practice, however, Docker containers are resource-cheap and easy to spin up, so it's less work to run separate containers for separate applications than it is to configure each application to play nice with the other applications running in the same container (e.g. configuring VirtualHosts and port mappings). Or at least this is what my colleague M Parker believes.

Plus, configuring applications to play nice with each other in the same container kind of violates the “create once, run anywhere” nature of Docker.

How it works

My repository uses the docker-compose program. Docker-compose is controlled by the docker-compose.yml file, which tells Docker which containers to start, how to network them together so they serve Drupal, and how to connect them to the host machine: sharing the Drupal repository filesystem with the containers, and mapping a port on the host machine to a port in one of the containers.
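Here is a minimal sketch of what such a docker-compose.yml might look like; the image names, port mapping, and credentials below are assumptions for illustration, not necessarily what my repository uses:

    # Hypothetical docker-compose.yml (v1 format) for a single Drupal site.
    web:
      image: drupal-apache          # assumed Apache+PHP image name
      ports:
        - "8080:80"                 # map host port 8080 to Apache's port 80
      volumes:
        - .:/var/www/html           # share the Drupal repository with the container
      links:
        - db                        # gives "web" a "db" hostname on the network
    db:
      image: mysql
      environment:
        MYSQL_ROOT_PASSWORD: root   # assumed credentials
    drush:
      image: mparker17/mush
      volumes:
        - .:/var/www/html
      working_dir: /var/www/html    # run drush as if from the Drupal root
      links:
        - db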

A useful tip to remember is that docker-compose ps will tell you the port mappings as shown in the screenshot below. This is useful if you don’t map them explicitly to ports on the host machine.

Docker terminal

Networking

If you’ve ever tried setting up a bunch of containers manually (without docker-compose), it is worth noting (and not very well documented in the Docker docs, unfortunately) that you don’t need to explicitly map port 3306:3306 for the MySQL container. Docker-compose sets up a miniature network for the containers run from the same docker-compose.yml, and it also sets up hostnames between those containers. This means the web container can refer to the MySQL container by its service hostname, and, even if port 3306 is mapped to some random port on the host machine, the web container can still talk to the database container on port 3306.

Note that in this case the container running MySQL is named db, so when you’re installing Drupal, on step 4 (“Database configuration”) of the Drupal 7 install script, you have to expand “Advanced options” and change “Database host” from localhost to db!
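For reference, here is roughly what the resulting connection settings in sites/default/settings.php look like; the database name and credentials are placeholders:

    // Drupal 7 database settings: the host is the "db" container, not localhost.
    $databases['default']['default'] = array(
      'driver' => 'mysql',
      'database' => 'drupal',
      'username' => 'drupal',
      'password' => 'drupal',
      'host' => 'db',   // the docker-compose service name
      'port' => 3306,
    );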

Filesystem

It is possible to put the Drupal filesystem into a container (which you might want to do if you were deploying a container to a public server). However, it doesn’t really make sense for development, where you’re changing the files frequently.

To get around this for a development environment, we mount the current folder (‘.’) to /var/www/html in all three containers, using the ‘volumes’ directive in the docker-compose.yml file. The ‘working_dir’ directive says “when you run the Drush command in the Drush container, pretend it’s running from /var/www/html”, which is the equivalent of running ‘cd /var/www/html’ before a drush command.

So when you run the Drush command in the Drush container, it sees that it’s currently in a Drupal directory and proceeds to load the database connection information from sites/default/settings.php, which tells it how to connect to the MySQL server on the db container with the correct credentials. (Recall that the links directive makes sure the drush container can reach the db container, so it can connect on port 3306.)

The Drush container

The drush container is a bit special because it runs a single command, and is re-created every time a Drush command is used.

If you look at step 9 of my Docker configuration files you’ll see it says…

  • Run Drush commands with:
USER_ID=$(id -u) docker-compose run --rm drush $rest_of_drush_command

… i.e. docker-compose run --rm drush starts the container named drush and passes it $rest_of_drush_command.

Docker terminal containers

If you look at the Dockerfile for mparker17’s image, you’ll see it contains a line saying ‘ENTRYPOINT [“drush”]’. ENTRYPOINT is a variant of the CMD instruction which passes all the rest of the ‘docker run’ parameters to the command specified by the ENTRYPOINT line.
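In other words, the tail of that Dockerfile looks something like this; only the ENTRYPOINT line is taken from the actual image, and the base image and install step are assumptions:

    # Hypothetical sketch of a Drush image.
    FROM php:5.6-cli
    # Install Drush as a phar (install method assumed).
    ADD https://github.com/drush-ops/drush/releases/download/8.1.2/drush.phar /usr/local/bin/drush
    RUN chmod +x /usr/local/bin/drush
    # All remaining "docker run" arguments are handed to drush.
    ENTRYPOINT ["drush"]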

So what happens when you run that ‘docker-compose run’ line is that it creates a new container from the ‘mparker17/mush’ image, with all the configuration from the ‘docker-compose.yml’ file. When that container runs, it automatically runs the ‘drush’ command, and docker-compose passes ‘$rest_of_drush_command’ to it. When the ‘drush’ command is finished, the container stops, and the ‘--rm’ flag we specified deletes the container afterwards.

Running USER_ID=$(id -u) before a command sets an environment variable that persists for that command; i.e. when docker-compose runs, an environment variable $USER_ID exists, but it goes away when docker-compose is finished running. You can leave out the USER_ID=$(id -u) prefix if you add that variable to your shell’s configuration. Essentially, this environment variable sets the user account that the Drush command runs as; if you don’t specify one, Docker defaults to root.
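For example, adding this line to your ~/.bashrc or ~/.zshrc makes the variable available every time docker-compose runs:

    # Always expose your host user ID to docker-compose.
    export USER_ID=$(id -u)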

The main reason I do this is so that if I ask Drush to make changes to the filesystem (e.g. download a module, run drush make, etc.), the files end up owned by me, not root (i.e. I don’t have to go around fixing ownership and permissions after I run the drush command).

It may only be necessary on Windows and Mac, because the virtual machine that Docker runs in on those platforms uses different user IDs. If you run a Docker command from a Linux machine, your user ID is already correct; but a Docker command on a Mac or Windows machine runs with your host user ID (e.g. 501) and gets passed to the Docker VM’s ‘docker’ user (which runs as user 1000), so problems arise unless you’re explicit about it.

Acknowledgements

Lastly, I would like to thank Matt Parker, who has been mentoring me since I began setting up Docker and showing me better ways to do it. He also recommends reading the Docker book if you want to explore this further.

Feb 11 2017

Docker, a container-based technology which I just came across, is great for setting up environments. It was first introduced to the world by Solomon Hykes, founder and CEO of dotCloud, at the Python Developers Conference in Santa Clara, California, in March 2013. The project was quickly open-sourced and made available on GitHub, where anyone can download and contribute to it.

Containers vs. Virtual Machines

You might be wondering, “What is the difference between Containers (like Docker) and Virtual Machines”?

Well, virtual machines (VMs) work by creating a virtual copy of a computer’s hardware and running a full operating system on that virtual hardware. Each new VM you create results in a new copy of that virtual hardware, which is computationally expensive. Many people use VMs because they let you run an application in a separate environment which can have its own versions of software and settings, different from the host machine.

Container technologies like Docker, on the other hand, isolate the container’s environment, software, and settings in a sandbox, but all sandboxes share the same operating-system kernel and hardware as the host computer. Each new container results in a new sandbox. This lets us pack a lot more applications into a single physical server than virtual machines allow.

Docker containers are isolated enough that the root process in a container cannot see the host machine’s processes or filesystem. However, it may still be able to make certain system calls to the kernel that a regular user could not, because in Docker the kernel is shared with the host machine. This sharing is also why Docker containers are not virtual machines, and thus a lot faster.

Note, however, that Docker relies on technology which is only available in the Linux kernel. When you run Docker on a Windows or Macintosh host machine, Docker and all its containers run in a virtual machine.

That said, there are two projects trying to bring Docker-style containers natively to OS X: Dlite and Xhyve. But last I heard, these projects were still very experimental, so consider yourself warned.

On a Mac host machine, it’s probably good to suspend containers when you are done with them, because they run in a virtual machine and that has a lot of overhead. On a Linux host machine there is no need to suspend them, because they create little additional overhead (no more than, say, MAMP).

Docker is a tool that promises to scale into any environment, streamlining the workflow and responsiveness of agile software organizations.

Docker’s Architecture

This is a diagram explaining the basic client-server architecture that Docker uses.

Docker architecture

Image source: http://www.docker.com

Important Terminology

  • Docker daemon: the Docker engine, which runs on the host machine as shown in the image above.
  • Docker client: the Docker CLI, used to interact with the daemon.

Workflow components

  • Docker image: A read-only disk image in which the environment and your application reside.
  • Docker container: A read/writeable instance of an image, which you can start, stop, move, and delete.
  • Docker registry: A public or private repository to store images.
  • Dockerfile: Instructions for how to build a single image. You can think of a Dockerfile as a kind of Vagrantfile, or a single Chef cookbook, Ansible playbook, or Puppet manifest; a minimal example follows this list.
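For illustration, here is a minimal, hypothetical Dockerfile; the base image name is an assumption, and any base image works the same way:

    # Build on top of an existing image that already contains PHP and Apache.
    FROM php:5.6-apache
    # Copy the application from the build directory into the web root.
    COPY . /var/www/html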

Microservices

Because Docker allows you to run so many containers at the same time, it has popularized the idea of microservices: a collection of containers, each of which contains a single program, all of which work together to run a complex application (e.g. Drupal).

Taking Drupal as an example, every Drupal site has at least two dependencies: an HTTP server (Apache, Nginx, etc.) running PHP, and MySQL. The microservices approach packages Apache+PHP separately from MySQL, as opposed to most Drupal virtual machine images, which bundle them together into the same VM. For more complicated setups, you could add another container for Solr, another for LDAP, and so on.

For me, the main advantage of using microservices is that it’s easier to update or swap one dependency of an application without affecting the rest of it. Another way of looking at this is that microcontainers make it easier to modify one piece without waiting a long time for the virtual machine to rebuild.

When I was using a virtual machine on a particularly complex project, if I needed to make a change to a setting, I had to make that change in the Puppet config, then run vagrant destroy && vagrant up and wait two hours for it to tell me that the new configuration wasn’t compatible with some other piece of the system. At which point I had to repeat the two hour process, which wasted a lot of time.

If I had been using Docker (properly), I could have just changed the setting for that one program, rebuilt that program’s container (5 seconds), and not had to worry that one piece of the machine needed at least Java 6 while another piece could not work without Java 5.

Now that you know the possibilities with Docker, watch this space to find out how all this applies to Drupal.

Aug 25 2015

Currently Drupal has naming conventions for branches and tags in Git for contrib modules. These are based on the core version followed by the module version: for example, 7.x-1.x creates a dev release of version 1 for Drupal 7, while 7.x-2.3 creates stable release 2.3 for Drupal 7.

As we head towards Drupal 8 there has been a lot of talk about versioning for contrib. Core has already moved to semantic versioning (semver), which is now widely adopted by a lot of software, while contrib is still on the old version numbering format. There is a lot of discussion about switching to something like semantic versioning for contrib. It’d be ideal to keep the core version somewhere in the number, and there have been many suggestions.

My preferred option, and it seems the favourite so far, is 8.2.3.0. This is much like semver, but everything is bumped down a slot to make way for the core version: core.major.minor.patch. The only possible issue is when using Composer for contrib modules: Composer will read the core version number as the major version, and the major version number as the minor version. I feel that this is an OK compromise, and we as developers can take it into account when writing the constraints in composer.json.
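For instance, under the 8.2.3.0 scheme, a constraint like the following (with a hypothetical package name) would pin core 8 and major version 2 while floating minor and patch releases, even though Composer reads the leading 8 as the semver major:

    {
      "require": {
        "drupal/example_module": "8.2.*"
      }
    }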

Another versioning-related issue is that the version number is currently added to a contrib module’s info file by the packager that creates the module zip or tarball files. If more people pull modules straight from Git (via Composer), they won’t have this version number, causing the core Update module not to work.

I say we should start asking module maintainers to add the version number to the info file themselves. They already have to add the version number to the tag or branch; surely it’s no extra effort to add it to the info file, and it saves so much effort trying to find a fancy programmatic solution.
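For example, the line the packager currently adds, which a maintainer could just as easily commit by hand, looks like this (using today’s 7.x format):

    ; In example_module.info, alongside the name and description lines.
    version = 7.x-2.3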

Update:
As part of this I have opened another issue on d.o to discuss how we can make the Update module work for modules installed via Git (including via Composer).

Feb 26 2015

This is the third in a series of blog posts about the relationship between Drupal and Backdrop CMS, a recently-released fork of Drupal. The goal of the series is to explain how a module (or theme) developer can take a Drupal project they currently maintain and support it for Backdrop as well, while keeping duplicate work to a minimum.

  • In part 1, I introduced the series and showed how for some modules, the exact same code can be used with both Drupal and Backdrop.
  • In part 2, I showed what to do when you want to port a Drupal module to a separate Backdrop version and get it up and running on GitHub.
  • In part 3 (this post), I'll wrap up the series by explaining how to link the Backdrop module to the Drupal.org version and maintain them simultaneously.

Linking the Backdrop Module to the Drupal.org Version and Maintaining Them Simultaneously

In part 2 I took a small Drupal module that I maintain (User Cancel Password Confirm) and ported it to Backdrop. In the end, I wound up with two codebases for the same module, one on Drupal.org for Drupal 7, and one on GitHub for Backdrop.

However, the two codebases are extremely similar. When I fix a bug or add a feature to the Drupal 7 version, it's very likely that I'll want to make the exact same change (or at least an extremely similar one) to the Backdrop version. Wouldn't it be nice if there were a way to pull in changes automatically without having to do everything twice manually?

If you're a fairly experienced Git user, you might already know that the answer is "yes". But if you're not, the process isn't necessarily straightforward, so I'm going to document it step by step here.

Overall, what we're doing is simply taking advantage of the fact that when we imported the Drupal.org repository into GitHub in part 2, we pulled in the entire history of the repository, including all of the Drupal commits. Because our Backdrop repository knows about these existing commits, it can also figure out what's different and pull in the new ones when we ask it to.

In what follows, I'm assuming a workflow where changes are made to the Drupal.org version of the module and pulled into Backdrop later. However, it should be relatively straightforward to reverse these instructions to do it the other way around (or even possible, but perhaps less straightforward, to have a setup where you can do it in either direction).

  1. To start off, we need to make our local clone of the Backdrop repository know about the Drupal.org repository. (A local clone is obtained simply by getting the "clone URL" from the GitHub project page and copying it locally, for example with the command shown below.)
    git clone [email protected]:backdrop-contrib/user_cancel_password_confirm.git
    

    First let's check what remote repositories it knows about already:

    $ git remote -v
    origin    [email protected]:backdrop-contrib/user_cancel_password_confirm.git (fetch)
    origin    [email protected]:backdrop-contrib/user_cancel_password_confirm.git (push)
    

    No surprise there; it knows about the GitHub version of the repository (the "origin" repository that it was cloned from).

    Let's add the Drupal.org repository to this list and check again:

    $ git remote add drupal http://git.drupal.org/project/user_cancel_password_confirm.git
    $ git remote -v
    drupal    http://git.drupal.org/project/user_cancel_password_confirm.git (fetch)
    drupal    http://git.drupal.org/project/user_cancel_password_confirm.git (push)
    origin    [email protected]:backdrop-contrib/user_cancel_password_confirm.git (fetch)
    origin    [email protected]:backdrop-contrib/user_cancel_password_confirm.git (push)
    

    The URL I used here is the same one I used in part 2 to import the repository to GitHub (that is, it's the public-facing Git URL of my project on Drupal.org, available from the "Version control" tab of the drupal.org project page, after unchecking the "Maintainer" checkbox - if it’s present - so that the public URL is displayed). I've also chosen to give this repository the name "drupal". (Usually the convention is to use "upstream" for something like this, but in GitHub-land "upstream" is often used in a slightly different context involving development forks of one GitHub repository to another. So for clarity, I'm using "drupal" here. You can use anything you want to.)

  2. Next let's pull in everything from the remote Drupal repository to our local machine:
    $ git fetch drupal
    remote: Counting objects: 4, done.
    remote: Compressing objects: 100% (2/2), done.
    remote: Total 3 (delta 0), reused 0 (delta 0)
    Unpacking objects: 100% (3/3), done.
    From http://git.drupal.org/project/user_cancel_password_confirm
    * [new branch]          7.x-1.x -> drupal/7.x-1.x
    * [new branch]          master  -> drupal/master
    * [new tag]             7.x-1.0-rc1 -> 7.x-1.0-rc1
    

    You can see it has all the branches and tags that were discussed in part 2 of this series. However, although I pulled the changes in, they are completely separate from my Backdrop code (the Backdrop code lives in "origin" and the Drupal code lives in "drupal").

    If you want to see a record of all changes that were made to port the module to Backdrop at this point, you could run git diff drupal/7.x-1.x..origin/1.x-1.x to examine them.

  3. Now let's fix a bug on the Drupal.org version of the module. I decided to do a simple documentation fix: Fix documentation of form API functions to match coding standards

    I made the code changes on my local checkout of the Drupal version of the module (which I keep in a separate location on my local machine, specifically inside the sites/all/modules directory of a copy of Drupal so I can test any changes there), then committed and pushed them to Drupal.org as normal.

  4. Back in my Backdrop environment, I can pull those changes in to the "drupal" remote and examine them using git log:
    $ git fetch drupal
    remote: Counting objects: 5, done.
    remote: Compressing objects: 100% (3/3), done.
    remote: Total 3 (delta 2), reused 0 (delta 0)
    Unpacking objects: 100% (3/3), done.
    From http://git.drupal.org/project/user_cancel_password_confirm
      7a70138..997d82d  7.x-1.x     -> drupal/7.x-1.x
    
    $ git log origin/1.x-1.x..drupal/7.x-1.x
    commit 997d82dce1a4269a9cee32d3f6b2ec2b90a80b33
    Author: David Rothstein 
    Date:   Tue Jan 27 13:30:00 2015 -0500
    
            Issue #2415223: Fix documentation of form API functions to match coding standards.
    

    Sure enough, this is telling me that there is one commit on the Drupal 7.x-1.x version of the module that is not yet on the Backdrop 1.x-1.x version.

  5. Now it's time to merge those changes to Backdrop. We could just merge the changes directly and push them to GitHub and be completely done, but I'll follow best practice here and do it on a dedicated branch with a pull request. (In reality, I might be doing this for a more complicated change than a simple documentation fix, or perhaps with a series of Drupal changes all at once rather than a single one. So I might want to formally review the Drupal changes before accepting them into Backdrop.)

    By convention I'm going to use a branch name ("drupal-2415223") based on the Drupal.org issue number:

    $ git checkout 1.x-1.x
    Switched to branch '1.x-1.x'
    
    $ git checkout -b drupal-2415223
    Switched to a new branch 'drupal-2415223'
    
    $ git push -u origin drupal-2415223
    Total 0 (delta 0), reused 0 (delta 0)
    To [email protected]:backdrop-contrib/user_cancel_password_confirm.git
    * [new branch]          drupal-2415223 -> drupal-2415223
    Branch drupal-2415223 set up to track remote branch drupal-2415223 from origin.
    
    $ git merge drupal/7.x-1.x
    Auto-merging user_cancel_password_confirm.module
    Merge made by the 'recursive' strategy.
    user_cancel_password_confirm.module |   10 ++++++++--
    1 file changed, 8 insertions(+), 2 deletions(-)
    

    In this case, the merge was simple and worked cleanly. Of course, there might be merge conflicts here or other changes that need to be made. You can do those at this time, and then git push to push the changes up to GitHub.

  6. Once the changes are pushed, I went ahead and created a pull request via the GitHub user interface, with a link to the Drupal.org issue for future reference (I could have created a corresponding issue in the project's GitHub issue tracker also, but didn't bother):
    • Fix documentation of form API functions to match coding standards (pull request) (diff)

    Merging this pull request via the GitHub user interface gets it onto the official 1.x-1.x Backdrop branch, and into the Backdrop version of the module.

    Here's the commit for Drupal, and the same one for Backdrop:

    http://cgit.drupalcode.org/user_cancel_password_confirm/commit/?id=997d8...
    https://github.com/backdrop-contrib/user_cancel_password_confirm/commit/...

Using the above technique, it's possible to have one main issue (in this case on Drupal.org) for any change you want to make to the module, do essentially all the work there, and then easily and quickly merge that change into the Backdrop version without the hassle of repeating lots of manual, error-prone steps.

Hopefully this technique will be useful to developers who want to contribute their work to Backdrop while also continuing their contributions to Drupal, and will help the two communities continue to work together. Thanks for reading!

Further Backdrop Resources

Do you have any thoughts or questions, or experiences of your own trying to port a module to Backdrop? Leave them in the comments.

Feb 17 2015

This is the second in a series of blog posts about the relationship between Drupal and Backdrop CMS, a recently-released fork of Drupal. The goal of the series is to explain how a module (or theme) developer can take a Drupal project they currently maintain and support it for Backdrop as well, while keeping duplicate work to a minimum.

  • In part 1, I introduced the series and showed how for some modules, the exact same code can be used with both Drupal and Backdrop.
  • In part 2 (this post), I'll explain what to do when you want to port a Drupal module to a separate Backdrop version and get it up and running on GitHub.
  • In part 3, I'll explain how to link the Backdrop module to the Drupal.org version and maintain them simultaneously.

Porting a Drupal Module to Backdrop and Getting it Up and Running on GitHub

For this post I’ll be looking at User Cancel Password Confirm, a very small Drupal 7 module I wrote for a client a couple years back to allow users who are canceling their accounts to confirm the cancellation by typing in their password rather than having to go to their email and click on a confirmation link there.

We learned in part 1 that adding a backdrop = 1.x line to a module’s .info file is the first (and sometimes only) step required to get it working with Backdrop. In this case, however, adding this line to the .info file was not enough. When I tried to use the module with Backdrop I got a fatal error about a failure to open the required includes/password.inc file. What's happening here is simply that Backdrop (borrowing a change that's also in Drupal 8) reorganized the core directory structure compared to Drupal 7 to put most core files in a directory called "core". When my module tries to load the includes/password.inc file, it needs to load it from core/includes/password.inc in Backdrop instead.

This is a simple enough change that I could just put a conditional statement into the Drupal code so that it loads the correct file in either case. However, over the long run this would get unwieldy. Furthermore, if I had chosen a more complicated module to port (one which used Drupal 7's variable or block systems, superseded by the configuration management and layout systems in Backdrop), it is likely I'd have more significant changes to make.
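For the record, such a conditional shim might have looked like this; a sketch only (not what I actually did), assuming the DRUPAL_ROOT constant resolves under both systems:

    // Hypothetical compatibility shim for loading password.inc.
    if (file_exists(DRUPAL_ROOT . '/core/includes/password.inc')) {
      // Backdrop (and Drupal 8) keep core files under core/.
      require_once DRUPAL_ROOT . '/core/includes/password.inc';
    }
    else {
      // Drupal 7 location.
      require_once DRUPAL_ROOT . '/includes/password.inc';
    }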

So, this seemed like a good opportunity to go through the official process for porting my module to Backdrop.

Backdrop contrib modules, like Backdrop core, are currently hosted on GitHub. Regardless of whether you're already familiar with GitHub from other projects, there are some steps you should follow that might not be familiar, to make sure your Backdrop module's repository is set up properly and ultimately to get it included on the official list of Backdrop contributed projects.

Importing to GitHub

The best way to get a Drupal module into GitHub is to import it; this preserves the pre-Backdrop commit history which becomes important later on.

Before you do this step, if you're planning to port a Drupal module that you don't maintain, it's considered best practice to notify the current maintainer and see if they'd like to participate or lead the Backdrop development themselves (see the "Communicating" section of the Drupal 7 to Backdrop conversion documentation for more information). In my case I'm already the module maintainer, so I went ahead and started the import:

  1. Go to the GitHub import page and provide the public URL of the Drupal project's Git repository (which I got from going to the project page on Drupal.org, clicking the "Version control" tab, and then - assuming you are importing a module that you maintain - making sure to uncheck the "Maintainer" checkbox so that the public URL is displayed). Drupal.org gives me this example code:


    git clone --branch 7.x-1.x http://git.drupal.org/project/user_cancel_password_confirm.git

    So I just grabbed the URL from that.

  2. Where GitHub asks for the project name, use the same short name (in this case "user_cancel_password_confirm") that the Drupal project uses.
  3. Import the project into your own GitHub account for starters (unless you're already a member of the Backdrop Contrib team - more on that later).

Here's what it looks like:
GitHub import

Submitting this form resulted in a new GitHub repository for my project at https://github.com/DavidRothstein/user_cancel_password_confirm.

As a final step, I edited the description of the GitHub project to match the description from the module's .info file ("Allows users to cancel their accounts with password confirmation rather than e-mail confirmation").

Cleaning Up Branches and Tags

Next up is some housekeeping. First, I cloned a copy of the new repository to my local machine and then used git branch -r to take a look around:


$ git clone [email protected]:DavidRothstein/user_cancel_password_confirm.git
$ git branch -r
origin/7.x-1.x
origin/HEAD -> origin/master
origin/master

Like many Drupal 7 contrib projects, this has a 7.x-1.x branch where all the work is done and a master branch that isn't used. When I imported the repository to GitHub it inherited those branches. However, for Backdrop I want to do all work on a 1.x-1.x branch (where the first "1.x" refers to compatibility with Backdrop core 1.x).

  1. So let's rename the 7.x-1.x branch:


    $ git checkout 7.x-1.x
    Branch 7.x-1.x set up to track remote branch 7.x-1.x from origin.
    Switched to a new branch '7.x-1.x'
    $ git branch -m 7.x-1.x 1.x-1.x
    $ git push --set-upstream origin 1.x-1.x
    Total 0 (delta 0), reused 0 (delta 0)
    To [email protected]:DavidRothstein/user_cancel_password_confirm.git
    * [new branch] 1.x-1.x -> 1.x-1.x
    Branch 1.x-1.x set up to track remote branch 1.x-1.x from origin.

  2. And delete the old one from GitHub:


    $ git push origin :7.x-1.x
    To [email protected]:DavidRothstein/user_cancel_password_confirm.git
    - [deleted] 7.x-1.x

  3. We want to delete the master branch also, but can't do it right away since GitHub treats that as the default and doesn't let you delete the default branch.

    So I went to the module's GitHub project page, where (as the repository owner) I have a "Settings" link in the right column; via that link it's possible to change the default branch to 1.x-1.x through the user interface.

    Now back on my own computer I can delete the master branch:


    $ git push origin :master
    To [email protected]:DavidRothstein/user_cancel_password_confirm.git
    - [deleted] master

  4. On Drupal.org, this module has a 7.x-1.0-rc1 release, which was automatically imported to GitHub. This won't be useful to Backdrop users, so I followed the GitHub instructions for deleting it.
  5. Finally, let's get our local working copy somewhat in sync with the changes on the server. The cleanest way to do this is probably just to re-clone the repository, but you could also run git remote set-head origin 1.x-1.x to make sure your local copy is working off the same default branch.

The end result is:


$ git branch -r
origin/1.x-1.x
origin/HEAD -> origin/1.x-1.x

Just what we wanted, a single 1.x-1.x branch which is the default (and which was copied from the 7.x-1.x branch on Drupal.org and therefore contains all its history).

Updating the Code for Backdrop

Now that the code is on GitHub, it's time to make it Backdrop-compatible.

To do this quickly, you can just make commits to your local 1.x-1.x branch and push them straight up to the server. In what follows, though, I'll follow best practices and create a dedicated branch for each change (so I can create a corresponding issue and pull request on GitHub). For example:


$ git checkout -b backdrop-compatibility
$ git push -u origin backdrop-compatibility

Then make commits to that branch, push them to GitHub, and create a pull request to merge it into 1.x-1.x.

  1. To get the module basically working, I'll make the simple changes discussed earlier:
    • Add basic Backdrop compatibility (issue) (diff)

    If you look at the diff, you can see that instead of simply adding the backdrop = 1.x line to the .info file, I replaced the core = 7.x line with it (since the latter is Drupal-specific and does not need to be in the Backdrop version).

    With that change, the module works! Here it is in action on my Backdrop site:

    Cancel account using password

    (Also visible in this screenshot is a nice effect of Backdrop's layout system: Editing pages like this one, even though they are using the default front-end Bartik theme, have a more streamlined, focused layout than normal front-end pages of the site, without the masthead and other standard page elements.)

  2. Other code changes for this small module weren't strictly necessary, but I made them anyway to have a fully-compatible Backdrop codebase:
    • Replace usage of "drupal" with "backdrop" in the code (issue) (diff)
    • Use method on the user account object to determine its ID (issue) (diff)
  3. Next up, I want to get my module listed on the official list of Backdrop contributed projects (currently this list is on GitHub, although it may eventually move to the main Backdrop CMS website).

    I read through the instructions for applying to the Backdrop contributed project group. They're relatively simple, and I've already done almost everything I need above. The one thing I'm missing is that Backdrop requires a README.md file in the project root with some standard information in it (I like that they're enforcing this; it should help developers browsing the module list a lot), and it also requires a LICENSE.txt file. These were both easy to create following the provided templates and copying some information from the module's Drupal.org project page:

    Once that's done, and after reading through the rest of the instructions and making sure I agreed with them, I proceeded to create an issue:

    Application to join contrib team

    In my case it was reviewed and approved within a few hours (perhaps helped by the fact that I was porting a small module), and I was given access to the Backdrop contributed project team on GitHub.

  4. To get the module transferred from my personal GitHub account to the official Backdrop contrib list, I followed GitHub's instructions for transferring a repository.

    They are mostly straightforward. Just make sure to use "backdrop-contrib" as the name of the new owner (who you are transferring the repository to):

    Transfer repository to backdrop-contrib

    And make sure to check the box that gives push access to your repository to the "Authors" team within the Backdrop Contrib group (if you leave it as "Owners", you yourself wouldn't be able to push to it anymore):

    Grant access to the Authors team

    That's all it took, and my module now appears on the official list.

    You'll notice after you do this that all the URLs of your project have changed, although the old ones redirect to the new ones. That's why if you follow many of the links in this post, which point to URLs like https://github.com/DavidRothstein/user_cancel_password_confirm, you'll see that they actually redirect you to https://github.com/backdrop-contrib/user_cancel_password_confirm.

    For the same reason, you can keep your local checkout of the repository pointed to the old URL and it will still work just fine, although to avoid any confusion you might want to either do a fresh clone at this point, or run a command like the following to update the URL:

    git remote set-url origin [email protected]:backdrop-contrib/user_cancel_password_confirm.git
    

With the above steps, we’re all set; the module is on GitHub and can be developed further for Backdrop there.

But what happens later on when I make a change to the Drupal version of the module and want to make the same change to the Backdrop version (certainly a common occurrence)? Do I have to repeat the same changes manually in both places? Luckily the answer is no. In part 3 of this series, I’ll explain how to link the Backdrop module to the Drupal.org version and maintain them simultaneously. Stay tuned!

Further Backdrop Resources

Do you have any thoughts or questions, or experiences of your own trying to port a module to Backdrop? Leave them in the comments.

Oct 30 2013

This week we continue to re-publish lessons from the Pantheon Academy, with Jon Peck. In these free lessons we'll be taking a look at working with Git and MultiDev, the Pantheon cloud-based development environment for teams working on projects together. We'll also take a look at dealing with error codes, and setting up SSL for your server. We're going to wrap up our current list with a checklist for taking your site live.

Keep your eyes open in the future for more lessons in the Pantheon Academy series. Starting next week, we are diving back into the world of Using Drupal, continuing on with Chapter 4: Media Management. In addition to the Using Drupal series, which uses a development version of Media Module 2, we will also be starting a parallel series using Media Module 1.

Oct 21 2013
Automating new dev sites for new branches

Over the past few years we've moved away from using Subversion (SVN) for version control, and we're now using Git for all of our projects. Git brings us a lot more power, but because of its different approach there are some challenges as well.

Git has powerful branching, and this opens up new opportunities to start a new branch for each new ticket/feature/client-request/bug-fix. There are several different branching strategies: Git Flow is common for large ongoing projects; we use a more streamlined workflow. This is great for client flexibility: a new feature can be released to production immediately after it's been approved, or you can choose to bundle several features together to reduce the time spent running deployments. Regardless of which branching model you choose, you will run into the issue where stakeholders need to review and approve a branch (and maybe send it back to developers for refinement) before it gets merged in. If you've got several branches open at once, that means you need several different dev sites for this review process to happen. For simple tasks on simple sites you might be able to get away with just one dev site and manually checking out different branches at different times, but that won't work for any new feature that requires database additions or changes.

Another trend in web development over the past few years has been to automate as many of the boring and repetitive tasks as possible. So we've created a Drush command called Site Clone that can do it all with just a few keystrokes:

  1. Copies the codebase (excluding the files directory) with rsync to a new location.
  2. Creates a new git branch (optional).
  3. Creates a new /sites directory and settings.php file.
  4. Creates a new files directory.
  5. Copies the database.
  6. Writes database connection info to a global config file.

It also does thorough validation of the input parameters (about 20 different validations, for everything from checking that the destination directory is writable, to ensuring that the name of the new database is valid, to ensuring that the new domain can be resolved).
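For those curious, registering a Drush command with these options boils down to a hook implementation along these lines; a hypothetical sketch, as the actual Site Clone code is more involved:

    /**
     * Implements hook_drush_command().
     */
    function advo_drush_command() {
      $items['advo-site-clone'] = array(
        'description' => 'Clone this site into a new dev site.',
        'options' => array(
          'destination-domain' => 'Domain for the new dev site.',
          'destination-db-name' => 'Name for the copied database.',
          'git-branch' => 'Optional new Git branch to create.',
        ),
      );
      return $items;
    }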

Here's an example of how it's run:

drush advo-site-clone --destination-domain=test.cf.local --destination-db-name=test --git-branch=test
---VALIDATION---
no errors

---SUMMARY---
Source path         : /Users/dave/Sites/cf
Source site         : sites/cf.local
Source DB name      : cf
Destination path    : /Users/dave/Sites/test.cf.local
Destination site    : sites/test.cf.local
Destination DB name : test
New Git branch      : test

Do you really want to continue? (y/n): y
Starting rsync...                                             [status]
Rsync was successful.                                         [success]
Creating Git branch...                                        [status]
Switched to a new branch 'test'
Git branch created.                                           [success]
Created sites directory.                                      [success]
Updated settings.php.                                         [success]
Created files directory.                                      [success]
Starting DB copy...                                           [status]
Copied DB.                                                    [success]
Complete                                                      [success]

There are a few other things that we needed to put in place in order to get this working smoothly. We've set up DNS wildcards so that requests to a third-level subdomain end up where we want them to. We've configured Apache with a VirtualDocumentRoot so that requests to new subdomains get routed to the appropriate webroot. Finally, we've made some changes to our project management tool so that everyone knows which dev site to look at for each ticket.
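The Apache piece of that puzzle is just a few lines of mod_vhost_alias configuration; a sketch, with the paths and domain borrowed from the example above:

    # Route any *.cf.local subdomain to a matching webroot.
    <VirtualHost *:80>
      ServerAlias *.cf.local
      UseCanonicalName Off
      # %0 is the whole hostname, e.g. /Users/dave/Sites/test.cf.local
      VirtualDocumentRoot /Users/dave/Sites/%0
    </VirtualHost>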

Once you've got all the pieces of the puzzle you'll be able to have a workflow something like:

  1. Stakeholder requests a new feature (let's call it foo) for their site (let's call it bar.com).
  2. Developer clones an existing dev site (bar.advomatic.com) into a new dev site (foo.bar.advomatic.com) and creates a new branch (foo).
  3. Developer implements the request.
  4. Stakeholder reviews on the branch dev site (foo.bar.advomatic.com). Return to #3 if necessary.
  5. Merge branch foo + deploy.
  6. @todo decommission the branch site (foo.bar.advomatic.com).

Currently that last step has to be done manually, but we should create a corresponding script to clean up the codebase, files, database, and branches.

Automate all the things!

Sep 06 2013

In Episode 25 of the Drupalize.Me Podcast, Kyle Hofmeyer gathers some Lullabots to discuss "developer workflow". He is joined by Lullabot's Senior Technical Project Manager, Jerad Bitner; Senior Developer, Joe Shindelar; and first-time project manager, Emma Westby. In this episode the four bots share their thoughts on how they approach working in a team and the tools they use to make the process of sharing code easier. Everything from Git to peer reviews, to branch naming conventions, to establishing a workflow with your team is discussed. Lullabot has thought a lot about making the developer experience more efficient; one of our internal pull request builders, Tugboat, is also discussed in this podcast. But this podcast is really just the tip of the iceberg. If you're excited to learn more about optimizing your own processes, Joe and Emma will be leading a workshop in Prague (there are still a few spots open).

Sep 02 2013

You've probably heard of this magical land of version control where you can undo bad things, start over, and share your work effortlessly. It's a wonderful place that, let's face it, actually takes a bit of work to get to. It's a lot like Drupal: the more time you spend with version control, the more you forget how hard and complicated it was at the very beginning. And even when you understand it, you can still get yourself into a fuddle sometimes.

One of the things I did when I first joined Drupalize.Me was write documentation about how we were expected to work with our ticket system and branches in Git. I refer to it less these days, but it was essential in helping me to develop good habits. Internally we've documented:

  • How to work on a ticket
  • Branch types and names
  • Creating and maintaining branches
  • Checklists for peer review
  • How to conduct a peer review

The documentation includes step-by-step instructions and copy-paste versions of the various Git commands. Yup, even the experts have a copy-paste checklist for Git!

If you're new to Git and wondering about these commands, I encourage you to watch our video series Introduction to Git. Here our expert instructors Joe Shindelar and Blake Hall guide you through common Git commands and give you configuration tips. It's a great introduction to this popular version control system.

Still, if you've spent any time trying to shoehorn Drupal into a versioned workflow, you know that learning a few Git commands won't cut it. There have been many days when Drupal has tested and broken my patience with its refusal to play nice. You can make some things easy with the helpful module Features and the command line tool Drush (see: Introduction to Drush, beginner and Coding for Drush, advanced).

But none of these tools will replace real world experience. I've been teaching version control for almost a decade and Joe has been living it as a Drupal developer. Based on this experience, we've now merged our favorite Git workshops into one: Mastering Git for the Drupal Developer's Work Flow.

We are delighted to offer this workshop at DrupalCon Prague. We know, first-hand, exactly how frustrating the experience of putting Drupal into Git can be. We want to help you streamline the process. So in one day, we'll teach you how to create and implement a process for working with Drupal and Git. We encourage you to sign up with your co-workers for this workshop. After all, workflow is about improving your work with others.

See you in Prague!

Aug 19 2013

Hi, we're Affinity Bridge. You might remember us from such PNWDS 2012 sessions as Mapping and GIS on the server side in Drupal, and Going further with D7 Organic Groups. Today we're here to tell you about the 2013 Pacific Northwest Drupal Summit, and why you, fellow Drupal professional, should be in Vancouver for it on October 5th and 6th of this year. As Cypress sponsors of the Summit, and founding sponsors of the PNWDS, we're excited to tell you more about what's in store for the Summit.

Whether you'd like to take in some sessions, attend Birds of a Feather gatherings, attend or join a panel, or socialize after hours, and whether you're an expert Drupalist or new to Drupal, the Summit will have something for you.

Registration is now open, and session proposals from registered participants will be accepted until August 24th, so register now if you want to submit a session proposal.

Here are the sessions we're hoping to present this year:

Once you're registered you can vote on which sessions you'd like to see (wink wink, nudge nudge).

If you're still not convinced, learn more about the summit before you register, and submit your sessions before the 24th. Keep an eye out for us when you arrive; maybe we'll see you there!

Jun 19 2013

Earlier this month I published ‘reposervice’ to GitHub. Reposervice is a “self-contained” Islandora installation source tree that is intended to smooth the LYRASIS deployment of repository services between development servers, a staging server, and production servers. It is a bit of a work in progress at the moment, but others might find it useful as well.

(By the way, if you looked at Reposervice prior to June 18th, you may have noticed a missing critical element: the Drupal submodule. Not because you couldn’t add Drupal yourself, but because the Reposervice clone has relative symlinks to the Islandora modules positioned in the top-level Reposervice directory.)

The goals of this setup are listed in the README file:

  • Put (most) everything in a self-contained directory using relative paths for most components with a configuration script that generates configuration files for absolute paths.
  • Make it easy to track the upstream Islandora work so that you can bring selected commits into your own environment, if desired [using git submodules].
  • Put the configuration of Fedora Commons, FedoraGSearch, SOLR, and other associated components under version control.
  • Use Drupal Features to store the Drupal configuration and put it under version control.
  • Support multi-site setups for separate Islandora/Drupal instances using a common Fedora Commons, SOLR, and djatoka installation.

The first four bullets are there, along with hints of the fifth. (There is some as-yet unfinished, uncommitted code that automates much of the work of creating multi-site setups under a single Drupal installation.)

When I sent a note about this to the islandora community mailing list, I got a helpful reply back from Nick Ruest pointing to some work that Graham Stewart of the University of Toronto had done using Chef.

Date: Thu, 13 Jun 2013 12:39:50 -0400
From: Nick Ruest
To: [email protected]
Subject: Re: [islandora] A ‘DevOps’ Configuration for Islandora

I nearly forgot! Graham Steward at UofT has a few recipes up in his
Github account[1] and there is a recording of his presentation from the
2012 Access[2].

-nruest

[1] https://github.com/librarychef
[2] http://www.youtube.com/watch?v=eTNBmy4ZznA

The recording of the presentation is a great introduction to Chef from a library perspective; Graham builds an Islandora 6.x installation from scratch in 624 seconds. The Ruby-based islandora12.rb recipe indeed bears a great deal of resemblance to the bash scripts I was creating to deploy the components into the central directory tree. I’m going to have to add Chef to my list of things to learn, and Graham’s call for cooperation in building library-oriented recipes is a compelling one.

There are a few LYRASIS-specific things in here, but I’ve tried to make the basic building blocks replicable for others. This git repo, as it is further developed and documented, will be the foundation of a presentation I’m giving at Open Repositories next month. Comments, questions, observations, and even pull requests (should you find this setup useful in your own work) welcome!

(This post was updated on 29-Jan-2016.)
Apr 05 2013

Listen online: 

Git is often touted as, among other things, extremely flexible. It's a big selling point for the software: you're not throwing all your eggs in one basket and assuming that there is one singular workflow to rule them all. This flexibility can also be a challenge, though. In this podcast we talk about the various ways that we at Lullabot use Git when working with teams of people, both small and large, and the organizational tricks we've learned along the way to help make sure our projects continue moving forward and don't get too messy.

Some of the things discussed include designating someone as a branch manager, working in feature/task branches to keep your code organized and easy to review, and using pull requests. Spoiler alert! We end up boiling it all down to this: there is no one perfect way, but whatever way you decide to organize your team, make sure you document it and that everyone on the team knows what you decided.

Podcast notes

Ask away!

If you want to suggest your own ideas for podcasts, or have questions for us to answer on a podcast, let us know:
Twitter
Facebook
Contact us page

Release Date: April 5, 2013 - 10:00am

Length: 42:35 minutes (24.85 MB)

Format: mono 44kHz 81Kbps (vbr)

Mar 13 2013

One of my earlier memories of creating things on a computer is of the Kid Pix application that my dad purchased sometime in the early 90's. Before that, most of my time on the computer was spent playing games and just sort of putzing around. With Kid Pix, though, I was quickly breaking into the age of digital publishing. One of the features this 15+ year old application had that powered my creative process (a.k.a. holding down the mouse button while dragging the stamp tool around the screen in circles) was the concept of undo. Didn't like the placement of that stamp? Or maybe the line you just drew was a little too far to the right and you wanted to try again? No problem: Command-Z and you're right back to where you started before that simple mistake.

In college I took a painting class, and while it was fun, I was never really all that good at painting. I have these vivid memories of sitting in the studio working on what I hoped would be a stunning homage to Lichtenstein, and it just wasn't going as well as one would hope. I would lay some paint down on the canvas, step back, look at it, and in my head I would hit Command-Z. Of course nothing would happen; instead I was forced to paint over that same spot, over and over and over, until I got it just right. In the end it worked, but it was an interesting experience to have done something in the physical world and had my brain immediately reach for the undo key sequence.

I'm willing to bet that most of us have done this or something similar at some point.

Git: the magical undo tool

That's where Git comes in. Git is a distributed version control system (DVCS) for source code management (SCM). It's like a giant undo button for everything you've ever done on a project throughout its entire existence. And that's just the icing on the cake. Git also provides some powerful tools for collaborating with your team, browsing a project's entire history, deploying code, and so much more. Oh, and did I mention it's fast? Like whoa fast!

Git is the version control system used for Drupal core and contributed module development and as such is used by most people building sites with Drupal to keep track of their client work as well. It's also the system used to track development of the Linux kernel, Ruby on Rails, Android, and many, many other projects.

The Introduction to Git Series

This week we're kicking off Introduction to Git, a series that Blake Hall and I recorded, which will teach you to use this great tool. It's an in-depth series that starts with the basics of version control, establishes some terminology and a baseline workflow, then continues to build on that by going beyond the basics of the various Git commands to make the most out of your tools.

This series starts out with the basics and quickly dives into the powerful tools that Git provides. Just a few of the many things you'll learn about are:

  • The basic concepts and terminology of version control
  • Installing and configuring Git
  • Creating and using branches and tags
  • Navigating the history of a project and reviewing changes
  • How to work with conflicts and corrections
  • Using Git with remote repositories and sharing changes with your team
  • And some tips and tricks for using Git with both Drupal.org and GitHub.com

First lessons out the door!

The first lessons in the series, published today, will get you started with a background on version control, getting Git installed and set up, and how to find more help:

Git is a command-line application, and we'll be interacting with it through the terminal for most of these lessons, so if you're not familiar with using the command line or basic use of the vi editor, I suggest you brush up on those things first; they'll come in handy while learning Git. Although there is a plethora of GUI tools that can be used in conjunction with Git, we felt it best to learn the underlying, consistently standard application first, so that you can easily translate that to the GUI of your choice down the road, if that's what floats your boat.

We will be releasing this series over the next month, so look for new Git videos every Wednesday throughout March and in to April.

Mar 07 2013


I recently ran into an issue on one of our projects with a Git repository that stumped me for a few days. It was a small project: only three developers committing to a single repository hosted on Pantheon. I kept running into an issue where I (or any of the other developers) could never get my local repository to a “clean” state.

That is, I’d run “git status” and see:

$ git status
# On branch master
#
# Changes not staged for commit:
#   (use "git add/rm <file>..." to update what will be committed)
#   (use "git checkout -- <file>..." to discard changes in working directory)
#
# modified: image.JPG
#
# no changes added to commit (use "git add" and/or "git commit -a")

Hmmm, that’s odd, as I didn’t modify image.JPG. No worries, let me just do a “git checkout” on it...

$ git checkout image.JPG
$ git status
# On branch master
#
# Changes not staged for commit:
#   (use "git add/rm <file>..." to update what will be committed)
#   (use "git checkout -- <file>..." to discard changes in working directory)
#
# modified: image.JPG
#
# no changes added to commit (use "git add" and/or "git commit -a")

Whaaa? But I just checked it out? How can it be modified?

Well, as you may be guessing by the capitalized “.JPG” extension, the root of this problem was with case-sensitivity.

I called upon my Git mentor, David Rogers, to help me figure out where the problem was and how to remedy it. David explained that the issue was that the three of us developers were on case-insensitive machines (Macs), while some of the commits were being made on a case-sensitive server. Pantheon has a feature where files SFTP’d up to the server are automatically committed, and our client-developer was making his commits this way.

The problem was caused when one of our developers committed the same file to the repo under two different (according to case-sensitive Linux) filenames, “image.jpg” and “image.JPG”. So, as David eloquently explained:

So Linux sees "FOO.JPG" and "foo.jpg" as two separate files... But Mac / Win see them as the SAME file... And Git, being a Linux tool, sees them as different, because it includes the name of the file in the hashing algorithm. If the hashes don't match, there's a difference. If the name is not the exact same, the hashes won’t match.

Brilliant - we had our cause, so what was the solution? It was actually a two-parter. 

First, we needed to correct the repo on a case-sensitive machine so that there was only one copy of the (lowercase-named) image. We did this by using “git rm” to remove the duplicate, upper-cased file, then committed the change, then had our developers pull the commit. 

If their local repos were still “dirty”, all that needed to be done was to locally drop the offending file, then check it back out using “git checkout”. 
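Put together, the fix looked roughly like this (a sketch, not our exact commands; it assumes the duplicate pair was image.jpg/image.JPG and that master is the shared branch):

# Part 1: on a case-sensitive (Linux) clone of the repository
git rm image.JPG
git commit -m "Remove duplicate upper-cased image.JPG"
git push origin master

# Part 2: on each case-insensitive (Mac) machine whose repo is still dirty
git pull origin master
rm image.JPG                 # the Mac deletes the one file it can see
git checkout -- image.jpg    # restore the canonical lowercase copy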

The key step here is fixing the upstream repository on a case-sensitive machine; without doing this, we’d never fix the core issue. Or, as David explained, we’d have a classic “who’s on first” routine on all of our case-insensitive machines:

user: git, what's our status?
git: the file named FOO has changed...
user: what's changed about FOO?
git: it looks different.
user: uh, okay... mac, delete FOO.
mac: (silently) deleted "foo"
user: okay, git, what's our status?
git: you're missing two files: "foo" and "FOO"...
user: wait, what? Okay, check out "foo" for me...
git: (silently) checked out "foo"
user: okay, great now what's our status?
git: the file named FOO has changed...
user: this again!? mac, what files are in this directory?
mac: there's a file called "foo"
user: git, what's the status of file "foo"...?
git: the file named FOO has changed...
user: MUST KILL ROBOT!!!!

So, the moral of the story - always keep your filenames lowercase to avoid issues like this, and if you do encounter an issue, fix it on a case-sensitive machine.

David Rogers is a professional software developer, speaker, trainer, and organizer of the OrlandoPHP user group. If you’d like more git-based comedy routines or at least an entertaining solution to your development problems, you can find him on the internets.


Feb 22 2013
Feb 22

Simplifying Wordpress and Drupal configuration

At last year's DrupalCon in Denver there was an excellent session called Delivering Drupal.  It had to do with the oftentimes painful process of deploying a website to web servers.  This was a huge deep dive session that went into the vast underbelly of devops and production server deployment.  There were a ton of great nuggets and I recommend watching the session recording for serious web developers.

The most effective takeaway for me was the manipulation of the settings files for your Drupal site, which was only briefly covered but not demonstrated.  The seed of this idea that Sam Boyer presented got me wondering about how to streamline my site deployment with Git.  I was using Git for my Drupal sites, but not effectively for easy site deployment.  Here are the details of what I changed with new sites that I build.  This can be applied to Wordpress as well, which I'll demonstrate after Drupal.

Why would I want to do this?

When you push your site to production you won't have to update a database connection string after the first time.  When you develop locally you won't have to update database connections, either.

Streamlining settings files in Drupal

Drupal has the following settings file for your site:

sites/yourdomain.com/settings.php

This becomes a read-only file when your site is set up and is difficult to edit.  It's a pain editing it to run a local site for development.  Not to mention that if you include it in your Git repository, it's flagged as modified when you change it locally.
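(For the record, editing it in place means toggling its permissions every time; a quick sketch, assuming the file lives at the path above:)

chmod u+w sites/yourdomain.com/settings.php
# ... make your edits ...
chmod u-w sites/yourdomain.com/settings.php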

Instead, let's go ahead and create two new files:

sites/yourdomain.com/settings.local.php
sites/yourdomain.com/settings.production.php

Add the following to your .gitignore file in the site root:

sites/yourdomain.com/settings.local.php

This will put settings.php and settings.production.php under version control, while keeping your local settings.local.php file out of it.  With this in place, remove the $databases array from settings.php.  At the bottom of settings.php, insert the following:

$settingsDirectory = dirname(__FILE__) . '/';
if (file_exists($settingsDirectory . 'settings.local.php')) {
  require_once($settingsDirectory . 'settings.local.php');
}
else {
  require_once($settingsDirectory . 'settings.production.php');
}

This code tells Drupal to include the local settings file if it exists, and if it doesn't it will include the production settings file.  Since settings.local.php is not in Git, when you push your code to production you won't have to mess with the settings file at all.  Your next step is to populate the settings.local.php and settings.production.php files with your database configuration.  Here's my settings.local.php with database credentials obscured.  The production file looks identical but with the production database server defined:

<?php
    $databases['default']['default'] = array(
      'driver' => 'mysql',
      'database' => 'drupal_site_db',
      'username' => 'db_user',
      'password' => 'db_user_password',
      'host' => 'localhost',
      'prefix' => '',
    );

Streamlining settings files in Wordpress

Wordpress has a similar process to Drupal, but the settings files are a bit different.  The config file for Wordpress is the following in site root:

wp-config.php

Go ahead and create two new files:

wp-config.local.php
wp-config.production.php

Add the following to your .gitignore file in the site root:

wp-config.local.php

This will make it so wp-config.php and wp-config.production.php are under version control when you create your Git repository, but wp-config.local.php is not.  The local config will not be present when you push your site to production.  Next, open the Wordpress wp-config.php and remove the DB_NAME, DB_USER, DB_PASSWORD, DB_HOST, DB_CHARSET, and DB_COLLATE definitions.  Insert the following in their place:

/** Absolute path to the WordPress directory. */
if ( !defined('ABSPATH') ) {
    define('ABSPATH', dirname(__FILE__) . '/');
}
if(file_exists(ABSPATH  . 'wp-config.local.php')){
    require_once(ABSPATH  . 'wp-config.local.php');
}else{
    require_once(ABSPATH . 'wp-config.production.php');
}

This code tells Wordpress to include the local settings file if it exists, and if it doesn't it will include the production settings file. Your next step is to populate the wp-config.local.php and wp-config.production.php files with your database configuration.  Here's my wp-config.local.php with database credentials obscured.  The production file looks identical but with the production database server defined:

<?php
// ** MySQL settings - You can get this info from your web host ** //
 
/** The name of the database for WordPress */
define('DB_NAME', 'db_name');
 
/** MySQL database username */
define('DB_USER', 'db_user');
 
/** MySQL database password */
define('DB_PASSWORD', 'db_user_password');
 
/** MySQL hostname */
define('DB_HOST', 'localhost');
 
/** Database Charset to use in creating database tables. */
define('DB_CHARSET', 'utf8');
 
/** The Database Collate type. Don't change this if in doubt. */
define('DB_COLLATE', '');

What's next?

Now that you're all set up to deploy easily to production with Git and Wordpress or Drupal, the next step is to actually get your database updated from local to production.  This is a topic for another post, but I've created my own set of Unix shell scripts to simplify this task greatly.  If you're ambitious, go grab my MySQL Loaders scripts that I've put on Github.

Feb 06 2013
Feb 06

This week I am continuing the trend of mini-series with some lessons on deploying your code, in the FREE Deploying Your Code Without a Terminal series. The reason behind this quick set of videos is that not everyone is command line savvy, and not everyone has to be. What is important, though, is getting your code into version control, and there are plenty of tools that let you do that using a graphical interface. That is all well and good, but what do you do once your code is in version control and you need to get it live on the web? Using the web tool Beanstalk, you can deploy from version control and never open the terminal. This is a great lesson for those of us who want to do things properly, but don't have the time or desire to learn 20 commands that we will forget the next day.

In addition to being a cool service, the nice people at Beanstalk have given us a coupon code to get 50% off your first month. Just use the coupon code DRUPAL when you sign up!

We hope you enjoy this mini-series, and we'll have more on the way next week.

Jan 25 2013
Jan 25

The issue

These last few days, I had noticed a problem with Drush Make and patches: some patches, be they rolled by our team or from elsewhere, would apply without a glitch, but others, which worked normally according to the test bot on Drupal.org, would fail to apply for no obvious reason.

I had mostly put it out of my list of pressing issues when I really had to use an old version of OpenLayers, 7.x-2.0-alpha2 to be specific, AND apply a patch fixing one of the bugs in that module: the behaviors plugin not being located correctly (http://drupal.org/node/1898662 if you want details). So I rolled the patch, tested it locally, and the qa.d.o bot applied it and did not report more errors than expected for that old version... and yet my Drush Make install refused to apply it.

Here was the relevant excerpt:

projects[domain] = 3.7
projects[domain][patch][] = "http://drupal.org/files/domain-foreach_argument-1879502-1.patch"
; ...snip...
projects[openlayers] = 2.0-alpha2
projects[openlayers][patch][] = "http://drupal.org/files/0001-Fix-the-path-file-declaration-for-behaviors.patch"

The Domain patch applied normally, but the OpenLayers patch wouldn't apply. What could be wrong?

The diagnostic

After tracing my way into DrushMakeProject::applyPatches(), I got a more explicit message: neither git patch nor trusty old patch could locate the patched file, includes/openlayers.behaviors.inc. Why?

Comparing a standalone checkout of OpenLayers 7.x-2.0-alpha2 and the one in /tmp/make_tmp_(some id)__build__/sites/all/modules/contrib/openlayers, the problem became more obvious: that file was missing, as well as a good number of others. What? A download failure?

Not in the least: checking openlayers.info, I noticed the version of OpenLayers downloaded was no longer the 7.x-2.0-alpha2 specified in the makefile, which it had previously downloaded normally, but the current latest, 7.x-2.0-beta3. Ahah...

The fix

After digging a bit more into Drush, it appeared that whenever you specify an extra info bit about a project download, lines in the short format like projects[openlayers] = 2.0-alpha2 are ignored, so Drush downloads the latest published version. The fix became obvious: use the "extended syntax", like this, for the same excerpt:

projects[domain][type] = module
projects[domain][version] = 3.7
projects[domain][patch][] = "http://drupal.org/files/domain-foreach_argument-1879502-1.patch"
; ... snip ...
projects[openlayers][type] = module
projects[openlayers][version] = 2.0-alpha2
projects[openlayers][patch][] = "http://drupal.org/files/0001-Fix-the-path-file-declaration-for-behaviors.patch"

This also explained why the other patches applied normally: each of them had been rolled against the latest module version, so the specified version was ignored, but the version actually being downloaded ended up being the same, and the patch applied normally.

Dec 05 2012
Dec 05


Git’s a ripper Version Control System, and considering its growing adoption, you can’t afford to be a drongo when it comes to leveraging it. No worries though, just head on over to Sydney the day before DrupalCon kicks off, take DrupalEasy’s action-packed Blue Collar Git training, and Bob’s yer uncle!

The Yank (that’d be me) from DrupalEasy is putting on this fast-paced workshop that’ll help you master Git and get gobsmacked at how much more effective you get as a Drupal developer, themer or project manager. Git’s a dinky-di super speedy and efficient version control system. Unlike the others, Git’s got a distributed approach, which gives it an edge for collaborative development and is why its adoption is going flat chat.  What’s more, as it grows, being comfy with it becomes not just a valuable tool, but a handy talent to brag about, as it’s becoming a popular preference on job posts.

Blue Collar Git will start just after brekkie with the basics and a look around under the bonnet, then delve into remote repositories, resolving conflicts, and working with patches in the arvo. We’ve designed Blue Collar Git to be just the script you need to empower you to start leveraging it for your everyday workflow.

This unique workshop came about from a video of the 2010 Open Source Developers Conference session "Git for Ages 4 and Up" by Michael Schwern. His Tinker toy demo helped me soak up the Git knowledge, and motivated me to teach it using a similar method. Great feedback from various Git-related meetups and camp presentations and trainings inspired the full-blown training course.

The workshop runs the full day of February 6, the day before DrupalCon at the Crowne Plaza. The cost is only $440 for the full day (includes lunch!). This is our first time bringing a DrupalEasy workshop to Oz, so we’re hoping for a bonzer of a turnout! Get on the bush telegraph and grab a Drupal mate to spend the day soaking up insight and doing lots of hands-on learning. Head to the DrupalCon Sydney site to sign up for this corker of a training course!


Oct 31 2012
Oct 31


Using Git to move the code base of a Drupal site from a local development environment to a hosting provider (or vice-versa) is a necessary skill that most Drupal site builders should have. Configuring and utilizing a remote Git repository can be an especially daunting task for people who don't have a strong background with version control systems. 


While updating the DrupalEasy Career Starter Program (DCSP) curriculum a few weeks ago, it became clear to me that effectively using Git is a skill that we need to teach our students. We had informally gone over basic Git commands earlier in the program, but feedback from students and potential employers made us realize that students need (and want!) the necessary instruction and materials to get them up-to-speed on using Git to move sites between servers.

If you're a listener of <shameless_plug>our podcast</shameless_plug>, you probably know that we've been working with the fine folks at WebEnabled.com for quite some time. They've been the exclusive sponsor of our podcast for years, and we use their development tools extensively with our clients and training programs. So, we are going to take this opportunity to give something back to both the Drupal community as well as our friends at WebEnabled by providing access to our print and video curriculum for moving Drupal sites to and from WebEnabled development servers using Git.

A free-to-download 13-page PDF from the DCSP curriculum that details the various steps required to move a Drupal site from a local machine to a WebEnabled server and vice-versa is included at the end of this post. 

In addition, we've created two screencasts that demonstrate the process from start to finish; part of the DCSP’s "multi-modal" training approach. Students get maximum training and experience through classroom instruction on all topics (including this one), as well as written curriculum, dedicated time during lab hours where students can work cooperatively, and in some cases, companion screencasts (we're dedicated to having 100% coverage of all our technical lessons in screencast form soon). We feel presenting curriculum in multiple modes greatly increases the success of students to comprehend the subject matter and put it to practical use faster.

DCSP Screencast: moving a local Drupal site to WebEnabled using Git

DCSP Screencast: moving a WebEnabled Drupal site to a local environment using Git

If you're interested in learning more about the DrupalEasy Career Starter Program, be sure to check out DrupalEasy.com/dcsp or contact us.


Oct 23 2012
Oct 23



What a weekend! No wonder there was a waiting list. From late-night basement bowling battles to deliciously fresh-baked lunches and insightful, popcorn-filled Git presentations, this year's Pacific Northwest Drupal Summit in Seattle set the bar high for us Vancouverites next year.

Right from the opening remarks it was evident that sociability was this year’s focal point. Here’s what made this year stand out.

Sociability:

Opening remarks: Hey! Find five strangers and form a group. Here, take this paper bag full of raw spaghetti noodles, make a tower three feet tall, and balance a marshmallow on it.

What?! Needless to say, most groups struggled with this one. But what a unique way to fire up the conversation, inspire collaboration and increase the sociability. Completely on par with what Drupalers all strive for with the open source community. This activity really set the pace for the rest of the summit. At each presentation, table, and break, Drupalers could be found collaborating, sharing and educating one another.

Saturday night had quite the dénouement. Attendees all headed to the Garage billiards bar, where the ImageX team went head to head against Affinity Bridge and The Jibe in a bowling match. Only one of them broke 100; they'd all better stick to web development… This was such a great party; everyone from the summit had a further chance to connect and meet fellow Drupaleers. The positivity ran rampant throughout the night as each team got to know each other and built friendships. Bowling, who knew?

The summit was set at the University of Washington in an open commons area that made it easy to meet new people and do what everyone came to do – learn. Unlike previous years, there were no large keynote presentations, but rather several small grouped presentations in which collaboration and sharing were an essential part of each.

University of Washington Drupal Summit Venue

Insights:

ImageX Media was prominent this year as Mt. Baker sponsors and with three presentations coming from their team. Each presentation was filmed live and will be available for viewing soon via the links below. Here’s more on the presentations:

Amongst awesome jokes, Jennifer and Shea gave a presentation and demo on OpenEDU – a custom, feature-rich content management framework built with Drupal that offers significant cost savings and efficiencies for Higher Education institutions. Shea engaged the audience while demoing one feature, a content syndication service. Syndication essentially allows for content to be written once in one convenient place and then published across all school departments with easy and user friendly sharing capabilities. Some questions came up like, can OpenEDU allow for SMS notifications for my students? The live video will be available for viewing soon.

OpenEDU tweet from PNWDS 2012

By: Brandon Shi & BJ Wilson

Brandon and BJ captivated the audience with free popcorn for their delivery of an intermediate level presentation on Drupal Gitflow. They touched on: multiple repositories, pull requests, and peer review, steps for setting up a professional project workflow designed for multiple developers, and smooth code deployment. Their initial claim of “Commit, Review, and Deploy your code smoother than a soul singer” was an accurate depiction of the overall presentation.

This panel discussion, led by Trent, was a unique, developer's-view approach to working as a freelancer vs. subcontractor vs. employee. The panel included two members of the Cheeky Monkey Media team and Trent, a developer for ImageX Media. They dove deep into topics like: specific challenges as a Drupal developer, the pros and cons of being a freelance developer, and suggestions on how to break into developing as either a freelancer, contractor or employee.

Delicious:

The food was delectable. It was to the point where it needed to be mentioned in this blog. Delicious fresh-baked baguette sandwiches came in all varieties. There was an abundance of great, local coffee and it was always fresh. The best part is that it was all included and available at the summit venue; this allowed for further social interaction that coincided well with the social theme of the event.

Tweet from PNWDS

Years to come:

The community is committed to making each PNW Drupal Summit better than the last. With a core group of solid Drupal supporters here in Vancity, they are going to do their best to keep expanding the amazing Drupal community and spread the open source love. Until next year.

Sep 05 2012
Sep 05

Lullabot has trained thousands of Drupal developers & guided the development of some of the largest Drupal websites.

Aug 06 2012
Aug 06


Git is growing in popularity amongst developers for the technical advantages it has to offer. Many would ideally wish for their clients or their supervisors to make the switch from other source control management (SCM) systems such as Concurrent Versions System (CVS) and Subversion (SVN). To most stakeholders who aren’t involved in writing code, converting to Git wouldn’t make sense from a financial standpoint.

However, there are many reasons using Git can help you create better products without hemorrhaging time and money on unnecessary overhead and unrefined process. If you can make the transformation without interfering with your current day-to-day operation, I would highly recommend you do so, and here’s why:

Note: This article aims to bring to light Git's advantages from a different perspective. If you are interested in reading up on the more tech-savvy flavor of this article, look no further.

Concoct Without Fear

Developers strive for experimentation. Of course, it wouldn’t be good if untested experimental code gets committed to the repository and stifles the rest of the team’s progress because they end up troubleshooting bad code that shouldn’t be there in the first place.

Sure, you can set up a model where all developers maintain their own branch, and merge their branch to the master branch when everything is tested and approved. You can pretty much do the same thing in Git… and more.

Remember that anything you commit to SVN will be pushed to the centralized repository. Git allows you to maintain a personal set of branches for experimentation and development on your own computer without interacting with the repository. That way, you don’t have a bunch of branches in the central repository and everyone has to coordinate which branch is whose. This also allows developers to isolate bad code before it becomes readily available to everyone else. This eliminates clutter and enhances organization and efficiency!
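A quick sketch of that local-only workflow (the branch name is made up):

git checkout -b experiment    # a private local branch; the central repo never sees it
# ...hack, test, and commit as often as you like, entirely locally...
git commit -am "Try a risky refactor"
git checkout master           # once it's tested and approved,
git merge experiment          # fold it into the shared branch
git push origin master        # only now does it reach the team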

Keep Churning Without Interruption

SCM is a great asset to a team’s workflow if it is readily available at all times. Some say that SVN is great for storing all commits so that, in the event of computer failure, no changes get lost. However, if you send your developer on a flight from New York to Los Angeles to attend a conference, he may not be able to take advantage of SVN, since it theoretically requires a constant internet connection.

Git works on local and remote branches, so commits can be made on the developer’s computer and pushed to the central server when all is good. This way, rather than losing hours traveling, you can multitask and get some work done on the road!

(Near) Error-Free Development

One gripe I have with SVN is that it does not follow some of its basic SCM principles. In addition to creating branches, you can also create tags, which take snapshots of the state of a branch (these tags are, in theory, unchangeable). It’s implemented more like a recommendation, because all you’re really doing is making copies of branches into the tags folder (kind of like how some people copy and paste folders on their hard drive and append some sort of “backup” label).

It doesn’t mean that no one can make changes to them. You can commit changes anywhere in the repository, so there's a chance of missing changes on the next deployment if, for example, a developer commits changes to a tag (SVN doesn't have a problem with this). With this, it's assumed that all SVN users follow a certain standard, and trusted that no one deviates from the rules.

Git supports the creation of branches and tags... and enforces those principles. You can check out copies of tags, but you cannot make changes to them (besides deleting and recreating them again), enforcing their true intent as an SCM feature. This way, no one can accidentally commit to tags and lose changes from current development branches.
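A rough illustration of that enforcement (the tag name is made up):

git tag 1.0.0          # snapshot the current commit
git checkout 1.0.0     # inspecting it puts you in a detached HEAD state
# commits made here do not move the tag; to re-point 1.0.0 you must
# delete and recreate it explicitly:
git tag -d 1.0.0
git tag 1.0.0 <some-other-commit>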

Leave No Features Behind

Differentiating changes between files works differently in SVN and Git. While Git tracks changes on all files in the project, SVN recursively searches for changes within the directory you’re in. This is nice because you can focus on a smaller subset of files rather than the entire project. However, I find this error-prone, considering there is a chance of neglecting to commit changes, increasing the chances of bugs appearing.

Sure, you could go all the way back to the base level of the project to get a full overview of changes, but again, that’s an added inconvenience that adds unnecessary time and effort to your workflow. I’d rather make sure I know exactly what’s going on throughout the entire project rather than a partially working commit.

Decrease Time Spent On Using The Tool

It is not pleasing to anyone when a large chunk of your time is spent on figuring out how to use the tools rather than on the task at hand. It is not difficult to get accustomed to using any source control management system, but I can argue that some take more work to achieve certain goals.

I noticed that some of the commands used in SVN are more cumbersome than their counterparts in Git. It gets to a point where some copying and pasting is involved (at that point, using a GUI like TortoiseSVN or Cornerstone would be faster). For a few examples, see below (the SVN repository URLs shown are representative examples):

Creating a new branch

SVN

svn copy http://svn.example.com/repo/trunk http://svn.example.com/repo/branches/testcode -m "Creating branch testcode"

Git

git branch testcode

Creating new tag

SVN

svn copy http://svn.example.com/repo/trunk http://svn.example.com/repo/tags/testcode -m "Creating tag testcode"

Git

git tag testcode

Switch branches

SVN

svn switch http://svn.example.com/repo/branches/newbranch

Git

git checkout newbranch

Lose all changes since the last commit

SVN

svn revert -R ./path/to/directory/with/files/you/want/to/revert/

Git

git stash

Merge branch

SVN

svn merge http://svn.example.com/repo/branches/mergebranch

Git

git merge mergebranch

Personally, I notice myself keeping a crib sheet for the correct paths in SVN (svn info will yield the same thing, too), but it’s still too much to remember and to do, especially when you’re working on multiple repositories. Less typing = better!

Eliminate Bureaucracy in Collaboration

Git is a distributed revision control system, which means that sharing and contributing to projects is easy (of course, one can secure a repository for internal purposes using SSH keys and permission settings on either system). And unlike SVN, where all changes are directly applied to the central repository, Git allows developers to stage commits locally prior to pushing them to the central repository.

What does this all mean? When onboarding someone onto an existing project, you can simply give them access and the URL to the repository and let them experiment and develop on their own without affecting the master branch directly (this way, a peer review process can be included in the workflow more elegantly, which beats reverting the repository back to a previous state if bad code was committed).

Could you do the same in SVN? Possibly. But in most cases, time must be spent training new team members to commit to the repository correctly to avoid interfering with fellow team members' development branches (which becomes even more critical as a project gets larger and larger).

Is Git Right For My Company?

Most developers would be delighted if they could change their workflow to use Git. Switching over early is ideal unless, of course, your SCM relies on a large network of dependent applications. If it's not viable to change SCM systems now, I would highly recommend using Git on future projects.

Git is infamous for having a large suite of tools that even seasoned users need months to master. However, getting into the fundamentals of Git is simple if you're trying to switch over from SVN or CVS. So give it a try sometime.

As a Phase2 developer, Peter Cho strives to create elegant and efficient solutions to provide a positive user experience for clients.

Prior to joining Phase2, Peter worked for the Digital Forensics Center developing marketing and web ...

Aug 03 2012
Aug 03

Git is great! Git is essential if you want to be a contributor to Drupal. Use it to work on a project, even without being connected to the internet. Make commits, merge branches, roll patches. Take the time to learn its very powerful features. Then use these tips to speed up your life! For example, why would you choose to type three characters when you can type one? Or, why type nine when you can type three?

Enter aliases…

Did you know about shell aliases? I have one in my ~/.profile (akin to a ~/.bashrc file for Linux users):

alias g='git '

This means I can type "g" and it really means "git".

Just like shell aliases, Git has its own form of aliases. There are commands you can type in the terminal to make configurations, but I find it easier to edit the ~/.gitconfig file directly. Begin the listing of aliases with [alias], then follow with a list of aliases, one per line. I use the following time-saving aliases in my .gitconfig file, picked up from various sources around the web:

[alias]
  s = status
  l = log --graph --pretty=format:'%C(yellow)%h%C(cyan)%d%Creset %s %C(white)- %an, %ar%Creset'
  ll = log --stat --abbrev-commit
  d = diff
  co = checkout
  a = add
  ap = add -p
  patch = format-patch --stdout HEAD~1
  rpatch = reset --hard HEAD~1

Interestingly enough, using the "g" bash alias in conjunction with a Git alias like "l" that also has a bash alias defined results in a conflict. I don't know what to do about that. Oh, well. The "patch" and "rpatch" aliases are some of my favorites. Let's say I modified a contrib module and made a commit on it locally. (Just one commit.) I want to submit this commit as a patch to the module's issue queue. In the module's root directory, I type "g patch > /tmp/issue_description-1234567-4.patch". This takes the diff between the latest commit and the project before that, and formats a patch, which also includes authorship information! Next, I type "git rpatch". This removes all trace of the last commit (the one I made), so it now matches the way it still looks on drupal.org.

Git branch name in command prompt

See my post from a few weeks ago about showing the Drush site alias in the command prompt. The same trick works with the current Git branch. I believe Adam's dotfiles repo has an implementation of this.
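If you don't want to dig through a dotfiles repo, here's a minimal sketch for a Bash-style ~/.profile (the function name is my own):

parse_git_branch() {
  git branch 2>/dev/null | sed -n 's/^\* \(.*\)/ (\1)/p'
}
export PS1='\w$(parse_git_branch) \$ '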

MOD stands for module

In the ~/.gitconfig file, add this:

[url "git+ssh://[email protected]/project/"]
  insteadOf = mod:

Now, when you type "git clone mod:views", it clones Views into your current directory, down from drupal.org. No need to visit the project page and hunt for the git cloning info. Make sure to replace the "jessehs" part with your drupal.org Git username.
Jul 27 2012
Jul 27

If you receive our newsletter, you may have noticed that you recently got a HUGE list of posts we've written. Well, except that they weren't all really that recent: some of those were two months old, with posts from every week in between. Our regular newsletter is sent out automatically based on our RSS feed, and it turns out that our RSS feed was broken. Once we tracked it all down and got it fixed, all of the posts that had never gotten queued up for the newsletter shot out in one big go. Sorry about that. Aside from the crazy long newsletter though, I thought I'd share how I got this sorted out, because this is the kind of problem that can happen to anyone, and it is really annoying to track down.

When we finally realized there was a problem, we didn't know what was causing it, or therefore how to fix it. The error that Drupal.org was showing when trying to update the feed was "The feed from Drupalize.Me seems to be broken, because of error "Reserved XML Name" on line 1." That wasn't really helpful, but a little bit of Googling pointed us in the direction of a space in the feed. Lo and behold, when we went back and looked at the feed itself, you could see that there was a blank space right at the beginning of the first line. Hm, where did that come from?

We hadn't made any changes to our view for the feed, and we didn't have any custom theming going on for it, but we checked those things out anyway. Nothing proved helpful. Trying to review all of the code on our site for an extra space was massively daunting. Then it struck me that I could narrow my search very easily because we use version control. We knew the feeds worked fine originally, so we just needed to check the changes between it working correctly and not. I tracked down the rough date of when the feeds started to fail, and then I found the point in time in our code where it worked correctly. We use tags for our production code, so on my local I checked out a tag that worked right (3.3.0). I rolled forward on the tags until I saw when the feed broke (3.3.2). Since this was a hot fix release, the changes in it were pretty small and in only a few files. A quick look at the diff on the files and I saw the mistake right away: a space had accidentally been added to the top of our template.php file, right before the opening PHP tag. That added a space to the beginning of ALL pages on the site.

Now, if this hadn't been so quick to find with my basic method here, the next thing to do would have been to use a really handy Git command called bisect. If you've never used bisect before, it lets you take two endpoints in your code, and then goes halfway between those two points. You can then test if the bug is still there. You can keep bisecting the commits until you zero in on the commit that introduced the bug.
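For the record, a bisect session for this bug would have looked something like the following, using the tags mentioned above:

git bisect start
git bisect bad 3.3.2     # a tag where the feed is broken
git bisect good 3.3.0    # a tag where the feed worked
# test the feed on the commit Git checks out, then mark it:
git bisect good          # or: git bisect bad
# repeat until Git names the first bad commit, then clean up:
git bisect reset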

So, luckily, once I remembered that version control is my friend in so many ways, I quickly found the bug, fixed it and rolled out the new hot fix. Our feeds are now working properly and we can share with the world again.

Jul 18 2012
Jul 18

In this video we walk through getting Git version control installed, and then show how to do a few basic things, including how to get a copy of the latest Drupal development code. This video follows the instructions found in the Install Git lesson on learndrupal.org.

This video is installing Git on Windows, because it has the most steps involved. Installation on Mac and Linux is very simple, in that they do not have a wizard to walk through, so they are not demonstrated. All commands used on the command line in the video work on Windows, Mac, and Linux, because Windows is using the Git Bash shell which is part of the Git installation.

Jul 13 2012
Jul 13

Months ago I was searching for a good web front end for git for doing code reviews and browsing repos. My short list ended up being Gitweb and GitLab.

Gitweb is a Perl based web front end for git that is a sub project of the official git project. Out of the box Gitweb is pretty ugly and I have never found it to be very user friendly. Even with all of its problems, it does what it does pretty well.

GitLab is a Rails-based github clone that is being actively developed and shows a lot of promise. If you want a full-blown, ready-to-go basic github clone to set up and run for your dev team, GitLab looks great. Unfortunately, trying to install it on RHEL5 wasn't an easy task. GitLab offered a lot more functionality than I needed, and my lack of Ruby experience meant that it was adding more complexity than I wanted.

I ended up reluctantly recommending that we use Gitweb but I have since changed my mind.

Over the last few weeks I've been playing with GitList by Klaus Silveira. GitList is a web front end for browsing git repos that looks a little like github. Klaus describes GitList as:

GitList is an elegant and modern web interface for interacting with multiple git repositories. It allows you to browse repositories using your favorite browser, viewing files under different revisions, commit history, diffs. It also generates RSS feeds for each repository, allowing you to stay up-to-date with the latest changes anytime, anywhere. GitList was written in PHP, on top of the Silex microframework and powered by the Twig template engine. This means that GitList is easy to install and easy to customize. Also, the GitList gorgeous interface was made possible due to Bootstrap.

It is nice having an open source side project to contribute to that gets me out of the Drupal ghetto. GitList also gives me an excuse to play with Symfony components and twig more, both things that are already in Drupal 8 core. Silex has been on my "to play with list" for some time too.

I have been searching for an alternative to Gitweb that is easy to use, has a nice UI and the basic features I want in a web front end for git - commit history, diffs, blame and branch comparison. GitList ticked all the boxes, except blame was a bit broken and branch comparison wasn't implemented.

I now run GitList on my laptop for offline repo browsing and find it very useful. I will be deploying it for a client in the next few weeks as part of a broader Drupal workflow management system.

This isn't a review of GitList, and I won't be providing any feature comparison of GitList vs (Gitweb|GitLab|github). Instead, this post is inspired by a discussion on twitter between Klaus (@klaussilveira), Fabien Potencier (@fabpot), Peter Droogmans (@attiks) and myself (@skwashd). I want to outline how we are using GitList and the underlying git library, my vision for GitList, and what I can do to make it happen.

Missing Features

Based on the project description I think that GitList is almost feature complete. There are a few TODO items in the README. In my time using GitList I haven't found myself going "if only it did X (and Y and Z)" - except branch diff support. I have been working on a branch diff patch, and I hope to clean up that code this weekend and submit a pull request. I think it is good when something does one thing and does it well.

Library and App Separation

As part of the workflow management system for Drupal we're developing, we needed a solid git library for PHP. The one that ships with GitList is pretty nice. I've been working with my fellow Technocrat developer, James Harvey, on extracting the git library from GitList. We still have some work to do on this, but it is usable today. We have added some enhancements to the library for our purposes and removed the Silex dependency. Our vision is to have a generic OO git library for PHP 5.3+ and to have GitList use it as a submodule.

Although I am currently hosting the code in my github repo, I would love to see Klaus create a new primary repo for the library, with James, myself and others continuing to develop and support it. It is not my intention to fork the library from GitList and maintain it myself in isolation - that is a waste of energy and resources.

Git Module for Drupal

Following on from our creation of a generic git library for PHP, we have created a git wrapper module for Drupal. We plan to release this module in the next week or so on drupal.org - we've already reserved the project namespace. As part of this change the features git module will now depend on the new library wrapper module. We will release other modules that use this library as they pass QA.

Releases

I have a hacker mentality; I am happy to clone a repo, create a branch or tag and start playing. If I find something that is broken, then there is a good chance that I'll fix it and post a patch. At the same time, I understand that not everyone can work that way, and that in some environments there is a focus on only using official stable releases. It is also difficult tracking bugs against git commit hashes. GitList is moving quickly, but I think that it could benefit from having official releases.

If the library and app are separated, they can have separate and independent release cycles. I am happy to work with Klaus to work out a plan for GitList releases. One of my first roles in an open source project was as a release manager.

Discussion Space

I have posted this on my blog because I don't know what the real audience is for this conversation. I know that there are 3 people interested in participating in the conversation but are there others? I know others contribute patches to GitList. I could have emailed Klaus directly, but that would have excluded others who are interested. Even though in some ways email is on the way out, some discussions require more than 140 characters. Is it worth setting up a mailing list, google group, a forum or some other channel of communication to discuss these issues?

The Future

I am already dependent on GitList for a significant piece of work. I want to work with Klaus and the rest of the GitList community to make a kick arse web front end for git, I also need a rock solid PHP lib for git. I think GitLib provides a solid foundation for both. Fabien wants to use it as a good Silex example. What do others want to do with GitList? Let the discussion begin ...

Update: Klaus has asked that we move the discussion to the recently created GitList mailing list/google group, so I have created a new thread to continue this conversation. I can cross an item off my list - discussion space.


Jul 04 2012
Jul 04

As a website is being developed, it is often useful to have a server set up where clients can view the site in development, for project managers to gauge progress, or simply to make sure each commit does not break the site! To achieve this, the server must pull code from the latest release branch (for client viewing) or the development branch (for internal purposes). Having to do this manually each time can be quite a burden! In this tutorial, we will set up a way for the codebase to be deployed to the server automatically on each new commit. Our choice of version control system is Git. If Git is not the version control system of your choice, then even though the code posted here won’t be directly applicable, the ideas behind it should still be useful!

A PHP deployment script is used to automate the deployment process after code changes are pushed up to a repo. The script handles the pull actions for the hosting servers, allowing them to pull down the changes without manual intervention. The keyword here is automation. Automation provides savings in time, as well as prevents careless mistakes or oversights. An increase in both efficiency and reliability? No wonder a quick Google search turns up so many examples.

Today we are going to walk through the creation of a simple deployment script, with some powerful features that could be customized to fit in with your work flow.

The Basics

Here is a layout of a basic deployment script that achieves automated code deployment, with the option of specifying which branch to pull from by supplying the bn argument. Simply place this script into the public folder of a vhost on the same server as where your websites are hosted, and call it with the full path of your target website as the subdirectories. For example, if you placed the script into a vhost named “post-receive.mysrv.com” and your website is hosted in the directory “/var/www/vhosts/mywebsite.mysrv.com/public”, you would call “post-receive.mysrv.com/var/www/vhosts/mywebsite.mysrv.com/public”, which will pull any new updates to your website.

If you find that you keep all your sites in the same vhosts directory, with the same name for the public folder, there is no reason to have to type out the full directory paths every time.

Let’s say you have another website hosted at “/var/www/vhosts/myotherwebsite.mysrv.com/public”, we can specify the default parent path as “/var/www/vhosts/” and the default public folder as “/public”. Now we can call the script for the two different websites by simply typing “post-receive.mysrv.com/mywebsite.mysrv.com”, and “post-receive.mysrv.com/myotherwebsite.mysrv.com”.
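For example, triggered from the command line (hostnames as in the examples above):

curl "http://post-receive.mysrv.com/mywebsite.mysrv.com"                  # pulls master
curl "http://post-receive.mysrv.com/myotherwebsite.mysrv.com?bn=develop"  # pulls the develop branch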

<?php
// request_path() is defined at the bottom
$path = '/' . request_path();

// Edit this string to reflect on the default location of vhost web roots
// do include the trailing slash
// Example: $default_parent_path = '/var/www/vhosts/';
$default_parent_path = '/var/www/vhosts/';

// The name of the public_html directory
// do include the leading slash
// do not include the trailing slash
// Example: $default_public_directory = '/public';
$default_public_directory = '/public';

// Specify which branch by appending a branch name variable 'bn' to the end of the url
// defaults to 'master' if none specified
// Example: http://post-receive.mysrv.com/mywebsite.mysrv.com?bn=develop
$default_pull_branch_name = 'master';
if (empty($_GET['bn'])) {
  $pull_branch_name = $default_pull_branch_name;
}
else {
  $pull_branch_name = $_GET['bn'];
}

// The idea is if only 1 argument is present, treat that as the /var/www/vhosts/<directory_name>
// and if more than 1 argument is given, treat that as the full "cd-able" path
$args = explode('/', $path);

if (count($args) === 1) {
  $working_path = $default_parent_path . $path . $default_public_directory;
}
elseif (count($args) > 1) {
  $working_path = $path;
}

// Do the routine only if the path is good.
// Assumes that origin has already been defined as a remote location.
// We reset the head in order to make it possible to switch to a branch that is behind the latest commits.
if (!empty($working_path) && file_exists($working_path)) {
  $output = shell_exec("cd $working_path; git fetch origin; git reset --hard; git checkout $pull_branch_name; git pull origin $pull_branch_name");
  echo "<pre>$output</pre>";
}

/**
 * Returns the requested url path of the page being viewed.
 *
 * Example:
 * - http://example.com/node/306 returns "node/306".
 *
 * See request_path() in Drupal 7 core api for more details
 */
function request_path() {
  …
}

Here we discuss many different optional features that either add more functionality or improve convenience. The code snippets in each example build upon the previous one and reflect all previous feature additions.

Security key

To make sure your script can only be called by you or those you trust, we are going to add a security key. The security key will be supplied by the user through another URL variable we will call `sk`, and will have to match a pre-set string.

To modify the code, we simply add the `sk` URL variable and check that it matches the security key before continuing. This block of code should go at the very beginning of the page.

// Checks the security key to see if it is correct, if not then quits
// Currently set to static key 'mysecuritykey'
// Example: http://post-receive.mysrv.com/mywebsite.mysrv.com?sk=mysecuritykey
if (empty($_GET['sk'])) {
  header('HTTP/1.1 400 Bad Request', true, 400);
  echo '<pre>No security key supplied</pre>';
  exit;
}
if ($_GET['sk']!='mysecuritykey') {
  header('HTTP/1.1 403 Forbidden', true, 403);
  echo '<pre>Wrong security key supplied</pre>';
  exit;
}

Tags


This is arguably one of the most versatile features we can add to the script. By adding the ability to pull commits based on certain tags, you can adjust the script to fit your workflow. For example, you may want a production server to only pull commits that are tagged with the latest version number. For more information on tags and how they work in Git, there is a nice succinct description in the official Git documentation.

Once again, we had to change the shell commands in order to both retrieve tag information and pull the appropriate commits. You can set up different tag rules by altering the regular expressions and the comparison done between tags; for example, a rule to only pull commits with tags containing the keyword beta. You can also set different rules for different branches with a switch-case structure based on the `bn` URL variable.

// Do the routine only if the path is good
if (!empty($working_path) && file_exists($working_path)) {

  // Fetch and check version numbers from tags
  $preoutput = shell_exec("cd $working_path; git fetch origin; git fetch origin --tags; git tag");
  // Finds an array of major versions by reading a string of numbers that comes after '7.x-'
  preg_match_all('/(?<=(7\.x-))[0-9]+/', $preoutput, $matches_majver);
  // Finds the latest major version by taking the version number with the greatest numerical value
  $majver = max($matches_majver[0]);
  // Finds an array of minor versions by reading a string of numbers that comes after '7.x-{$majver}.'
  // where {$majver} is the latest major version number previously found above
  preg_match_all('/(?<=(7\.x-' . $majver . '.))[0-9]+/', $preoutput, $matches_minver);
  // Finds the latest minor version by taking the version number with the greatest numerical value
  $minver = max($matches_minver[0]);
  // Concaternate version numbers together to form the highest version tag
  $topver = '7.x-' . $majver . '.' . $minver;
  echo "<pre>The latest version detected is version $topver</pre>";

  $output = shell_exec("cd $working_path; git fetch origin; git reset --hard; git checkout $pull_branch_name; git fetch origin $pull_branch_name; git merge tags/$topver;");
  echo "<pre>$output</pre>";
}

Drush


If you’re using Drupal as your CMS, chances are you’re using Drush. Here we will integrate the script with the Drush clear-cache commands. The idea is the same as with the above features: we start by defining a URL variable `cc` as our drush command variable, so the user can execute predetermined drush commands. Clearing the cache cleans out all cached data and forces the website to rebuild itself; it is important after code changes in order for those changes to be reflected on the website, especially in the theme layer.

—Update—
As Dustin pointed out in the comments below, there is often a need to perform a database update, and for those who work with Features as a site-building tool, running a feature revert is a must on any update. The addition of a few new URL variables gives us the option to do so.

if (!empty($_GET['cc'])) {
  switch ($_GET['cc']) {
    case 'all':
      shell_exec("cd $working_path; drush cc all");
      break;
    case 'cssplusjs':
      shell_exec("cd $working_path; drush cc css+js");
      break;
    case 'cssminusjs':
      shell_exec("cd $working_path; drush cc css-js");
      break;
  }
}
if (!empty($_GET['up'])) {
  shell_exec("cd $working_path; drush updatedb -y");
}
if (!empty($_GET['fr'])) {
  shell_exec("cd $working_path; drush fra -y");
}

Github integration


If your repository is hosted on Github, you can make use of their POST callback service by going to Admin > Service Hooks and adding a post-commit webhook with the URL of the script, complete with the security key and any other arguments. Github will then call the script whenever a new commit is pushed to the repo. BitBucket similarly offers a post-commit webhook.

Twitter integration


The Github POST service sends along a payload object with quite a bit of useful information in the POST call to our deployment script. By including the twitter-php library by David Grudl, we can set up a Twitter account to tweet updates with information from the POST payload.

You can find the source files as well as the documentation on how to setup your twitter account here on Github.

To include this in our script we simply add the below block of code, filling in the appropriate values for the keys and tokens:

require_once dirname(__FILE__) . '/twitter-php/twitter.class.php';
// insert appropriate values for the keys and tokens
$consumerKey = 'consumerkeygoeshere';
$consumerSecret = 'consumersecretgoeshere';
$accessToken = 'accesstokengoeshere';
$accessTokenSecret = 'accesstokensecretgoeshere';

// Tweet on success the total number of commits, the latest commit ID, which
// repository and branch received new commits, and who pushed the commit.
if (!empty($_POST) && !empty($_POST['payload'])) {
  $payload = json_decode($_POST['payload']);
  if (!empty($payload)) {
    $twitter = new Twitter($consumerKey, $consumerSecret, $accessToken, $accessTokenSecret);
    $last_commit = end($payload->commits);
    $twitter->send('[' . $payload->repository->name . ':' . $pull_branch_name . '] ' . count($payload->commits) . ' commit(s) deployed. Last commit: ' . $last_commit->id . ' by ' . $last_commit->author->name . ' ' . $last_commit->author->email);
  }
}

Here is the script in its entirety

<?php
require_once dirname(__FILE__) . '/twitter-php/twitter.class.php';
$consumerKey = 'consumerkeygoeshere';
$consumerSecret = 'consumersecretgoeshere';
$accessToken = 'accesstokengoeshere';
$accessTokenSecret = 'accesstokensecretgoeshere';
$path = '/' . request_path();
$default_parent_path = '/var/www/vhosts/';
$default_public_directory = '/public';

if (empty($_GET['sk'])) {
  header('HTTP/1.1 400 Bad Request', true, 400);
  echo '<pre>No security key supplied</pre>';
  exit;
}
if ($_GET['sk']!='mysecuritykey') {
  header('HTTP/1.1 403 Forbidden', true, 403);
  echo '<pre>Wrong security key supplied</pre>';
  exit;
}

$default_pull_branch_name = 'master';
if (empty($_GET['bn'])) {
  $pull_branch_name = $default_pull_branch_name;
}
else {
  $pull_branch_name = $_GET['bn'];
}

$args = explode('/', $path);

if (count($args) === 1) {
  $working_path = $default_parent_path . $path . $default_public_directory;
}
elseif (count($args) > 1) {
  $working_path = $path;
}

if (!empty($working_path) && file_exists($working_path)) {
  $preoutput = shell_exec("cd $working_path; git fetch origin; git fetch origin --tags; git tag");
  preg_match_all('/(?<=(7\.x-))[0-9]+/', $preoutput, $matches_majver);
  $majver = max($matches_majver[0]);
  preg_match_all('/(?<=(7\.x-' . $majver . '.))[0-9]+/', $preoutput, $matches_minver);
  $minver = max($matches_minver[0]);
  $topver = '7.x-' . $majver . '.' . $minver;
  echo "<pre>The latest version detected is version $topver</pre>";

  $output = shell_exec("cd $working_path; git fetch origin; git reset --hard; git checkout $pull_branch_name; git fetch origin $pull_branch_name; git merge tags/$topver;");
  echo "<pre>$output</pre>";
}

if (!empty($_GET['cc'])) {
  switch ($_GET['cc']) {
    case 'all':
      shell_exec("cd $working_path; drush cc all");
      break;
    case 'cssplusjs':
      shell_exec("cd $working_path; drush cc css+js");
      break;
    case 'cssminusjs':
      shell_exec("cd $working_path; drush cc css-js");
      break;
  }
}

if (!empty($_GET['up'])) {
  shell_exec("cd $working_path; drush updatedb -y");
}

if (!empty($_GET['fr'])) {
  shell_exec("cd $working_path; drush fra -y");
}

if (!empty($_POST) && !empty($_POST['payload'])) {
  $payload = json_decode($_POST['payload']);
  if (!empty($payload)) {
    $twitter = new Twitter($consumerKey, $consumerSecret, $accessToken, $accessTokenSecret);
    $last_commit = end($payload->commits);
    $twitter->send('[' . $payload->repository->name . ':' . $pull_branch_name . '] ' . count($payload->commits) . ' commit(s) deployed. Last commit: ' . $last_commit->id . ' by ' . $last_commit->author->name . ' ' . $last_commit->author->email);
  }
}

/**
 * Returns the requested url path of the page being viewed.
 *
 * Example:
 * – http://example.com/node/306 returns "node/306".
 *
 * See request_path() in Drupal 7 core api for more details
 */

function request_path() {
  static $path;

  if (isset($path)) {
    return $path;
  }

  if (isset($_GET['q'])) {
    // This is a request with a ?q=foo/bar query string. $_GET['q'] is
    // overwritten in drupal_path_initialize(), but request_path() is called
    // very early in the bootstrap process, so the original value is saved in
    // $path and returned in later calls.
    $path = $_GET['q'];
  }
  elseif (isset($_SERVER['REQUEST_URI'])) {
    // This request is either a clean URL, or 'index.php', or nonsense.
    // Extract the path from REQUEST_URI.
    $request_path = strtok($_SERVER['REQUEST_URI'], '?');
    $base_path_len = strlen(rtrim(dirname($_SERVER['SCRIPT_NAME']), '\/'));
    // Unescape and strip $base_path prefix, leaving q without a leading slash.
    $path = substr(urldecode($request_path), $base_path_len + 1);
    // If the path equals the script filename, either because 'index.php' was
    // explicitly provided in the URL, or because the server added it to
    // $_SERVER['REQUEST_URI'] even when it wasn't provided in the URL (some
    // versions of Microsoft IIS do this), the front page should be served.
    if ($path == basename($_SERVER['PHP_SELF'])) {
      $path = '';
    }
  }
  else {
    // This is the front page.
    $path = '';
  }

  // Under certain conditions Apache's RewriteRule directive prepends the value
  // assigned to $_GET['q'] with a slash. Moreover we can always have a trailing
  // slash in place, hence we need to normalize $_GET['q'].
  $path = trim($path, '/');

  return $path;
}
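
To trigger a deployment, you point your repository's post-receive webhook (or just a browser) at the script with the right query parameters. As a purely hypothetical example, if the script lives at deploy.php on example.com and the site's docroot is /var/www/vhosts/mysite/public, this URL would merge the latest 7.x release tag into the master branch, clear all caches, and run database updates:

http://example.com/deploy.php?q=mysite&sk=mysecuritykey&bn=master&cc=all&up=1

A GitHub-style post-receive hook POSTs its payload to the same URL, which is what triggers the Twitter notification at the end of the script.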

That should be enough to get you started on a deployment script. Let us know in the comments if you have any questions, or share any ideas you have for improving the script!

Credit goes to Brandon Shi, our senior developer here at ImageX Media, for the ideas behind much of this script.


Jun 18 2012
Jun 18

Drupal development using Git, Sublime Text and Console under Windows

This guide is compiled for Drupal Estonia Hackday where I held a workshop on Drupal development under Windows. For a long-time OSX user it struck me that creating a fully working development environment under Windows is not that hard at all.

Note that this guide only deals with command line and code editing experience, it does not help to set up LAMP stack for Drupal.

Getting started

  1. Install Git http://git-scm.com/download/win
  2. Install Sublime Text http://www.sublimetext.com/2
  3. Install Console http://sourceforge.net/…est/download. Note that you will need to unpack it, copy the "Console2" dir to "C:\Program Files (x86)", and create a shortcut icon yourself.
  4. Install Drush http://drush.ws/…ws_installer

What about Github for Windows?

Freshly baked http://windows.github.com eases some of the pain of setting up Git on Windows and also includes a full Git command line client, but it's best for developing for Github only. Non-Github remote repositories are supported, but they are a bit buggy, at least in the initial version. At the time of writing only cloning works nicely; building a local repo and assigning a remote are not properly recognized and need extra steps. Also, re-using Github SSH keys requires command line knowledge, and the autogenerated dotfiles (.gitignore etc.) might confuse newcomers. Still, this pretty Github application shows a lot of potential for the future, especially for visual merging.

Setting command line alias for Sublime Text

  1. Run "Git Bash"
  2. Enter the following commands:
echo 'alias sub="/c/Program\ Files/Sublime\ Text\ 2/sublime_text.exe"' >> ~/.profile
source ~/.profile

Now you can enter sub . in any directory in the command line to run Sublime Text.

Also, you might want to configure Sublime Text for a more predictable startup experience. Go to Preferences -> Settings – User and set the following settings to false:

"hot_exit": false
"remember_open_files": false

Setting Git default editor to Sublime Text

To save yourself from being thrown into Vim when committing without the -m option, it's wise to set Git to throw you into Sublime Text instead.

git config --global core.editor "'C:/Program Files/Sublime Text 2/sublime_text.exe'"

Set up Console to run Git shell

  1. Run Console
  2. Go to "Edit -> Settings"
  3. Fill "Shell" field with
C:\Windows\SysWOW64\cmd.exe /c ""C:\Program Files (x86)\Git\bin\sh.exe" --login -i"

  4. Fill "Startup dir" field with your working directory, usually something like C:\Users\your_username\your_working_directory

See also http://lostechies.com/…into-console

Set up Git config

git config --global user.name "my fullname goes here"
git config --global user.email my_email_goes_here

Note that you must fill in my fullname goes here and my_email_goes_here. The e-mail must match the one you used when registering on Drupal.org.
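
For example, with made-up values (use your own name and the e-mail attached to your Drupal.org account):

git config --global user.name "Jane Smith"
git config --global user.email [email protected]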

There are many additional tricks to set up a more workable Git config, for examples see here: http://coderwall.com/p/euwpig?…

Generate SSH Keys

First, make sure you have a Drupal.org account http://drupal.org/user/register and that you have applied for Git access http://drupal.org/node/1047190. You might also want a Github account https://github.com/signup/free

ssh-keygen -t rsa -C "my_email_goes_here"
sub ~/.ssh/id_rsa.pub

Note that you must fill in my_email_goes_here. Actually, it does not have to be an e-mail address; it can be any string, but using your e-mail is the convention.

Upload SSH keys

Then copy all the contents of the ~/.ssh/id_rsa.pub file to:

…to Drupal.org:

http://drupal.org/user -> Version control -> Profile -> SSH Keys (help here).

…to Github:

https://github.com/settings/ssh (help here).

Finally, let's get started with Git

Creating new repo

mkdir test
cd test
git init
echo "About this project" &gt; README.txt
git add .
git commit -m "Initial commit"

Push repo to Drupal

First create a new sandbox project at http://drupal.org/…ject-project. When done, navigate to the Version control tab and run

git remote add origin [email protected]:sandbox/your_drupalname/your_sandbox_id.git
git push origin master

Note that you must fill in your_drupalname and your_sandbox_id.

…or push repo to Github

Go to https://github.com/new and ignore the "Initialize this repository with a README" setting.

git remote add origin [email protected]:your_username/your_projectname.git
git push origin master

Note that you must fill in your_username and your_projectname.

Git and Drupal development

Apply a patch with three lines

git clone &lt;a href="http://git.drupal.org/project/drupal.git" title="http://git.drupal.org/project/drupal.git"&gt;http://git.drupal.org/project/drupal.git&lt;/a&gt;
cd drupal
curl link_to_the_patch | git apply

Note that you must fill in link_to_the_patch with an actual URL.

Roll back in one line:

git reset --hard

Create a patch for Drupal:

There is a comprehensive manual for creating Drupal patches http://drupal.org/…instructions but in its most simple form it goes like this:

git clone &lt;a href="http://git.drupal.org/project/drupal.git" title="http://git.drupal.org/project/drupal.git"&gt;http://git.drupal.org/project/drupal.git&lt;/a&gt;
cd drupal
sub .
...edit something and save...
git diff

This gives you an overview of what you have changed. To save the patch to a file, run:

git diff &gt; ~/my_patches_dir/my_patch_name.patch

More info about Git

http://git-scm.com

http://git-scm.com/videos

http://marketplace.tutsplus.com/…eview/135990

http://www.ralfebert.de/…t_screencast

May 19 2012
May 19

VirtualBox, Vagrant, Ruby, and Git have all gone through upgrades (major and minor) since I wrote the chapter on installing them in Multi-Site Drupal. While this hasn't made much of a difference for Mac and Linux/UNIX, Windows 7 support is now much better.

Here is a guide for installing the Multi-Site vagrant image for Drupal 7 on Windows 7. (If you're interested in running Drupal Vagrant, these instructions will work for that project as well.)

Intro

When I first wrote Multi-Site Drupal, Windows 7 support for VirtualBox, Ruby, and Vagrant was still in flux. Now, several months later, the landscape has solidified. Here are updated instructions for installing on Windows 7.

Install Some Packages

To work with the Vagrant profile, you will need to download the following packages:

  • Git: Get one of the Windows installers.
  • VirtualBox: Get the Windows installer.
  • Vagrant: Get the MSI package.

It is no longer the case that you need to install Ruby (or jRuby). That is all taken care of by Vagrant.

Each of these has a Windows installer. Run each installer. You will probably also want to set up PuTTY or some other SSH client if you plan on SSH'ing into the VM's command line.

Use Git

Open a Command prompt and go to whatever directory you normally use to store projects of this sort.

Once you're there, clone the profile repository:
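
(The exact clone URL is on the project page; it will be something along these lines, with the host and account assumed:)

C:\Somewhere> git clone git://github.com/USERNAME/multisite_drupal_vagrant_profile.git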

That will pull the MultiSite Vagrant profile for the book. (Check out this page for an explanation of that project.) You could also start from the basic Drupal Vagrant project if you just want to get a Drupal site going and aren't interested in following along with the book.

Vagrant Up

Now go into the new project directory and start Vagrant:

C:\Somewhere> cd multisite_drupal_vagrant_profile
C:\Somewhere\multisite_drupal_vagrant_profile> vagrant up

After a while (perhaps 30 minutes), you will be notified that the build is complete.

Back to the Book

This should get you running as described in the book. Make sure you follow the book's suggestions on setting an entry in your hosts file and so on.
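
As a reminder, a hosts entry is a single line in C:\Windows\System32\drivers\etc\hosts mapping the VM's IP to the hostname the book uses; the values below are placeholders, so substitute the ones from your Vagrantfile and the book:

33.33.33.10   multisite.local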

May 08 2012
May 08


Great feedback from my "Tinkertoy Git" Tampa meetup and DrupalCamp Nashville presentation has inspired me to make it bigger. Much bigger. The expanded full-day "Blue Collar Git" workshop covers not only the basics of the distributed version control system, but also delves into remote repositories, resolving conflicts, and working with patches. It will be part presentation, part hands-on, with the goal of empowering participants with the knowledge and confidence to start leveraging Git for their every day workflow.

Andrew Riley from Mediacurrent and I are teaming up to offer the first Blue Collar Git workshop on Friday, June 8 as part of DrupalCamp Charlotte (also part of the Southeast LinuxFest). The cost is only $149 for the full day if you register during the month of May. Seats are limited.

As background, the genesis for the presentation and workshop came from a video of a 2010 Open Source Developers Conference session titled "Git for Ages 4 and Up" by Michael Schwern. His use of Tinkertoys really helped me solidify my knowledge of Git, and motivated me to teach Git to people using a similar method.

Like most Drupal developers, I made the switch to Git shortly after the Drupal project moved to Git (early 2011). After years of struggling with both CVS and SVN, I decided to switch all of our current and future projects to Git and haven't looked back. As part of the process, I read numerous Git-related books and blogs (here's a partial list), and even hired a Git expert to assist (and teach me) during a particularly tricky SVN-to-Git migration for a client. Without a doubt, the decision to move to Git has streamlined our processes and has made me a better developer.

I'd love to get some feedback on what resources or learning techniques you used to learn Git so that we can share that with our students. What flipped the switch on your Git lightbulb? What one Git resource can you not live without? What is your favorite feature of Git? Let me know in the comments below!


May 03 2012
May 03

At Pantheon we do all our code management via git. This lets us track operations, correct mistakes, and allows a large amount of flexibility for workflows. By tracking an upstream for updates, we can maintain core patches for people while providing robust automated update functionality.

One place we do run into trouble is when there's a mix of management techniques used. If people unpack a tarball from drupal.org and check it into git (or upload it over SFTP using On Server Development), they may break their site: they'll be losing the Pressflow core and its ability to read configuration (e.g. mysql connection data) from the server environment.

Luckily, fixing that can be as easy as restoring bootstrap.inc, but the legacy of adding a tarball can have other after-effects. Notably, a tarball from drupal.org contains .info files that have automatic packaging script data appended to them, like so:

; Information added by drupal.org packaging script on 2012-05-02
version = "7.14"
project = "drupal"
datestamp = "1335997555"

Drupal's core status checking code prefers this .info file data to all other sources when trying to determine its version. Moreover, the git history doesn't have any record of this, so if you wind up with legacy packaging info in your repository because a tarball was in the mix at some point, updating via git (as Pantheon does) will leave you with updated code, but a Drupal installation that misreports its own version based on outdated packaging information.

This is annoying (and common) enough that we wrote a quick PHP script to help clean it up. If you want to remove this leftover release packaging text from your core files, you can run it on an up-to-date git clone of your codebase.
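
The actual script is in the gist linked below, but as a rough sketch of the idea (an illustration, not Pantheon's code; the file patterns are assumptions), the cleanup amounts to stripping the appended block from every .info file:

<?php
// Illustrative sketch only; the real cleanup script lives in the gist below.
// Run from the root of a git clone of your Drupal codebase.
$files = array_merge(
  glob('modules/*/*.info'),
  glob('themes/*/*.info'),
  glob('profiles/*/*.info')
);
foreach ($files as $file) {
  $contents = file_get_contents($file);
  // Strip everything from the packaging comment to the end of the file.
  $cleaned = preg_replace('/^; Information added by drupal\.org packaging script.*\z/ms', '', $contents);
  if ($cleaned !== $contents) {
    file_put_contents($file, rtrim($cleaned) . "\n");
    echo "Cleaned $file\n";
  }
}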

You can snag the script (or suggest improvements!) from the GitHub gist.

Pantheon's code import tools don't suffer from any of these problems — we run a heuristic to determine the imported major/minor version, rebase on Pantheon's upstream, and then bring the site up to date. This situation still arises whenever core updates come out though, or as the result of some complex or work-around-y manual imports.

Hopefully this helps some of our users, or other Drupal developers looking to move to a purely git-based workflow!


Feb 18 2012
Feb 18

I migrated my SVN repository to Git and moved the XBBCode project to Drupal.org yesterday. The git migration should make it much easier to manage multiple branches for different Drupal core versions, and also make it easier for others to contribute in case anyone is interested.

In other news, I implemented an idea from my last post, and made the nocode traversal algorithm much simpler and more efficient. As with the original pairing algorithm, I don't know what I was thinking two years ago.

There are now two branches, for 7.x-1.x and 8.x-1.x, with the only difference in core compatibility. If it is similarly easy to port the current module back to 6.x, I could try that as well.

Edit: Actually, on reexamination, I moved it to Drupal.org rather than GitHub. The reasons this project stayed out of d.o five years ago aren't really applicable anymore, since the module is now mature and the modules it appeared to duplicate are no longer under active development.

Feb 07 2012
Feb 07

Quite some time ago I wrote a post about how patching makes you feel good in which I talked about the motivations for, and benefits of submitting patches on drupal.org (d.o). I concluded by suggesting that project maintainers should be generous in recognising the efforts of those who submit patches.

Well, now that d.o has its magnificent git infrastructure, project maintainers have even better tools for giving credit to contributors who help fix or improve the code. There is still the well-established convention for commit messages which encourages that "others [who] have contributed to the change you are committing" are credited by name. e.g.

Issue #123456 by dww, Dries: Added project release tracking.

Similar messages are often added to the project's changelog too.

The new tool that perhaps not everyone knows about yet is the ability to assign the authorship of the commit to another user e.g.

git commit --author="[username] <[username]@[uid].no-reply.drupal.org>"

This is appropriate when committing a patch that is entirely somebody else's work. Perhaps some maintainers will be generous and attribute authorship even if they've had to make a few tweaks to the patch, but somebody else did the majority of the work to identify and fix a bug, for example.
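
For example, committing a patch written entirely by a hypothetical user janedev with uid 123456 (both made up) would look like:

git commit --author="janedev <[email protected]>" -m "Issue #987654 by janedev: Fixed release tracking."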

When a user is credited as the author in this way, the commit will show up on their drupal.org profile page, which I think many people will feel is a great reward for the time they spent putting a patch together.

There are however some limitations and drawbacks to the system. The committer of the patch is not rewarded by seeing their commit count incremented, which some may find a disincentive for generosity in attributing authorship.

Where the maintainer might split the credit in a commit message for a fix where a user was helpful by giving a detailed bug report in the issue queue, but where they themselves had to actually fix the problem, for example, they're probably justified in leaving themselves as the author of the commit. Of course they can still mention the helpful user in the commit message and changelog.

There will surely also be less black-and-white cases where authorship of the code being committed should be split between multiple users. As far as I'm aware, the git infrastructure on d.o doesn't cater for this situation, and messy workarounds such as breaking the commit up to split authorship have been suggested.

There are undoubtedly some limitations, and project maintainers will occasionally find themselves with tricky decisions to make. However, for the reasons I detailed in my patching makes you feel good post, I really encourage maintainers to be generous with the credit when it comes to patches which have been submitted in issue queues, and the option to set an author for a commit in git is a great way of doing so.

Jan 10 2012
Jan 10

In a previous post, I showed an example of cloning a module that included this command: git clone drupal://drupalcs. But I neglected to explain how this worked. I'm not sure where I picked this up (it was probably from Sam Boyer), but adding a few lines to your ~/.gitconfig makes checking out Drupal projects and sandboxes easier:

[url "ssh://[email protected]/project/"]
    insteadOf = "drupal:"
[url "ssh://[email protected]/sandbox/"]
    insteadOf = "drupalsand:"

This allows you to use drupal://PROJECT_NAME to identify a project (module, theme) git repository, and drupalsand://USER/NID to check out a sandbox. For example, I can clone one of my sandbox projects with this command, executed at the command line:

$ git clone drupalsand://mbutcher/1356522
Cloning into 1356522...
remote: Counting objects: 988, done.
remote: Compressing objects: 100% (466/466), done.
remote: Total 988 (delta 463), reused 878 (delta 404)
Receiving objects: 100% (988/988), 242.04 KiB | 354 KiB/s, done.
Resolving deltas: 100% (463/463), done.
warning: remote HEAD refers to nonexistent ref, unable to checkout.

Perhaps this sort of syntactic sugar isn't for everyone, but I find it to be a nice configuration short-cut.

Of course, there's no reason you need to restrict this to Drupal. If you frequently use other Git remote repositories, a similar convention can be used for generating shortcuts. I have such shortcuts for HP's internal repos. And it would be easy enough to create a scheme like that for GitHub or other hosted repo sites.
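
For instance, a GitHub shortcut might look like this in your ~/.gitconfig (the gh: prefix is an arbitrary choice):

[url "[email protected]:"]
    insteadOf = "gh:"

With that in place, git clone gh:username/repo.git expands to git clone [email protected]:username/repo.git.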

If you're the one who came up with this scheme, let me know. I feel bad for not being able to give due credit.

Sep 20 2011
Sep 20

Connecting a Github private repository to a private instance of Jenkins can be tricky. In this post I’ll show you how to configure both services so that pushes to your Github repository will automatically trigger builds in Jenkins, all while keeping both safely hidden from the general public.

Step 1: Grant your server access to your private Github repository.

You may have Jenkins running on the same machine as your webhost, or they may be on separate machines with Jenkins configured to treat the webhost as a remote node. Either way, you’re going to want to SSH into the webhost and ensure that whichever Linux user Jenkins builds jobs as can authenticate to Github. We have a robot user called ‘Bender’ (yeah, from Futurama) exactly for this purpose, so I’ll use that in the examples.

Instead of installing your own private key to the Bender account, create a new private/public key pair, and then either create a Github user for Bender or use the Github deploy keys feature. Follow those links for the excellent guides from Github.

There are pros and cons to each approach which are discussed on the deploy-keys help page, but if you have multiple private repositories and don’t want a separate key for each, rather create a Github user for Bender.
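
As a sketch, generating the new key pair on the webhost might look like this (the account name and key comment are assumptions carried over from the example above):

sudo -H -u bender ssh-keygen -t rsa -C "[email protected]"

The public half lands in ~bender/.ssh/id_rsa.pub, and that is what you paste into Github.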

Don’t proceed until you get the message “You’ve successfully authenticated” when executing ssh [email protected] as Bender.

Step 2: Install the Git and Github plugins.

Under ‘Manage Jenkins’ -> ‘Manage Plugins’, select and install both Github and Git plugins. Restart to finish the installation.

Configure both of these at ‘Manage Jenkins’ -> ‘Configure System’. Make sure the path to git is correctly set, and choose ‘Manually manage hook URLs’ under the ‘Github Web Hook’ section.

Step 3: Configure a Jenkins job to use your repository.

The interface for configuring a job is peppered with references to Github, so it can be confusing.

Firstly, add the https:// URL for your repository in the ‘GitHub project’ textfield under the general settings.

Then you’ll need to enable Git under ‘Source Code Management’. Use the SSH style syntax for the repository URL: [email protected]:user/repo.git (this is required as it’s a private repo), and specify a branch if needed. The advanced settings are optional.

Under ‘Build Triggers’, tick ‘Build when a change is pushed to Github’.

Save and build your job. You should get a successful build that correctly clones the repository to the webhost. Confirm by SSH’ing in and inspecting it.

Step 4: Grant Github access to your private Jenkins instance.

Unfortunately, this step will require you to store a plain-text user/password combination on Github, unless you’re using the Github OAuth plugin (see below). The good news is that you can lock down the user pretty tightly, so that in the event of a security breach on Github, an attacker would not be able to do anything more malicious than build your project and view previous builds.

There are a number of different authentication options in the ‘Security Realm’ section of ‘Manage Jenkins’ -> ‘Configure System’. Depending on your setup, these steps could differ, but in essence you need to create a new user for Github (I’ll just assume you used the username ‘Github’). If you’re using ‘Unix user/group database’ method, be sure to lock that new user down by restricting the shell so that SSH sessions are denied.

If you’re using the Github OAuth plugin for Jenkins to tightly tie your access to Github accounts, you can just tick the option to allow access to the POST webhook URL. However, this option is only available when using Github as the authentication server. I won’t go into detail but this allows you to skip this step entirely, as it allows anonymous access to the URL.

In the ‘Authorization’ section, choose ‘Project-based Matrix Authorization Strategy’, so that you can give project-level permissions to the Github user. You’ll probably deny access to everything for anonymous users, then grant just one permission here for Github: ‘Overall/Read’.

In the configuration for the job that will be automatically triggered, tick ‘Enable project-based security’ and then grant ‘Job/Read’ and ‘Job/Build’ to Github.

Test the Github user by logging into Jenkins with its credentials and ensuring that you can only see and build the relevant job.

Step 5: Add the hooks to Github.

Click the ‘Admin’ button on the main page of your private repository in Github. Under the ‘Service hooks’ -> ‘Post-Receive URLs’ section, add the URL of your Jenkins webhook, including the credentials of the Github user you created. For example:

https://USERNAME:[email protected]/github-webhook/

It’s great that Jenkins supports authentication in this format. You can test the hook from here, and confirm that there is a result under the ‘Github Hook Log’ section of your Jenkins project (it’s in the left column).

That’s it! Push some code to your repository and your project will gracefully begin building. As an added bonus, you get great information from the Github plugin, such as links to the diff for each build shown on the build page itself.

Jul 11 2011
Jul 11

Everyone does virtual differently. Having worked in a bricks-and-mortar office previously, I've found that setting up a virtual work system has been a mix of experimentation and solutions recommended by others. Here are some of the ways we manage as a virtual company.

Client communications

Email is a place to lose things. We don't depend upon email for critical project communications. We have our own Open Atrium intranet set up for all of our clients. Here we post research documents, wireframes, flow charts, use cases, design mockups, questions, general messages, and the ever essential support tickets. Our clients can post, comment on and otherwise interact with all of these items, and all the archives are there for review later. No spam filters or non-inbox-zero obstacles!

Team communications

What can I say? We like Skype. It's stable, it's pretty much peer-to-peer, it is very good at drilling through firewalls, it's encrypted, it stores archives, we can do conference calls, share screens, etc. That said, we are eyeing other possible solutions, including google+. The main need is to have robust real-time chatrooms, with encryption, and easy voice communications.

Ticket/time tracking

Like most design and development companies, we've been pretty frustrated with a lot of the project management software solutions out there. We use Liquid Planner (affiliate link), which provides ticket-level time tracking, high/low estimation, schedule and cost projections, and some nice graphs. And unlike some other systems, we've had no problems with performance slowdowns — it's a fairly consistently moderately fast web app, given its complexity. Having tried Basecamp, Harvest, Unfuddle, Freshbooks, ActiveCollab, Trac, Jira, Rally and several desktop apps over the years, I think Liquid Planner is the closest thing to the bee's knees.

Design process

We meet with the client if at all possible. We talk. We draw pictures. We point at things. We work to understand the client as best we can. This means travel, if there's room in the budget. It pays off in the end through implementation of a project better aligned with the client's needs. When not in person, we communicate through Atrium.

As for amongst ourselves, we use Dropbox (affiliate link for extra GBs of space) as an easy way to share physical files that are not suited for version control. And we talk in Skype. Sometimes we'll do mark-up on artwork or wireframes, but most of the creative collaboration happens via voice. It's somehow more human that way.

Code repository

Like many, we use GitHub. Can I just take a moment here to say how much I love Git? Escaping from Subversion was transformative. (Drupal.org's escaping from CVS will be even more so.) The biggest problem with svn is all those hidden folders. They're like pox riddled throughout your code, buried in each folder, lurking. Replace a folder with a fresh one and svn freaks out, "What did you do with my hidden folder, you malfeasant!" Git doesn't do this. Git is nice. Swap out things in your code and Git doesn't mind. Git loves it when you do things like that. Go ahead, swap out a folder, swap out 10 folders! Tell Git and Git says, "Got it!" Shiny!

Development and staging

Configure globally, code locally. As soon as we start the first development sprint for a project, we set up a staging server on a VPS. Right now we are really liking Linode (affiliate link) for the smaller sites and AWS for the bigger ones requiring more flexibility on resources. Each of us codes locally, on our own machines, merging our code changes with Git. We deploy committed changes several times a day to the staging server using Git pulls from GitHub, which serves as our canonical central repository. Configuration changes, with the exception of Featurized configurations, are all done on the staging site, so that we don't have to mess with database conflicts. (Bricks-and-mortar teams can all connect to one canonical database on the same network, but that just won't work for virtual teams. The latency is not just fatal, it's fatal on a mass extinction scale.) We periodically pull database backups from the staging site to use locally, so we're all referencing the same basic configurations.

The exceptions to this relate to Features and exportables. We love Features, but find that they can get in the way at times. We use them for special cases, especially when doing iterative development, where it's nice to be able to deploy code and configuration changes through version control. On the other hand, exportables (e.g., Views and Contexts) are nice ways of moving sometimes extensive configuration work from one machine to another.

Sales pipeline

Here I'm afraid we're amateurs. We have our inquiry form. We have phones. We have email. We do things the old fashioned way. That means that, no doubt, our sales process is probably woefully inefficient at times. We're not huge, not even big enough to have a dedicated sales person, so a robust CRM really does not seem to make sense — in fact, it would probably just make everything harder to manage, spending all our time trying to get the CRM to record things in ways that are findable. If you fill out our inquiry form, or call us, or email us, it's we at the other end who are fielding your inquiry, no salespeople or account managers. Consider it the personal touch.

Drupal community communications

Anyone in the community can tell you, the ways to communicate are: on the *.drupal.org websites, in IRC, and on the email lists.

And at Drupal events!

[Image: Blue Angels in formation] Smooth coordination can be a challenge for any team. The Blue Angels set the standard. Photo by Joshua Davis Photography (Creative Commons).

The common thread through all of this is communication. It's not everything, but it's the most important thing.

Jul 07 2011
Jul 07

Occasionally, even experienced Drupal administrators make a mistake and need to revert their latest database changes. Nobody is that good at Drupal, and in some cases it’s not really anyone’s fault: things just go wrong. It is always best to back up your Drupal website database before and after making any changes to your Drupal site, like Drupal core updates or Drupal module updates. It is also wise to make weekly or daily database backups if your site receives or adds content often. Having a recent Drupal database backup will save you time and reduce your stress level if you need to rebuild or restore your Drupal website for any reason.

Believe me, it happens to the best of us. Don’t think you are the exception to this rule for any reason. You may follow all the rules and all of Drupal’s best practices, but you still may end up blowing up your site at some point. This may scare some of you off, but there is hope. The Backup and Migrate module makes it easy for any Drupal administrator, experienced or not, to backup and restore their Drupal database within their Drupal website. Out of the box, the Backup and Migrate module is a great addition to any Drupal administrator’s toolbox. It has some fairly good default settings, but has some great configuration options that really make it the best Drupal module for backup and restoration of Drupal databases.

The Backup and Migrate module allows you to make manual backups on the fly, or you can schedule MySQL backups for your Drupal site and send them via email or FTP, or save them locally to another location on your server. Setting up scheduled backups is probably the best method to make sure your site is backed up regularly, so that you won’t forget to back the site up. Each schedule is independent, so you can set up multiple types of scheduled backups for your Drupal website. I have set up daily, weekly, and monthly backups for different types of backup scenarios. I would recommend at least a monthly backup so that you don’t lose much data from your site if you need to restore.

Aside from backing up your Drupal database, you will also want to have a fresh copy of your files directory. Luckily, there is now a Backup and Migrate Files module, which will allow you to back up your files directory similar to how you back up your site with the Backup and Migrate module. The module is still in development, but it is a promising new module that will continue to ease the administration of backing up and restoring files directories within Drupal websites.

Backup and Migrate doesn’t solve every problem for backing up and restoring a full Drupal website, though. You also need the Drupal core files, modules, themes, libraries, and any other files that you need for your Drupal installation to run properly. This should be handled by a version control system such as Git, Subversion (SVN), or Mercurial. Git is really the best choice for version control when it comes to Drupal at this point, as the Drupal.org website has moved from CVS, an old deprecated version control system, to Git. Git will make it easy to download and possibly contribute code within the Drupal repository, as well as within your own projects. Sites such as Github and Unfuddle provide free Git repository hosting if you are looking to work on projects outside of Drupal.org. Keeping your code in version control will save you a ton of time and effort if you need to revert your code for any reason. It will also allow you to collaborate with multiple people on a single project without stepping on each other’s toes and overwriting someone’s changes.

This post was more of a wake-up call than a how-to on backing up your Drupal website. We were more interested in getting the word out that it’s not that hard to back up your Drupal website’s files directory, database, and installation files. It’s better to know now than to find out next week when it’s too late. There is enough documentation out there to get you going in the right direction for now. Stay tuned for a follow-up post on how to use the Backup and Migrate module to back up your Drupal database and then restore it if needed.

Jun 28 2011
Jun 28

At drupal7camp in Leeds I presented about how we develop and deploy our sites using Git and various Drupal tools.

I talked about the pros and cons of using Features and various different tips and tricks that we find really useful when developing.

Here is a copy of my presentation uploaded to Slideshare.

Overall the weekend was a huge success with a good attendance and session list. Thanks to all those involved in making it a great get together!


About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
