Jan 15 2016

This is the first time that I am working with automated tests. I have written tests before, and of course I believe that tests improve projects dramatically, but I have never had (/made) the time in a project to actually do it. So I was quite excited to get started with Travis.

First things first, what do you test!?

I was terribly excited to start working with tests, but yesterday we didn't actually have any code to test against yet, so instead we decided we actually want to run the Drupal core tests. We want our module to be enabled, and to know if any of our code causes a Drupal core regression, so I started with that.

Trial and Error (a lot)...

Travis build history logs showing eventually passing tests.

The documentation is pretty good and there are LOADS of example .travis.yml files floating around, so I started by just getting an active environment build working and focused on getting PHPUnit tests running first. I only wanted a couple of tests to run, and the main thing I cared about was that Drupal installed properly and that the environment was up and running. Eventually I got a passing build with the following .travis.yml.
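A sketch of that first passing configuration (the PHP version, Drush commands and exact paths here are illustrative, not the verbatim file):

    language: php
    php:
      - 5.5

    before_script:
      # Create a database for Drupal to install into.
      - mysql -e 'CREATE DATABASE drupal;'
      # Install Drush and put Composer's global bin directory on the PATH.
      - composer global require drush/drush:~8
      - export PATH="$HOME/.composer/vendor/bin:$PATH"
      # Download Drupal 8 core and install a site.
      - drush dl drupal-8 --drupal-project-rename=drupal
      - cd drupal
      - drush site-install standard -y --db-url=mysql://root:@127.0.0.1/drupal
      # Link the module under test into the Drupal tree and enable it.
      - ln -s $TRAVIS_BUILD_DIR modules/decoupled_auth
      - drush en -y decoupled_auth

    script:
      # Run the module's PHPUnit tests from the core directory.
      - cd "$TRAVIS_BUILD_DIR/drupal/core" && ../vendor/bin/phpunit --group decoupled_auth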

While I was really quite happy with this, I wanted more. Simpletest runs to say the least, and maybe even Behat tests. I found a lot of inspiration looking at BartFeenstra's (Xano on d.o) Currency module: https://github.com/bartfeenstra/drupal-currency/blob/8.x-3.x/.travis.yml
Most importantly, I found Drupal TI - https://github.com/LionsAd/drupal_ti

Drupal TI

This project makes it almost trivial to integrate Drupal projects with Travis. It handles a lot of the setup in terms of MySQL, the Drupal download, and how to start running tests. It managed to reduce my .travis.yml to the following:

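(Reconstructed from Drupal TI's example template; the environment values for our module are illustrative rather than the exact file.)

    language: php
    php:
      - 5.5

    env:
      global:
        # Tell Drupal TI which environment, module and test group to use.
        - DRUPAL_TI_ENVIRONMENT="drupal-8"
        - DRUPAL_TI_MODULE_NAME="decoupled_auth"
        - DRUPAL_TI_SIMPLETEST_GROUP="decoupled_auth"
      matrix:
        # Each runner becomes a separate build in the Travis matrix.
        - DRUPAL_TI_RUNNERS="phpunit"
        - DRUPAL_TI_RUNNERS="simpletest"

    before_install:
      - composer global require "lionsad/drupal_ti:dev-master"
      - export PATH="$HOME/.composer/vendor/bin:$PATH"
      - drupal-ti before_install

    install:
      - drupal-ti install

    before_script:
      - drupal-ti before_script

    script:
      - drupal-ti script

    after_script:
      - drupal-ti after_script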
This is quite a bit of code, but it is really clean and well commented so we can clearly see what is going on.

When to test?

So now there are tests... Great! But when do we run them? Running all of Drupal's tests can take quite some time, but we do want a check on development to catch things. In short, we need testing on pull requests between individual development repos (e.g. https://github.com/yanniboi/decoupled_auth) and the main project repo (https://github.com/FreelyGive/decoupled_auth), but when we are doing our day-to-day development we only really care about project-specific tests.

Seeing tests on GitHub

When I create a pull request now, I automatically have the tests run and the results on the pull request:

GitHub pull request with Travis test results.

GitHub pull request test results in Travis CI.

Eventually what we really want is a way of checking inside our .travis.yml whether the build was started by a pull request or by a code push/merge, and to run different parameters depending on that. But more about that next time. There will be a blog post on that soon...
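The hook for this already exists: Travis sets the documented TRAVIS_PULL_REQUEST environment variable to "false" for plain pushes and to the pull request number for PR builds. A minimal sketch (the two test scripts are placeholders for whatever each mode should run):

    script:
      # PR builds run the full suite; plain pushes run only project-specific tests.
      - if [ "$TRAVIS_PULL_REQUEST" = "false" ]; then
          ./scripts/run-project-tests.sh;
        else
          ./scripts/run-full-tests.sh;
        fi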

In the meantime, use Travis and write tests :)

Jan 13 2016

Developer experience is our primary concern with this Drupal 8 version of doing CRM.

We thought we could improve the experience of developers contributing to the project. We noticed that for Drupal 8 all the cool kids were moving their development to GitHub, as with Drupal Commerce, and even core bits of Drupal.

So we did some investigating and decided to join them. We thought it would be helpful to share a couple of our thoughts and reasons; we are by no means authorities on this!

Getting Started

Being able to work with GitHub is really nice. Someone can come along to GitHub and easily fork the main repository, which is possible on Drupal.org but much easier on GitHub.

No Special Users

We have a principle that “no individual is special”. On Drupal.org the module maintainers get access to more tools than everyone else. On GitHub everyone is basically the same. In theory someone’s fork may become a bigger deal than the original. This means everyone has the same tools, so whatever we do to make our developers' lives easier, everyone else gets to share.

We found that when some developers were maintainers and had access to Drupal.org’s Git they had a much nicer experience than the people who had to just download the source code or set up their own Git workflows.

Pull Requests

Pull requests are really nice. We think pull requests are pretty much a nicer way of doing patches: you can just click a few buttons and copy and paste a link into the issue queue. With Dreditor patches are not a big deal, but GitHub keeps track of minor changes to a patch much more effectively, especially if multiple people are working on it.

  • Although it does require giving others access to your fork of a project, so we have found that sometimes patches are easier.
  • If multiple people are working on a pull request, they can fork the pull request owner’s repository and open a pull request against that first!

Drupal.org

We definitely still use Drupal.org as the issue queue and turn off all of GitHub’s issue tracking features. We then reference issue numbers in as many commits as possible, and certainly in all pull requests (we post pull requests in their issue).

One of the committers can, every so often, push the “main repository” (or any repository) to the Git repo on Drupal.org.
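In practice that push is trivial once Drupal.org is configured as a second remote. A sketch, assuming the usual drupal.org maintainer URL pattern; the project name and branch are illustrative:

    # One-time setup: add drupal.org as a second remote.
    git remote add drupal [email protected]:project/decoupled_auth.git
    # Periodically mirror the main GitHub branch to drupal.org.
    git push drupal 8.x-1.x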

Travis CI

We also use Travis CI to handle tests and will follow up with a more detailed post about how we handle testing.

Feb 26 2015

This is the third in a series of blog posts about the relationship between Drupal and Backdrop CMS, a recently-released fork of Drupal. The goal of the series is to explain how a module (or theme) developer can take a Drupal project they currently maintain and support it for Backdrop as well, while keeping duplicate work to a minimum.

  • In part 1, I introduced the series and showed how for some modules, the exact same code can be used with both Drupal and Backdrop.
  • In part 2, I showed what to do when you want to port a Drupal module to a separate Backdrop version and get it up and running on GitHub.
  • In part 3 (this post), I'll wrap up the series by explaining how to link the Backdrop module to the Drupal.org version and maintain them simultaneously.

Linking the Backdrop Module to the Drupal.org Version and Maintaining Them Simultaneously

In part 2 I took a small Drupal module that I maintain (User Cancel Password Confirm) and ported it to Backdrop. In the end, I wound up with two codebases for the same module, one on Drupal.org for Drupal 7, and one on GitHub for Backdrop.

However, the two codebases are extremely similar. When I fix a bug or add a feature to the Drupal 7 version, it's very likely that I'll want to make the exact same change (or at least an extremely similar one) to the Backdrop version. Wouldn't it be nice if there were a way to pull in changes automatically without having to do everything twice manually?

If you're a fairly experienced Git user, you might already know that the answer is "yes". But if you're not, the process isn't necessarily straightforward, so I'm going to document it step by step here.

Overall, what we're doing is simply taking advantage of the fact that when we imported the Drupal.org repository into GitHub in part 2, we pulled in the entire history of the repository, including all of the Drupal commits. Because our Backdrop repository knows about these existing commits, it can also figure out what's different and pull in the new ones when we ask it to.

In what follows, I'm assuming a workflow where changes are made to the Drupal.org version of the module and pulled into Backdrop later. However, it should be relatively straightforward to reverse these instructions to do it the other way around (or even possible, but perhaps less straightforward, to have a setup where you can do it in either direction).
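For reference, the reverse setup is symmetric; a sketch, run from a Drupal.org checkout of the module (using the same repository URLs that appear below):

    # Track the Backdrop repository on GitHub from the Drupal checkout...
    git remote add github [email protected]:backdrop-contrib/user_cancel_password_confirm.git
    git fetch github
    # ...then merge Backdrop changes back into the Drupal 7 branch.
    git merge github/1.x-1.x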

  1. To start off, we need to make our local clone of the Backdrop repository know about the Drupal.org repository. (A local clone is obtained simply by getting the "clone URL" from the GitHub project page and copying it locally, for example with the command shown below.)
    git clone [email protected]:backdrop-contrib/user_cancel_password_confirm.git
    

    First let's check what remote repositories it knows about already:

    $ git remote -v
    origin    [email protected]:backdrop-contrib/user_cancel_password_confirm.git (fetch)
    origin    [email protected]:backdrop-contrib/user_cancel_password_confirm.git (push)
    

    No surprise there; it knows about the GitHub version of the repository (the "origin" repository that it was cloned from).

    Let's add the Drupal.org repository to this list and check again:

    $ git remote add drupal http://git.drupal.org/project/user_cancel_password_confirm.git
    $ git remote -v
    drupal    http://git.drupal.org/project/user_cancel_password_confirm.git (fetch)
    drupal    http://git.drupal.org/project/user_cancel_password_confirm.git (push)
    origin    [email protected]:backdrop-contrib/user_cancel_password_confirm.git (fetch)
    origin    [email protected]:backdrop-contrib/user_cancel_password_confirm.git (push)
    

    The URL I used here is the same one I used in part 2 to import the repository to GitHub (that is, it's the public-facing Git URL of my project on Drupal.org, available from the "Version control" tab of the drupal.org project page, after unchecking the "Maintainer" checkbox - if it’s present - so that the public URL is displayed). I've also chosen to give this repository the name "drupal". (Usually the convention is to use "upstream" for something like this, but in GitHub-land "upstream" is often used in a slightly different context involving development forks of one GitHub repository to another. So for clarity, I'm using "drupal" here. You can use anything you want to.)

  2. Next let's pull in everything from the remote Drupal repository to our local machine:
    $ git fetch drupal
    remote: Counting objects: 4, done.
    remote: Compressing objects: 100% (2/2), done.
    remote: Total 3 (delta 0), reused 0 (delta 0)
    Unpacking objects: 100% (3/3), done.
    From http://git.drupal.org/project/user_cancel_password_confirm
    * [new branch]          7.x-1.x -> drupal/7.x-1.x
    * [new branch]          master  -> drupal/master
    * [new tag]             7.x-1.0-rc1 -> 7.x-1.0-rc1
    

    You can see it has all the branches and tags that were discussed in part 2 of this series. However, although I pulled the changes in, they are completely separate from my Backdrop code (the Backdrop code lives in "origin" and the Drupal code lives in "drupal").

    If you want to see a record of all changes that were made to port the module to Backdrop at this point, you could run git diff drupal/7.x-1.x..origin/1.x-1.x to examine them.

  3. Now let's fix a bug on the Drupal.org version of the module. I decided to do a simple documentation fix: Fix documentation of form API functions to match coding standards

    I made the code changes on my local checkout of the Drupal version of the module (which I keep in a separate location on my local machine, specifically inside the sites/all/modules directory of a copy of Drupal so I can test any changes there), then committed and pushed them to Drupal.org as normal.

  4. Back in my Backdrop environment, I can pull those changes in from the "drupal" remote and examine them using git log:
    $ git fetch drupal
    remote: Counting objects: 5, done.
    remote: Compressing objects: 100% (3/3), done.
    remote: Total 3 (delta 2), reused 0 (delta 0)
    Unpacking objects: 100% (3/3), done.
    From http://git.drupal.org/project/user_cancel_password_confirm
      7a70138..997d82d  7.x-1.x     -> drupal/7.x-1.x
    
    $ git log origin/1.x-1.x..drupal/7.x-1.x
    commit 997d82dce1a4269a9cee32d3f6b2ec2b90a80b33
    Author: David Rothstein 
    Date:   Tue Jan 27 13:30:00 2015 -0500
    
            Issue #2415223: Fix documentation of form API functions to match coding standards.
    

    Sure enough, this is telling me that there is one commit on the Drupal 7.x-1.x version of the module that is not yet on the Backdrop 1.x-1.x version.

  5. Now it's time to merge those changes to Backdrop. We could just merge the changes directly and push them to GitHub and be completely done, but I'll follow best practice here and do it on a dedicated branch with a pull request. (In reality, I might be doing this for a more complicated change than a simple documentation fix, or perhaps with a series of Drupal changes all at once rather than a single one. So I might want to formally review the Drupal changes before accepting them into Backdrop.)

    By convention I'm going to use a branch name ("drupal-2415223") based on the Drupal.org issue number:

    $ git checkout 1.x-1.x
    Switched to branch '1.x-1.x'
    
    $ git checkout -b drupal-2415223
    Switched to a new branch 'drupal-2415223'
    
    $ git push -u origin drupal-2415223
    Total 0 (delta 0), reused 0 (delta 0)
    To [email protected]:backdrop-contrib/user_cancel_password_confirm.git
    * [new branch]          drupal-2415223 -> drupal-2415223
    Branch drupal-2415223 set up to track remote branch drupal-2415223 from origin.
    
    $ git merge drupal/7.x-1.x
    Auto-merging user_cancel_password_confirm.module
    Merge made by the 'recursive' strategy.
    user_cancel_password_confirm.module |   10 ++++++++--
    1 file changed, 8 insertions(+), 2 deletions(-)
    

    In this case, the merge was simple and worked cleanly. Of course, there might be merge conflicts here or other changes that need to be made. You can do those at this time, and then git push to push the changes up to GitHub.

  6. Once the changes are pushed, I went ahead and created a pull request via the GitHub user interface, with a link to the Drupal.org issue for future reference (I could have created a corresponding issue in the project's GitHub issue tracker also, but didn't bother):
    • Fix documentation of form API functions to match coding standards (pull request) (diff)

    Merging this pull request via the GitHub user interface gets it onto the official 1.x-1.x Backdrop branch, and into the Backdrop version of the module.

    Here's the commit for Drupal, and the same one for Backdrop:

    http://cgit.drupalcode.org/user_cancel_password_confirm/commit/?id=997d8...
    https://github.com/backdrop-contrib/user_cancel_password_confirm/commit/...

Using the above technique, it's possible to have one main issue (in this case on Drupal.org) for any change you want to make to the module, do essentially all the work there, and then easily and quickly merge that change into the Backdrop version without the hassle of repeating lots of manual, error-prone steps.

Hopefully this technique will be useful to developers who want to contribute their work to Backdrop while also continuing their contributions to Drupal, and will help the two communities continue to work together. Thanks for reading!


Do you have any thoughts or questions, or experiences of your own trying to port a module to Backdrop? Leave them in the comments.

Feb 17 2015

This is the second in a series of blog posts about the relationship between Drupal and Backdrop CMS, a recently-released fork of Drupal. The goal of the series is to explain how a module (or theme) developer can take a Drupal project they currently maintain and support it for Backdrop as well, while keeping duplicate work to a minimum.

  • In part 1, I introduced the series and showed how for some modules, the exact same code can be used with both Drupal and Backdrop.
  • In part 2 (this post), I'll explain what to do when you want to port a Drupal module to a separate Backdrop version and get it up and running on GitHub.
  • In part 3, I'll explain how to link the Backdrop module to the Drupal.org version and maintain them simultaneously.

Porting a Drupal Module to Backdrop and Getting it Up and Running on GitHub

For this post I’ll be looking at User Cancel Password Confirm, a very small Drupal 7 module I wrote for a client a couple years back to allow users who are canceling their accounts to confirm the cancellation by typing in their password rather than having to go to their email and click on a confirmation link there.

We learned in part 1 that adding a backdrop = 1.x line to a module’s .info file is the first (and sometimes only) step required to get it working with Backdrop. In this case, however, adding this line to the .info file was not enough. When I tried to use the module with Backdrop I got a fatal error about a failure to open the required includes/password.inc file. What's happening here is simply that Backdrop (borrowing a change that's also in Drupal 8) reorganized the core directory structure compared to Drupal 7 to put most core files in a directory called "core". When my module tries to load the includes/password.inc file, it needs to load it from core/includes/password.inc in Backdrop instead.

This is a simple enough change that I could just put a conditional statement into the Drupal code so that it loads the correct file in either case. However, over the long run this would get unwieldy. Furthermore, if I had chosen a more complicated module to port, one which used Drupal 7's variable or block systems (superseded by the configuration management and layout systems in Backdrop), it is likely I'd have more significant changes to make.
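To illustrate the rejected conditional approach, a sketch in Drupal 7 idioms (whether Backdrop exposes the same DRUPAL_ROOT constant is an assumption here):

    // Sketch of the conditional approach (not the route taken): load
    // password.inc from wherever it exists in this codebase.
    // Assumes DRUPAL_ROOT is defined under both Drupal 7 and Backdrop.
    $password_inc = file_exists(DRUPAL_ROOT . '/core/includes/password.inc')
      // Backdrop moved most core files under core/.
      ? DRUPAL_ROOT . '/core/includes/password.inc'
      // Drupal 7 keeps it at the top level.
      : DRUPAL_ROOT . '/includes/password.inc';
    require_once $password_inc;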

So, this seemed like a good opportunity to go through the official process for porting my module to Backdrop.

Backdrop contrib modules, like Backdrop core, are currently hosted on GitHub. Regardless of whether you're already familiar with GitHub from other projects, there are some steps you should follow that might not be familiar, to make sure your Backdrop module's repository is set up properly and ultimately to get it included on the official list of Backdrop contributed projects.

Importing to GitHub

The best way to get a Drupal module into GitHub is to import it; this preserves the pre-Backdrop commit history which becomes important later on.

Before you do this step, if you're planning to port a Drupal module that you don't maintain, it's considered best practice to notify the current maintainer and see if they'd like to participate or lead the Backdrop development themselves (see the "Communicating" section of the Drupal 7 to Backdrop conversion documentation for more information). In my case I'm already the module maintainer, so I went ahead and started the import:

  1. Go to the GitHub import page and provide the public URL of the Drupal project's Git repository (which I got from going to the project page on Drupal.org, clicking the "Version control" tab, and then - assuming you are importing a module that you maintain - making sure to uncheck the "Maintainer" checkbox so that the public URL is displayed). Drupal.org gives me this example code:


    git clone --branch 7.x-1.x http://git.drupal.org/project/user_cancel_password_confirm.git

    So I just grabbed the URL from that.

  2. Where GitHub asks for the project name, use the same short name (in this case "user_cancel_password_confirm") that the Drupal project uses.
  3. Import the project into your own GitHub account for starters (unless you're already a member of the Backdrop Contrib team - more on that later).

Here's what it looks like:
GitHub import

Submitting this form resulted in a new GitHub repository for my project at https://github.com/DavidRothstein/user_cancel_password_confirm.

As a final step, I edited the description of the GitHub project to match the description from the module's .info file ("Allows users to cancel their accounts with password confirmation rather than e-mail confirmation").

Cleaning Up Branches and Tags

Next up is some housekeeping. First, I cloned a copy of the new repository to my local machine and then used git branch -r to take a look around:


$ git clone [email protected]:DavidRothstein/user_cancel_password_confirm.git
$ git branch -r
origin/7.x-1.x
origin/HEAD -> origin/master
origin/master

Like many Drupal 7 contrib projects, this has a 7.x-1.x branch where all the work is done and a master branch that isn't used. When I imported the repository to GitHub it inherited those branches. However, for Backdrop I want to do all work on a 1.x-1.x branch (where the first "1.x" refers to compatibility with Backdrop core 1.x).

  1. So let's rename the 7.x-1.x branch:


    $ git checkout 7.x-1.x
    Branch 7.x-1.x set up to track remote branch 7.x-1.x from origin.
    Switched to a new branch '7.x-1.x'
    $ git branch -m 7.x-1.x 1.x-1.x
    $ git push --set-upstream origin 1.x-1.x
    Total 0 (delta 0), reused 0 (delta 0)
    To [email protected]:DavidRothstein/user_cancel_password_confirm.git
    * [new branch] 1.x-1.x -> 1.x-1.x
    Branch 1.x-1.x set up to track remote branch 1.x-1.x from origin.

  2. And delete the old one from GitHub:


    $ git push origin :7.x-1.x
    To [email protected]:DavidRothstein/user_cancel_password_confirm.git
    - [deleted] 7.x-1.x

  3. We want to delete the master branch also, but can't do it right away since GitHub treats that as the default and doesn't let you delete the default branch.

    So I went to the module's GitHub project page, where (as the repository owner) I have a "Settings" link in the right column; via that link it's possible to change the default branch to 1.x-1.x through the user interface.

    Now back on my own computer I can delete the master branch:


    $ git push origin :master
    To [email protected]:DavidRothstein/user_cancel_password_confirm.git
    - [deleted] master

  4. On Drupal.org, this module has a 7.x-1.0-rc1 release, which was automatically imported to GitHub. This won't be useful to Backdrop users, so I followed the GitHub instructions for deleting it.
  5. Finally, let's get our local working copy somewhat in sync with the changes on the server. The cleanest way to do this is probably just to re-clone the repository, but you could also run git remote set-head origin 1.x-1.x to make sure your local copy is working off the same default branch.

The end result is:


$ git branch -r
origin/1.x-1.x
origin/HEAD -> origin/1.x-1.x

Just what we wanted, a single 1.x-1.x branch which is the default (and which was copied from the 7.x-1.x branch on Drupal.org and therefore contains all its history).

Updating the Code for Backdrop

Now that the code is on GitHub, it's time to make it Backdrop-compatible.

To do this quickly, you can just make commits to your local 1.x-1.x branch and push them straight up to the server. In what follows, though, I'll follow best practices and create a dedicated branch for each change (so I can create a corresponding issue and pull request on GitHub). For example:


$ git checkout -b backdrop-compatibility
$ git push -u origin backdrop-compatibility

Then make commits to that branch, push them to GitHub, and create a pull request to merge it into 1.x-1.x.

  1. To get the module basically working, I'll make the simple changes discussed earlier:
    • Add basic Backdrop compatibility (issue) (diff)

    If you look at the diff, you can see that instead of simply adding the backdrop = 1.x line to the .info file, I replaced the core = 7.x line with it, since the latter is Drupal-specific and does not need to be in the Backdrop version (see the sketch after this list).

    With that change, the module works! Here it is in action on my Backdrop site:

    Cancel account using password

    (Also visible in this screenshot is a nice effect of Backdrop's layout system: Editing pages like this one, even though they are using the default front-end Bartik theme, have a more streamlined, focused layout than normal front-end pages of the site, without the masthead and other standard page elements.)

  2. Other code changes for this small module weren't strictly necessary, but I made them anyway to have a fully-compatible Backdrop codebase:
    • Replace usage of "drupal" with "backdrop" in the code (issue) (diff)
    • Use method on the user account object to determine its ID (issue) (diff)
  3. Next up, I want to get my module listed on the official list of Backdrop contributed projects (currently this list is on GitHub, although it may eventually move to the main Backdrop CMS website).

    I read through the instructions for applying to the Backdrop contributed project group. They're relatively simple, and I've already done almost everything I need above. The one thing I'm missing is that Backdrop requires a README.md file in the project root with some standard information in it (I like that they're enforcing this; it should help developers browsing the module list a lot), and it also requires a LICENSE.txt file. These were both easy to create following the provided templates and copying some information from the module's Drupal.org project page:

    Once that's done, and after reading through the rest of the instructions and making sure I agreed with them, I proceeded to create an issue:

    Application to join contrib team

    In my case it was reviewed and approved within a few hours (perhaps helped by the fact that I was porting a small module), and I was given access to the Backdrop contributed project team on GitHub.

  4. To get the module transferred from my personal GitHub account to the official Backdrop contrib list, I followed GitHub's instructions for transferring a repository.

    They are mostly straightforward. Just make sure to use "backdrop-contrib" as the name of the new owner (who you are transferring the repository to):

    Transfer repository to backdrop-contrib

    And make sure to check the box that gives push access to your repository to the "Authors" team within the Backdrop Contrib group (if you leave it as "Owners", you yourself wouldn't be able to push to it anymore):

    Grant access to the Authors team

    That's all it took, and my module now appears on the official list.

    You'll notice after you do this that all the URLs of your project have changed, although the old ones redirect to the new ones. That's why if you follow many of the links in this post, which point to URLs like https://github.com/DavidRothstein/user_cancel_password_confirm, you'll see that they actually redirect you to https://github.com/backdrop-contrib/user_cancel_password_confirm.

    For the same reason, you can keep your local checkout of the repository pointed to the old URL and it will still work just fine, although to avoid any confusion you might want to either do a fresh clone at this point, or run a command like the following to update the URL:

    git remote set-url origin [email protected]:backdrop-contrib/user_cancel_password_confirm.git
    

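The .info change from step 1 is tiny; a sketch of the resulting Backdrop .info file (the name and description are the module's own; other lines omitted):

    name = User Cancel Password Confirm
    description = Allows users to cancel their accounts with password confirmation rather than e-mail confirmation.
    ; Replaces Drupal 7's "core = 7.x" line:
    backdrop = 1.x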
With the above steps, we’re all set; the module is on GitHub and can be developed further for Backdrop there.

But what happens later on when I make a change to the Drupal version of the module and want to make the same change to the Backdrop version (certainly a common occurrence)? Do I have to repeat the same changes manually in both places? Luckily the answer is no. In part 3 of this series, I’ll explain how to link the Backdrop module to the Drupal.org version and maintain them simultaneously. Stay tuned!


Do you have any thoughts or questions, or experiences of your own trying to port a module to Backdrop? Leave them in the comments.

Feb 15 2015

Two new Drupal distributions available on GitHub

  • https://github.com/alibama/cvillecouncilus is the distribution behind https://www.cvillecouncil.us – it’s an attempt to run a political campaign through a virtual proxy…

  • https://github.com/alibama/rapid-prototyping-ecommerce-drupal is the code behind http://rpl.mae.virginia.edu/ – it’s an e-commerce solution for 3D printing. A lot of this is implemented in Rules and other well-standardized code thanks to Joe Pontani, a talented developer here in Virginia. Joe integrated several third-party tools and set up the UVa payment gateway through Nelnet.

Both sites are getting updates over the next few months – the Charlottesville Council website also has a DrupalGap implementation on it – absolutely awesome toolset…

18F API compliance is another feature I’m pretty stoked about… I got most of that done with OAuth2 Server, Views Datasource, and Services, plus a couple of great notification features built with Rules + Views. I’ll get that feature out ASAP – it’s really convenient, matching a Profile2 taxonomy field onto content taxonomy fields for notifications about new content.

Any questions? Please drop a line in the comments below.

Dec 03 2013


Introduction

I have been working in software development a long time. Early on I recognized the need for Process as a way of re-using best practices for teamwork. At that time, Process implied for most of us the Waterfall model, which divides a development project into discipline-based phases, each to be visited once in turn, in a kind of cascade: capture the Requirements, do the Design, then Implementation, Verification and Validation, Deploy, and finally shift into Maintenance mode. Most people still follow that model implicitly; it has stayed in the everyday consciousness of process, much like Newton's laws instead of the Theory of Relativity, much like the theory of Creationism over the scientific Theory of Evolution kicked off by Charles Darwin. Yep, Waterfall often creeps in even when people say, and even when people think, they are using Agile.

Of course, project difficulties and even failures, based on the extremely high propensity (40% minimum) for requirements to change within the life-cycle of a project, highlighted the dire need for at least an Iterative and Incremental model. And when that became too top-heavy, at least in its wonderful, eye-opening but hard to tailor and work with Rational Unified Process, I moved on to a kind of personal synthesis of CMMI (love that continuous improvement and organization-wide adoption!) and Agile and Scrum approaches. More recently I have loved the simpler work-in-process and visual approach of Kanban as a lean variety of Agile:

“Some Agile methods take a more flexible approach to time than Scrum does. (For example, Kanban does away with the notion of a two-week batch of work and places the emphasis on single-piece flow.) But you can still make time within a Scrum sprint in which creative activities can take place.” Gothelf, Jeff (2013-02-22). Lean UX: Applying Lean Principles to Improve User Experience. O'Reilly Media.

Following in the footsteps of C and Unix programming, C++, Enterprise Java and light-stepped Spring Framework Java, as well as Ruby on Rails, I found myself working more as process engineer, architect and mentor in the web application world rather than mainly as programmer, and adopting the Drupal framework and community as my workplace. So I wanted to bring all my prior experience in process and architecture to Drupal and its community. And I did, with the publication of Leveraging Drupal. Steeped in the Agile moment when it was published in 2009, the book outlined in its first chapter, and implemented across a number of projects (most clearly in Chapter 11: Full Swing Agile Approach to Drupal Development), a process for Drupal incorporating the best practices I had made my own throughout my career.

That was then, this is now, with, as Karen McGrane explains (Responsive Design Won't Fix Your Content Problem), responsive web design only scratching the surface of how content needs to be structured and based on a clear strategy for today's multi-device world of consumption. With all of this heavily impacting the kind of process on which web app dev teams need to base themselves, it's time for a new process to emerge, and for a new book to be written.

The process must take from Agile, from the application of lean startup process (web app dev as product dev of real value based on real needs), and from the need for the team to work on the same visible work in process together, without waterfallish handoffs and with maximum permanent client feedback. And the process must allow itself to be heavily impacted by the need for modern web apps to be responsive, adaptive and based on structured content. For each app is a back-end based API with multiple clients on multiple devices. And each app (and even each distributed component within the app) must have the most adequate framework as its vehicle. So Lean Process is the best process for Drupal based web apps, but at the same time, it must emancipate itself from Drupal per se. People are facing today's challenges on many fronts, and the process will work at its best with Drupal, but also with Backdrop, other CMS's, and many other stacks all of us are flocking to.

Initial vision for Lean Process artifact workflow; but there are no handoffs here: what used to be scrums are all-eyes interdisciplinary motorized work applied to a common theme everyone is working on together.

Drupal Lean Process and Machu Picchu

Drupal Lean Process is Web App Lean Process, and is emerging as it puts a number of projects concretely under its belt. Today very few are starting from scratch, so we must focus on migrations needed right now, be it Drupal 6 to Drupal 7, Drupal 6 to Backdrop CMS, or from Drupal to other stacks. And it must be a repeatable process; that is, if it is used to migrate from Drupal 6 to Backdrop CMS, it may, in large part, be re-used for alternative future migrations to stacks which may not have come into being as of this writing, but which will capture hearts, minds and communities in the future.

The first public airing of Drupal Lean Process will be at DrupalPicchu, where I will head up an AWebFactory workshop (bi-lingual, English and Spanish) “Conquering an agile process for Drupal, driven by user experience enhancements, with tools” (link coming as soon as sessions are published), that is, on Drupal Lean Process.

Many will come to DrupalPicchu. Many may not. So I have decided to base the workshop on a concrete project, housed in a public GitHub repo, being kicked off as we speak.

Lit Drupal Lean

Lit Drupal Lean is both the process and the project. Characteristic of Lean Process projects is, just that, they incorporate a reusable process, with workflows and templates and artifacts, and, oh yes, the executable, deployable code, all as a work in progress, erm, in process.

This particular web app centers on the need writers, workshop leaders and publishers have for a community of literary workshops. The project is being kicked off right now, if you watch it you may witness its growth, as it acquires the form it will take during the free workshop January 20-24, 2014 (we might get enthusiastic and start it a day before the conference itself).

Again, here is the link for the public GitHub repo: https://github.com/victorkane/lit-drupal-lean with entry point at the issues page, not the wiki. The Kickoff Milestone tasks already explain a lot about the process.

It had better be self-explanatory (the use of GitHub Issues 2, as well as the automatic reference rich GitHub Flavored Markdown and Task Lists, are prominent tools adopted for Lit Drupal Lean).

Drupal Lean Process Book

The book will be self-published this time, I think. I will welcome donations of course, but my income is derived mainly from the mentoring services I offer, so the advantage will be that the book will be kept up-to-date, with versioned downloadable tags in the repo for the usual formats. I'm going to take a stab at Markdown → Static Site → ePub and other portable formats (a project in itself :), and it will be in a public repo). At some point, popular versions may be self-published in the usual sense if deemed useful, but the main objective will be to centralize and openly update the process.

Eclipse (Kepler) IDE and Project Management tool

Tools come and go, vary on a per-project basis. But bringing a number of recent articles together, let me share how I am using Eclipse as both an IDE for coding and DVCS, and a powerful (if novel and as yet not totally complete) project management tool.

Remote Perspective with Terminal for Git command-line management.

I could have used the incredible Git Repository Exploring perspective I wrote about recently, and managed version control with the Git visual interface, instead of the command-line as an added option, working directly on a local working copy of the code, while still accessing GitHub issues for project management. Increasingly however, I work on a server, so I prefer the Remote perspective as a starting point. Notice the use of the Markdown Editor plugin associated with *.md files, which comes with a Markdown HTML Preview view.

Remote Perspective showing Task Issues Repositories and Task List Views

I really enjoy the Task Issues Repositories and Task List Views, and have simply added them to the basic Remote perspective. In the former I specify a GitHub repo and an initial query (as I outlined in a previous post), and in the Task List Views, I can work with individual issues, synchronizing them from and to the Issues section of the project, specifying Milestones, specifying and even creating labels, assigning tasks to collaborators, cloning issues from template issues (for often used artifacts), etc. Again, it had better be self-explanatory.



Feb 22 2013

Simplifying WordPress and Drupal configuration

At last year's DrupalCon in Denver there was an excellent session called Delivering Drupal. It had to do with the oftentimes painful process of deploying a website to web servers. This was a huge deep-dive session that went into the vast underbelly of devops and production server deployment. There were a ton of great nuggets and I recommend watching the session recording for serious web developers.

The most effective takeaway for me was the manipulation of the settings files for your Drupal site, which was only briefly covered but not demonstrated. The seed of this idea that Sam Boyer presented got me wondering about how to streamline my site deployment with Git. I was using Git for my Drupal sites, but not effectively for easy site deployment. Here are the details of what I changed with new sites that I build. This can be applied to WordPress as well, which I'll demonstrate after Drupal.

Why would I want to do this?

When you push your site to production you won't have to update a database connection string after the first time.  When you develop locally you won't have to update database connections, either.

Streamlining settings files in Drupal

Drupal has the following settings file for your site:

sites/yourdomain.com/settings.php

This becomes a read-only file when your site is set up and is difficult to edit. It's a pain editing it to run a local site for development. Not to mention that if you include it in your Git repository, it's flagged as modified when you change it locally.

Instead, let's go ahead and create two new files:

sites/yourdomain.com/settings.local.php
sites/yourdomain.com/settings.production.php

Add the following to your .gitignore file in the site root:

sites/yourdomain.com/settings.local.php

This will put settings.php and settings.production.php under version control, while your local settings.local.php file stays out of it. With this in place, remove the $databases array from settings.php. At the bottom of settings.php, insert the following:

$settingsDirectory = dirname(__FILE__) . '/';
if (file_exists($settingsDirectory . 'settings.local.php')) {
  require_once($settingsDirectory . 'settings.local.php');
}
else {
  require_once($settingsDirectory . 'settings.production.php');
}

This code tells Drupal to include the local settings file if it exists, and if it doesn't it will include the production settings file.  Since settings.local.php is not in Git, when you push your code to production you won't have to mess with the settings file at all.  Your next step is to populate the settings.local.php and settings.production.php files with your database configuration.  Here's my settings.local.php with database credentials obscured.  The production file looks identical but with the production database server defined:

<?php
$databases['default']['default'] = array(
  'driver' => 'mysql',
  'database' => 'drupal_site_db',
  'username' => 'db_user',
  'password' => 'db_user_password',
  'host' => 'localhost',
  'prefix' => '',
);

Streamlining settings files in Wordpress

Wordpress has a similar process to Drupal, but the settings files are a bit different.  The config file for Wordpress is the following in site root:

wp-config.php

Go ahead and create two new files:

wp-config.local.php
wp-config.production.php

Add the following to your .gitignore file in the site root:

wp-config.local.php

This will make it so wp-config.php and wp-config.production.php are under version control when you create your Git repository, but wp-config.local.php is not. The local config will not be present when you push your site to production. Next, open the WordPress wp-config.php and remove the defined DB_NAME, DB_USER, DB_PASSWORD, DB_HOST, DB_CHARSET, and DB_COLLATE constants. Insert the following in their place:

/** Absolute path to the WordPress directory. */
if (!defined('ABSPATH')) {
  define('ABSPATH', dirname(__FILE__) . '/');
}
if (file_exists(ABSPATH . 'wp-config.local.php')) {
  require_once(ABSPATH . 'wp-config.local.php');
}
else {
  require_once(ABSPATH . 'wp-config.production.php');
}

This code tells WordPress to include the local settings file if it exists, and if it doesn't it will include the production settings file. Your next step is to populate the wp-config.local.php and wp-config.production.php files with your database configuration. Here's my wp-config.local.php with database credentials obscured. The production file looks identical but with the production database server defined:

<?php
// ** MySQL settings - You can get this info from your web host ** //
 
/** The name of the database for WordPress */
define('DB_NAME', 'db_name');
 
/** MySQL database username */
define('DB_USER', 'db_user');
 
/** MySQL database password */
define('DB_PASSWORD', 'db_user_password');
 
/** MySQL hostname */
define('DB_HOST', 'localhost');
 
/** Database Charset to use in creating database tables. */
define('DB_CHARSET', 'utf8');
 
/** The Database Collate type. Don't change this if in doubt. */
define('DB_COLLATE', '');

What's next?

Now that you're all set up to deploy easily to production with Git and WordPress or Drupal, the next step is to actually get your database updated from local to production. This is a topic for another post, but I've created my own set of Unix shell scripts to simplify this task greatly. If you're ambitious, go grab my MySQL Loaders scripts that I've put on GitHub.

Oct 14 2012

When I was unable to post to Twitter from my blog tonight, I found a very new thread on drupal.org discussing the issue: the Twitter API changed again!

I am only using the Twitter module on Drupal 6 sites right now, so I took two of the fixes and combined them in a patch for the 6.x-3.0-beta9 version. Then MinhH submitted new code that only requires a change in one place.

Here's my latest patch for the 6.x-3.0-beta9 version.

Since the Twitter API is constantly changing, and the module is not very stable, always read the thread carefully to make sure you are applying the latest patch! Others have cleaned up the code I added from MinhH and created a patch for the 6.x-3.x-dev branch, which makes sense (however, I suspect that 6.x-3.0-beta9 may be ahead of the -dev branch, and it's not a branch provided in the project's GitHub). Anyway, Open Source rocks, Murray!

And because my Twitter RSS feed URL was also broken, I found this post:

https://dev.twitter.com/discussions/844

which provides an update to the URL format:

https://api.twitter.com/1/statuses/user_timeline.rss?screen_name=decibelplaces

Sep 24 2012

Well, the itch to blog rapidly with quick screenshots finally got to me, and I spent the last couple of days re-working the Drupal Evernote module to get it functional with the new Evernote API updates and OAuth.

One failing of the first module was that there were a lot of steps involved to get it set up. In this new version, I've simplified the options quite a bit. Today I started using it in practice on chrisshattuck.com, and it seems to be working pretty smoothly. Before I release it into the wild, though, I'm going to give it a bit to work out any kinks. Man, it's a lot of fun being able to blog straight from Evernote. I've also set up a system to post directly to Facebook and Twitter as well with some simple tagging (I'll talk more about that later).

I went ahead and put it up on GitHub so folks can goof around with it until I can get a chance to re-grok the Drupal module tagging scheme. Also, right now it's just for Drupal 6 (since that's where my itch is).

Here's a screenshot of the admin page:

[Screenshot: Evernote Settings form, with content type and sync options]

Jul 04 2012

As a website is being developed, it is often useful to have a server set up where clients can view the website in development, for the project manager to gauge progress, or simply to make sure each commit does not break the site! In order to achieve this, the server must pull code from the latest release branch (for client viewing) or development branch (for internal purposes). Having to do this manually each time can be quite a burden! In this tutorial, we will set up a way for the codebase to be deployed to the server automatically on each new commit. Our choice of version control system will be Git. However, if Git is not the version control system of your choice, then even though the code posted here won't be directly applicable to other version control systems, the ideas behind it should still be useful!

A PHP deployment script is used to automate the deployment process after code changes are pushed up to a repo. The script handles the pull actions for the hosting servers, allowing them to pull down the changes without manual intervention. The keyword here is automation. Automation provides savings in time, as well as preventing careless mistakes and oversights. An increase in both efficiency and reliability? No wonder a quick Google search turns up so many examples.

Today we are going to walk through the creation of a simple deployment script, with some powerful features that could be customized to fit in with your workflow.

The Basics

Here is a layout of a basic deployment script that achieves automated code deployment, with the option of specifying which branch to pull from by supplying the bn argument. Simply place this script into the public folder of a vhost on the same server as your websites and call it with the full path of your target website as the subdirectories. For example, if you placed the script into a vhost named “post-receive.mysrv.com” and your website is hosted in the directory “/var/www/vhosts/mywebsite.mysrv.com/public”, you would call “post-receive.mysrv.com/var/www/vhosts/mywebsite.mysrv.com/public”, which will pull any new updates to your website.

If you find that you keep all your sites in the same vhosts directory, with the same name for the public folder, there is no reason to have to type out the full directory paths every time.

Let’s say you have another website hosted at “/var/www/vhosts/myotherwebsite.mysrv.com/public”, we can specify the default parent path as “/var/www/vhosts/” and the default public folder as “/public”. Now we can call the script for the two different websites by simply typing “post-receive.mysrv.com/mywebsite.mysrv.com”, and “post-receive.mysrv.com/myotherwebsite.mysrv.com”.

<?php
// request_path() is defined at the bottom
$path = '/' . request_path();

// Edit this string to reflect the default location of vhost web roots
// do include the trailing slash
// Example: $default_parent_path = '/var/www/vhosts/';
$default_parent_path = '/var/www/vhosts/';

// The name of the public_html directory
// do include the leading slash
// do not include the trailing slash
// Example: $default_public_directory = '/public';
$default_public_directory = '/public';

// Specify which branch by appending a branch name variable 'bn' to the end of the url
// defaults to 'master' if none specified
// Example: http://post-receive.mysrv.com/mywebsite.mysrv.com?bn=develop
$default_pull_branch_name = 'master';
if (empty($_GET['bn'])) {
  $pull_branch_name = $default_pull_branch_name;
}
else {
  $pull_branch_name = $_GET['bn'];
}

// The idea is if only 1 argument is present, treat that as the /var/www/vhosts/<directory_name>
// and if more than 1 argument is given, treat that as the full "cd-able" path
$args = explode('/', trim($path, '/'));

if (count($args) === 1) {
  $working_path = $default_parent_path . $args[0] . $default_public_directory;
}
elseif (count($args) > 1) {
  $working_path = $path;
}

// Do the routine only if the path is good.
// Assumes that origin has already been defined as a remote location.
// We reset the head in order to make it possible to switch to a branch that is behind the latest commits.
if (!empty($working_path) && file_exists($working_path)) {
  $output = shell_exec("cd $working_path; git fetch origin; git reset --hard; git checkout $pull_branch_name; git pull origin $pull_branch_name");
  echo "<pre>$output</pre>";
}

/**
 * Returns the requested url path of the page being viewed.
 *
 * Example:
 * - http://example.com/node/306 returns "node/306".
 *
 * See request_path() in Drupal 7 core api for more details
 */
function request_path() {
  …
}

Here we discuss many different optional features that either add more functionality or improve convenience. The code snippets in each example build upon the previous one and reflect all previous feature additions.

Security key

To make sure your script can only be called by you or those you trust, we are going to add a security key. The security key will be supplied by the user through another URL variable we will call `sk`, and will have to match a pre-set string.

To modify the code we simply add the `sk` URL variable and do a check for the variable that it matches with the security key before continuing. This block of code should go at the very beginning of the page.

// Checks the security key to see if it is correct, if not then quits
// Currently set to static key 'mysecuritykey'
// Example: http://post-receive.mysrv.com/mywebsite.mysrv.com?sk=mysecuritykey
if (empty($_GET['sk'])) {
  header('HTTP/1.1 400 Bad Request', true, 400);
  echo '<pre>No security key supplied</pre>';
  exit;
}
if ($_GET['sk'] != 'mysecuritykey') {
  header('HTTP/1.1 403 Forbidden', true, 403);
  echo '<pre>Wrong security key supplied</pre>';
  exit;
}

Tags


This is arguably one of the most versatile features we can add to the script. By adding the ability to pull commits based on certain tags, you can adjust this script to fit your workflow. For example, you may want a production server to only pull commits that are tagged with the latest version number. For more information on tags and how they work in Git, here is a nice succinct description straight from the official Git documentation.

Once again, we had to change the shell commands in order to both retrieve tag information and pull the appropriate commits. You can set up different tag rules by altering the regular expressions and the comparison done between tags; for example, a rule to only pull commits with tags containing the keyword beta. You can also set different rules for different branches with a switch-case structure based on the `bn` URL variable; a sketch of both follows the main snippet below.

// Do the routine only if the path is good
if (!empty($working_path) && file_exists($working_path)) {

  // Fetch and check version numbers from tags
  $preoutput = shell_exec("cd $working_path; git fetch origin; git fetch origin --tags; git tag");
  // Finds an array of major versions by reading a string of numbers that comes after '7.x-'
  preg_match_all('/(?<=(7\.x-))[0-9]+/', $preoutput, $matches_majver);
  // Finds the latest major version by taking the version number with the greatest numerical value
  $majver = max($matches_majver[0]);
  // Finds an array of minor versions by reading a string of numbers that comes after '7.x-{$majver}.'
  // where {$majver} is the latest major version number previously found above
  preg_match_all('/(?<=(7\.x-' . $majver . '\.))[0-9]+/', $preoutput, $matches_minver);
  // Finds the latest minor version by taking the version number with the greatest numerical value
  $minver = max($matches_minver[0]);
  // Concatenate version numbers together to form the highest version tag
  $topver = '7.x-' . $majver . '.' . $minver;
  echo "<pre>The latest version detected is version $topver</pre>";

  $output = shell_exec("cd $working_path; git fetch origin; git reset --hard; git checkout $pull_branch_name; git fetch origin $pull_branch_name; git merge tags/$topver;");
  echo "<pre>$output</pre>";
}
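As mentioned above, the rules are easy to vary. A hypothetical per-branch, beta-only rule (a sketch, not part of the final script) could replace the version detection like so:

// Hypothetical per-branch tag rules (sketch only).
switch ($pull_branch_name) {
  case 'develop':
    // Only consider tags containing the keyword 'beta', e.g. '7.x-1.0-beta3'.
    preg_match_all('/7\.x-[0-9]+\.[0-9]+-beta[0-9]+/', $preoutput, $matches_beta);
    // Naive choice: take the last matching tag in `git tag` output.
    $topver = end($matches_beta[0]);
    break;

  default:
    // Keep the latest-stable logic shown above.
    break;
}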

Drush


If you’re using Drupal as your CMS, chances are you’re using Drush. Here we will integrate the script with Drush cache clearing commands. The idea is the same as for the features above: we start by defining a URL variable `cc` as our Drush command variable, and the user can then execute predetermined Drush commands. Clearing the cache will clean out all the cached data and force the website to rebuild itself; this is important after code changes in order for those changes to be reflected on the website, especially in the theme layer.

Update: As Dustin pointed out in the comments below, there is often a need to perform a database update, and for those who work with Features as a site-building tool, running a feature revert is a must on any update. The addition of a few new URL variables will give us the option to do so.

if (!empty($_GET['cc'])) {
  switch ($_GET['cc']) {
    case 'all':
      shell_exec("cd $working_path; drush cc all");
      break;
    case 'cssplusjs':
      shell_exec("cd $working_path; drush cc css+js");
      break;
    case 'cssminusjs':
      shell_exec("cd $working_path; drush cc css-js");
      break;
  }
}
if (!empty($_GET['up'])) {
  shell_exec("cd $working_path; drush updatedb -y");
}
if (!empty($_GET['fr'])) {
  shell_exec("cd $working_path; drush fra -y");
}

GitHub integration


If your repository is hosted on GitHub, you can make use of their POST callback service by going to Admin > Service Hooks and adding a post-commit webhook with the URL of the script, complete with the security key and any other arguments. GitHub will then call the script whenever a new commit is pushed to the repo. BitBucket similarly offers a post-commit webhook.
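For example, with the illustrative names used throughout this post, the webhook URL might look like:

    http://post-receive.mysrv.com/mywebsite.mysrv.com?sk=mysecuritykey&bn=develop&cc=all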

Twitter integration


The GitHub POST service sends along a payload object with quite a bit of useful information in the POST call to our deployment script. By including the twitter-php library by David Grudl, we can set up a Twitter account to tweet updates with information from the POST payload.

You can find the source files as well as the documentation on how to set up your Twitter account here on GitHub.

To include this in our script we simply add the below block of code, with the appropriate values inserted for the keys and tokens:

require_once dirname(__FILE__) . '/twitter-php/twitter.class.php';
// Insert appropriate values for the keys and tokens.
$consumerKey = 'consumerkeygoeshere';
$consumerSecret = 'consumersecretgoeshere';
$accessToken = 'accesstokengoeshere';
$accessTokenSecret = 'accesstokensecretgoeshere';

// Tweet on success the total number of commits, the latest commit id, which
// repository and branch received new commits, and who pushed the commit.
if (!empty($_POST) && !empty($_POST['payload'])) {
  $payload = json_decode($_POST['payload']);
  if (!empty($payload)) {
    $twitter = new Twitter($consumerKey, $consumerSecret, $accessToken, $accessTokenSecret);
    $last_commit = end($payload->commits);
    $twitter->send('[' . $payload->repository->name . ':' . $pull_branch_name . '] ' . count($payload->commits) . ' commit(s) deployed. Last commit: ' . $last_commit->id . ' by ' . $last_commit->author->name . ' ' . $last_commit->author->email);
  }
}

Here is the script in its entirety:

<?php
require_once dirname(__FILE__) . '/twitter-php/twitter.class.php';
$consumerKey = 'consumerkeygoeshere';
$consumerSecret = 'consumersecretgoeshere';
$accessToken = 'accesstokengoeshere';
$accessTokenSecret = 'accesstokensecretgoeshere';
$path = '/' . request_path();
$default_parent_path = '/var/www/vhosts/';
$default_public_directory = '/public';

if (empty($_GET['sk'])) {
  header('HTTP/1.1 400 Bad Request', true, 400);
  echo '<pre>No security key supplied</pre>';
  exit;
}
if ($_GET['sk'] != 'mysecuritykey') {
  header('HTTP/1.1 403 Forbidden', true, 403);
  echo '<pre>Wrong security key supplied</pre>';
  exit;
}

$default_pull_branch_name = 'master';
if (empty($_GET['bn'])) {
  $pull_branch_name = $default_pull_branch_name;
}
else {
  $pull_branch_name = $_GET['bn'];
}

// Determine the working path: a single URL segment is treated as a site
// name under the default parent path; a longer path is used as-is.
$args = explode('/', trim($path, '/'));

if (count($args) === 1) {
  $working_path = $default_parent_path . $args[0] . $default_public_directory;
}
elseif (count($args) > 1) {
  $working_path = $path;
}

if (!empty($working_path) && file_exists($working_path)) {
  // Fetch all tags and determine the highest 7.x-MAJOR.MINOR release tag.
  $preoutput = shell_exec("cd $working_path; git fetch origin; git fetch origin --tags; git tag");
  preg_match_all('/(?<=(7\.x-))[0-9]+/', $preoutput, $matches_majver);
  $majver = max($matches_majver[0]);
  preg_match_all('/(?<=(7\.x-' . $majver . '\.))[0-9]+/', $preoutput, $matches_minver);
  $minver = max($matches_minver[0]);
  $topver = '7.x-' . $majver . '.' . $minver;
  echo "<pre>The latest version detected is version $topver</pre>";

  // Check out the deployment branch and merge in the latest release tag.
  $output = shell_exec("cd $working_path; git fetch origin; git reset --hard; git checkout $pull_branch_name; git fetch origin $pull_branch_name; git merge tags/$topver;");
  echo "<pre>$output</pre>";
}

// The ?cc= URL variable maps onto the matching drush cache-clear command.
if (!empty($_GET['cc'])) {
  switch ($_GET['cc']) {
    case 'all':
      shell_exec("cd $working_path; drush cc all");
      break;
    case 'cssplusjs':
      shell_exec("cd $working_path; drush cc css+js");
      break;
    case 'cssminusjs':
      shell_exec("cd $working_path; drush cc css-js");
      break;
  }
}

// ?up=1 runs any pending database updates.
if (!empty($_GET['up'])) {
  shell_exec("cd $working_path; drush updatedb -y");
}

// ?fr=1 reverts all Features to their state in code.
if (!empty($_GET['fr'])) {
  shell_exec("cd $working_path; drush fra -y");
}

// Tweet a deployment summary when GitHub posts a payload to the script.
if (!empty($_POST) && !empty($_POST['payload'])) {
  $payload = json_decode($_POST['payload']);
  if (!empty($payload)) {
    $twitter = new Twitter($consumerKey, $consumerSecret, $accessToken, $accessTokenSecret);
    $last_commit = end($payload->commits);
    $twitter->send('[' . $payload->repository->name . ':' . $pull_branch_name . '] ' . count($payload->commits) . ' commit(s) deployed. Last commit: ' . $last_commit->id . ' by ' . $last_commit->author->name . ' ' . $last_commit->author->email);
  }
}

/**
 * Returns the requested URL path of the page being viewed.
 *
 * Example:
 * - http://example.com/node/306 returns "node/306".
 *
 * See request_path() in the Drupal 7 core API for more details.
 */
function request_path() {
  static $path;

  if (isset($path)) {
    return $path;
  }

  if (isset($_GET['q'])) {
    // This is a request with a ?q=foo/bar query string. $_GET['q'] is
    // overwritten in drupal_path_initialize(), but request_path() is called
    // very early in the bootstrap process, so the original value is saved in
    // $path and returned in later calls.
    $path = $_GET['q'];
  }
  elseif (isset($_SERVER['REQUEST_URI'])) {
    // This request is either a clean URL, or 'index.php', or nonsense.
    // Extract the path from REQUEST_URI.
    $request_path = strtok($_SERVER['REQUEST_URI'], '?');
    $base_path_len = strlen(rtrim(dirname($_SERVER['SCRIPT_NAME']), '\/'));
    // Unescape and strip $base_path prefix, leaving q without a leading slash.
    $path = substr(urldecode($request_path), $base_path_len + 1);
    // If the path equals the script filename, either because 'index.php' was
    // explicitly provided in the URL, or because the server added it to
    // $_SERVER['REQUEST_URI'] even when it wasn't provided in the URL (some
    // versions of Microsoft IIS do this), the front page should be served.
    if ($path == basename($_SERVER['PHP_SELF'])) {
      $path = '';
    }
  }
  else {
    // This is the front page.
    $path = '';
  }

  // Under certain conditions Apache's RewriteRule directive prepends the value
  // assigned to $_GET['q'] with a slash. Moreover we can always have a trailing
  // slash in place, hence we need to normalize $_GET['q'].
  $path = trim($path, '/');

  return $path;
}

That should be enough to get you started on a deployment script. Let us know in the comments if you have any questions, or share any ideas you have for improving the script!

Credit goes to Brandon Shi, our senior developer here at ImageX Media, for the ideas behind much of this script.

Jun 20 2012
Jun 20

We've tried a lot of project management systems over the years. In one way or another, they have always seemed lacking, confusing, or just a pain in the rear end. If they had good tools for project managers, they were confusing to developers. If they were useful for developers, designers complained about the eye-sores. No one system ever seemed to satisfy the team.

We recently started using GitHub for project management after the developers started raving about how much they loved it for managing code. To our surprise, GitHub has proven a solid option for project management. Our designers have even started using it for some of their projects, which I think says something about GitHub's aesthetics. With a little bit of something for each role, GitHub is starting to come out on top as the tool of choice for hosting code, managing projects, and facilitating project communication.

Project Introductions

GitHub is pretty developer-centric. As such, the first thing a developer sees when they open a project is a view of the code repository. Below that, GitHub automatically renders the README file found in the root of the code base. It's a very typical practice for software projects, especially open source ones, to have this file in place. The README can be in various formats, but a favorite of mine is Markdown. Simply giving the README an extension of .md tells GitHub to render your README.md using Markdown syntax. Even better, GitHub has its own flavor of Markdown. Since the developers of your project see the README first, this is a great place for information that will get them up and running with the project as quickly as possible. Be concise. If you need to write more than a few sentences, chances are you should be linking off to more in-depth documentation in your project's wiki. Here's a quick guideline of some of the things you might want to include in your README; a minimal skeleton follows the list below.

  1. A quick project overview.

    Provide a short description of the project's goals and a bit of background. Any links that you frequently access are good to include at the top as well, for easy access. Everyone loves easy access.

  2. Information about the directory structure.

    Typically we have more than just Drupal in our repository root, so it's helpful to have a brief description of what is in there. We typically have a drush folder for aliases and commands, as well as a patches directory with its own README.

  3. How to get started developing.

    Tell the developers the best way to jump into the project: things like "clone this repository, create a feature branch, run the installer, download a copy of the database," etc. Whoever reviews the pull request should also do things like remove the remote branch from the repository once it is merged.

  4. Code promotion workflow.

    It's a good idea to outline your development process, as it may change from project to project. Do you want people to fork your repository and send pull requests; create feature branches and send pull requests; or just go ahead and commit to master? Let the developers know up-front, so there's no confusion.

  5. Environments.

    Outline information for your dev, staging and live environments, if you have them. Also outline the process for getting things to the various places. How do I make sure my code is on staging? What is the best way to grab a database dump? We like to set up Drush aliases for each environment ahead of time as a means of outlining this information and giving developers a good starting point, along with example commands for typical operations; see the sample alias file below.

  6. Links to where to find more information.

    Typically this is our wiki, where we keep more detailed documentation and notes on things; project details like the original proposal's SOW, credentials to environments, Scrum Notes, Pre-launch checklists, etc.
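Pulling those items together, a minimal README.md skeleton might look something like this (all names and URLs are illustrative):

# Example Project

One-paragraph overview of the project's goals. Wiki: https://github.com/example/project/wiki

## Directory structure

* docroot/ - the Drupal root
* drush/ - aliases and commands
* patches/ - applied patches, with its own README

## Getting started

Clone the repository, create a feature branch, run the installer, and pull a copy of the database from staging.

## Code promotion workflow

Create feature branches and send pull requests; whoever merges a pull request removes the remote branch.

## Environments

Dev, staging and live; see the drush/ directory for per-environment aliases.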

We've attempted to create a drupal-boilerplate, of sorts, for our Drupal projects which we're continuously re-using for new projects and modifying when we find things that work better. Take a look, and if you find it useful, please feel free to use it! If you find anything missing, or have ideas on improving it, please fork it and send us a pull request!
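And to make the environments item above concrete, here's a minimal sketch of a Drush alias file, using the Drush 5-era array syntax; the hostnames, paths and usernames are hypothetical:

<?php
// drush/aliases.drushrc.php: a hypothetical staging alias.
$aliases['stage'] = array(
  'uri' => 'stage.example.com',
  'root' => '/var/www/vhosts/example/public',
  'remote-host' => 'stage.example.com',
  'remote-user' => 'deploy',
);

With that in place, drush @stage sql-dump > stage.sql grabs a database dump from staging.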

Working with GitHub Issues

GitHub has a pretty simple issue management system for bug tracking, but it is flexible enough to be a powerful tool for managing entire projects, large and small. It has issues which can reference each other; labels for attaching metadata to your issues; methods for attaching code to your issues; and even milestones for grouping and focusing your issues within time blocks.

Referencing and Association

Issues can be associated with each other by simply throwing an #issue-number (e.g. #3) within the body of another issue. This is useful in many ways. Firstly, it keeps the relationship simple: we don't have to worry about what kind of relationship it is (parent/sibling/child), just that it's related. Nevertheless, there are a couple of tricks that make this a little more useful if you understand how it works. Let me give you an example.

Let's say you typically create an issue for a content type, and one of the fields on that content type is a taxonomy vocabulary. You probably want to break that vocabulary creation out into its own issue. So you create the issue for the news content type.

and then you create an issue for the taxonomy vocabulary and, within your description, link to the news issue.

Just by putting in the #ticket-number (in this case #4) GitHub creates a link to the news issue AND it places a back-link reference within the news issue to your tags issue!

As a part of this reference you will notice that it also gives you the status of the referenced issue. Very handy for whoever is assigned the news issue: they can easily see the status of its 'dependency'. I use that term loosely because it is a dependency in this instance, but not always.

Issue Labels

Labels are a simple and effective way to add metadata to your issues. A lot of systems tend to create fields and categories with various values in an effort to give you fine-grained control of an issue's metadata. I've found the simple tagging system that GitHub employs to be very efficient and more flexible.

GitHub comes with a few labels by default: bug, duplicate, enhancement, invalid, question, and won't fix. These give you a good idea of how to start using labels. For example, "bug" is a type of issue, while "won't fix" is more of a status. Tags can be anything, and if chosen wisely, can give any developer an immediate clue as to what sort of ticket it is, what section it might apply to, or what status it is in at a quick glance.

While they're useful for developers, they're also good for the organizer of the project in that they serve as a great filtering mechanism as well. For instance, just by selecting various labels, I can see all of the issues that are "migration" issues for "taxonomy", or "content types."

Attach Code to an Existing Issue

Pull requests are an amazing tool for code collaboration. If you're new to the concept, check out this pull request demo. It's a quick and easy way for developers to basically create a copy of the code base (by either forking or branching) and suggest modifications to the existing code, or contribute new code. It allows the other members of the project to then review that code, make their own suggestions with in-line commenting, and then make a decision as to whether to merge it into the main code base or not. We've found the in-line commenting with pull requests to be immensely useful since they keep everyone in the project in the loop with changes that are happening.
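For the feature-branch flavor of that flow, the mechanics are just a couple of git commands (the branch name is illustrative):

git checkout -b feature/news-content-type
# ...commit your changes...
git push -u origin feature/news-content-type

From there, you open the pull request from that branch in the GitHub web interface.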

Pull requests in general are a great means of peer review and have helped to keep the quality of our code up to everyone's standards. There's a bit of overhead in that it may take a little longer for some new piece of code to be merged in, so plan accordingly. But this also means we find bugs sooner, typically before they're actually introduced into the master branch of the code.

I had one gripe with pull requests: when you create one through GitHub's web interface, it basically creates a new issue. Though you can certainly reference a particular issue within your pull request, it's still a separate issue. However, through a nice command-line tool called Hub, we've found there's a way to turn issues into pull requests! Very handy for keeping your discussions and code all in one place and not having to deal with multiple issues about the same thing.
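At the time of writing, Hub's pull-request command accepts an -i flag to attach the pull request to an existing issue; something like the following, where the issue number and branch names are illustrative:

hub pull-request -i 4 -b master -h feature/tags-taxonomy

That converts issue #4 into a pull request asking to merge feature/tags-taxonomy into master.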

Milestones

GitHub has a mechanism for milestones that is quite typical of many project systems these days. When you create a new milestone, it simply has a title, a description, and a due-date picker.

You can have a nice overview during the time-boxed iteration that gives you the percentage complete.

We tend to only plan one sprint ahead, but there is a milestone created for each iteration up until the end of the project.

We grab these tickets from the Backlog, which is essentially just any ticket that is not in a Sprint.

Huboard

GitHub's issue tracking system lacks a mechanism for prioritizing your issues. You could probably come up with labels for High, Medium and Low priorities, but I tend to prefer an ordered list with the highest priority things on top.

Enter Huboard, which gives you a nice Kanban-style interface (similar to Trello) right on top of the GitHub API. You're looking at your GitHub issues, but with a different interface. The instructions for setting this up are quite sufficient, so I'll not reiterate them, but I've found it quick and easy to set up on Heroku with very little maintenance overhead. With Huboard, we now have a means of seeing what the priority tasks are for the week, and it gives developers an easy way to see what they should work on next.
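For reference, when we set ours up, the Heroku deployment was roughly the following (this assumes the Heroku toolbelt is installed, and the repository location may have changed since this was written):

git clone https://github.com/rauhryan/huboard.git
cd huboard
heroku create
git push heroku master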

Logins

Some project management systems require a new login for every instance of the software. For instance, if you have two different clients using the same project management software, you may have to remember two different username and password combinations, and your authentication will not transfer from one to the other. GitHub lets you access all the projects and repositories you have permission to without multiple logins.

GitHub is lean and spare, and you may find there are features missing that you're accustomed to. Luckily, the team over at GitHub is continually making improvements to their product, and they blog about it often.

In summary, GitHub is great for the technically-minded person, but less tech-savvy people may not find it as attractive. I'm still working on ways to report on progress to project stakeholders in a more visual way, and when I find one I like, I plan to update you all.

If you have any suggestions on things we might also do to improve our process, or would like to share with us some of the exciting things you're doing with your own processes, please hit us up in the comments section! We'd love to hear from you! And remember, Lullabot loves you!

Sep 20 2011
Sep 20

Connecting a Github private repository to a private instance of Jenkins can be tricky. In this post I’ll show you how to configure both services so that pushes to your Github repository will automatically trigger builds in Jenkins, all while keeping both safely hidden from the general public.

Step 1: Grant your server access to your private Github repository.

You may have Jenkins running on the same machine as your webhost, or they may be on separate machines with Jenkins configured to treat the webhost as a remote node. Either way, you’re going to want to SSH into the webhost and ensure that whichever Linux user Jenkins builds jobs as can authenticate to GitHub. We have a robot user called ‘Bender’ (yeah, from Futurama) exactly for this purpose, so I’ll use that in the examples.

Instead of installing your own private key on the Bender account, create a new public/private key pair, and then either create a GitHub user for Bender or use the GitHub deploy keys feature. Follow those links for the excellent guides from GitHub.

There are pros and cons to each approach, which are discussed on the deploy-keys help page, but if you have multiple private repositories and don’t want a separate key for each, it's better to create a GitHub user for Bender.
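Generating the key pair as Bender is a one-liner; the comment is just a label, so use whatever identifies the key (the email here is hypothetical):

ssh-keygen -t rsa -C "bender@example.com"

Then paste the contents of ~/.ssh/id_rsa.pub into either the Bender GitHub user's SSH keys or the repository's deploy keys.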

Don’t proceed until you get the message “You’ve successfully authenticated” when executing ssh git@github.com as Bender.

Step 2: Install the Git and Github plugins.

Under ‘Manage Jenkins’ -> ‘Manage Plugins’, select and install both Github and Git plugins. Restart to finish the installation.

Configure both of these at ‘Manage Jenkins’ -> ‘Configure System’. Make sure the path to git is correctly set, and choose ‘Manually manage hook URLs’ under the ‘Github Web Hook’ section.

Step 3: Configure a Jenkins job to use your repository.

The interface for configuring a job is peppered with references to Github, so it can be confusing.

Firstly, add the https:// URL for your repository in the ‘GitHub project’ textfield under the general settings.

Then you’ll need to enable Git under ‘Source Code Management’. Use the SSH-style syntax for the repository URL: git@github.com:user/repo.git (this is required as it’s a private repo), and specify a branch if needed. The advanced settings are optional.

Under ‘Build Triggers’, tick ‘Build when a change is pushed to Github’.

Save and build your job. You should get a successful build that correctly clones the repository to the webhost. Confirm by SSH’ing in and inspecting it.

Step 4: Grant Github access to your private Jenkins instance.

Unfortunately, this step will require you to store a plain-text user/password combination on Github, unless you’re using the Github OAuth plugin (see below). The good news is that you can lock down the user pretty tightly, so that in the event of a security breach on Github, an attacker would not be able to do anything more malicious than build your project and view previous builds.

There are a number of different authentication options in the ‘Security Realm’ section of ‘Manage Jenkins’ -> ‘Configure System’. Depending on your setup, these steps could differ, but in essence you need to create a new user for Github (I’ll just assume you used the username ‘Github’). If you’re using the ‘Unix user/group database’ method, be sure to lock that new user down by restricting its shell so that SSH sessions are denied.
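On most Linux distributions, denying SSH sessions is as simple as pointing the user's shell at nologin (the exact path varies by distribution):

sudo usermod -s /usr/sbin/nologin github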

If you’re using the Github OAuth plugin for Jenkins to tightly tie your access to Github accounts, you can just tick the option to allow access to the POST webhook URL. However, this option is only available when using Github as the authentication server. I won’t go into detail but this allows you to skip this step entirely, as it allows anonymous access to the URL.

In the ‘Authorization’ section, choose ‘Project-based Matrix Authorization Strategy’, so that you can give project-level permissions to the Github user. You’ll probably deny access to everything for anonymous users, then grant just one permission here for Github: ‘Overall/Read’.

In the configuration for the job that will be automatically triggered, tick ‘Enable project-based security’ and then grant ‘Job/Read’ and ‘Job/Build’ to Github.

Test the Github user by logging into Jenkins with its credentials and ensuring that you can only see and build the relevant job.

Step 5: Add the hooks to Github.

Click the ‘Admin’ button on the main page of your private repository in Github. Under the ‘Service hooks’ -> ‘Post-Receive URLs’ section, add the URL of your Jenkins webhook, including the credentials of the Github user you created. For example:

https://USERNAME:PASSWORD@jenkins.example.com/github-webhook/

It’s great that Jenkins supports authentication in this format. You can test the hook from here, and confirm that there is a result under the ‘Github Hook Log’ section of your Jenkins project (it’s in the left column).

That’s it! Push some code to your repository and your project will gracefully begin building. As an added bonus, you get great information from the Github plugin, such as links to the diff for each build shown on the build page itself.

Jun 07 2010
Jun 07

I finally got around to making a "real" feature rather than just playing around with the functionality.

I had left a comment a while back on Peter Rukavina's post 'How to run a silent auction using Drupal' about how this would be perfect as a feature. So, I installed a new site, grabbed the necessary modules, and turned Peter's description into a silent_auction feature.

The code is available from my (also brand new) Silent Auction Github repo.

This was a very easy process. I'm familiar with Drupal site building, so I pointed and clicked my way through the content type and manage fields screens as the main component of the project. Peter did a very good job of documenting the steps he took, so it was easy to do.

I then added his custom code to the silent_auction.module file to override comment displays. I didn't quite complete the "amount raised" block - my buggy custom block code needs some help, Github collaborators welcome :P

This was also my first experience sitting down and getting into git / Github. It's pretty amazing how great the experience of git is when you're having your hand held by Github.

The nice thing to see about features like this is that it supports a builder's point of view: a certain way of building something (no user signups/accounts, using comments as bids directly, etc.), plus the choice of certain modules to put it together.

Node to node relations, making a full module, enabling paypal, requiring user accounts -- all are different ways one COULD choose to build a silent auction feature. If such a silentauction.module were to be built, we'd surely find a simple_silentauction.module not far behind.

Features nicely encapsulates a starting point, but with overrides as easy as changing a few settings, it makes it simple to continue the Lego-block building tradition of Drupal, as opposed to the much heavier full module or full install profile. Nice!

Now, where do I go to bang people over the head to make sure all their contrib modules are exportable…

Useful links:

  • How to Build a New Feature - this has extended information for making an Open Atrium specific feature, but the basics are still there, especially the non-UI methods for making a new feature.
