Jul 31 2019

Pantheon is an excellent hosting service for both Drupal and WordPress sites. But to make their platform work and scale well, they have built a number of limits into the platform, including process time limits and memory limits. These are large enough for the vast majority of projects, but from time to time they run you into trouble on large jobs.

For data loading and updates, their official answer is typically to copy the database to another server, run your job there, and copy the database back onto their server. That’s fine if you can afford to freeze updates to your production site, set up a process to mirror changes into your temporary copy, or take on some other project overhead that can be limiting and challenging. But sometimes that’s not an option, or the data load takes too long for that to be practical on a regular basis.

I recently needed to do a very large import of records into a Drupal database, and so started to play around with solutions that would let me work around those time limits. We were looking at about 50 million data writes, and the running time was initially over a week to complete the job.

Since Drupal’s batch system was created to solve this exact problem, it seemed like a good place to start. For this solution you need a file you can load and parse in segments, like a CSV file, which you can read one line at a time. It does not have to represent the final state: you can use this to load the data directly if each record is quick to process, or you can serialize each record into a table or a queue job to process later.

One quick note about the code samples: I wrote these based on the service-based approach outlined in my post about batch services and the batch service module I discussed there. It could be adapted to a more traditional batch job, but I like the clarity the wrapper provides for breaking this down for discussion.

The general concept here is that we upload the file and then progressively process it from within a batch job. The code samples below provide two classes to achieve this. The first is a form that provides a managed file field, which creates a file entity that can be reliably passed to the batch processor. From there the batch service takes over and uses a bit of basic PHP file handling to load the file into a database table. If you need to do more than load the data into the database directly (say, create complex entities or perform other tasks) you can set up a second phase to run through the values and do that heavier lifting.

To get us started the form includes this managed file:

   $form['file'] = [
     '#type' => 'managed_file',
     '#name' => 'data_file',
     '#title' => $this->t('Data file'),
     '#description' => $this->t('CSV format for this example.'),
     '#upload_location' => 'private://example_pantheon_loader_data/',
     '#upload_validators' => [
       'file_validate_extensions' => ['csv'],
     ],
   ];

The managed file form element automagically gives you a file entity, and the value in the form state is the ID of that entity. The file will be temporary and have no references once the process is complete, so depending on your site setup it will eventually be purged. That means we can pass all the values straight through to our batch processor:

$batch = $this->dataLoaderBatchService->generateBatchJob($form_state->getValues());

When the data file is small enough, a few thousand rows at most, you can load it all right away without the need for a batch job. But larger files run into both time and memory concerns, and the whole point of this is to avoid those. With this approach we can ignore them, and we’re only limited by Pantheon’s upload file size. If the file size is too large you can upload the file via SFTP and read it directly from there, so while the form is an easy way to load the file, you have other options.
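
If you do go the SFTP route for an oversized file, the session looks roughly like this (a sketch only; the host comes from your dashboard’s connection info, and the private files path assumes Pantheon’s usual private:// location under sites/default/files/private):

  # Push the CSV straight into the environment's private files directory.
  sftp -o Port=2222 <env>.<site-uuid>@appserver.<env>.<site-uuid>.drush.in
  sftp> cd files/private/example_pantheon_loader_data
  sftp> put my-giant-data-file.csv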

As we set up the file for processing in the batch job, we really need the file path, not the ID. The main reason to use a managed file is that Drupal can reliably give us the file path on a Pantheon server without us needing to know anything about where they have things stashed. Since we’re about to use generic PHP functions for file processing, we need that path:

$fid = array_pop($data['file']);
$fileEntity = File::load($fid);
$ops = [];

if (empty($fileEntity)) {
  $this->logger->error('Unable to load file data for processing.');
  return [];
}
$filePath = $this->fileSystem->realpath($fileEntity->getFileUri());
$ops = ['processData' => [$filePath]];

Now we have a file, and since it’s a CSV we can load a few rows at a time, process them, and then start again on the next pass.

Our batch processing function needs to track two things in addition to the file: the header values and the current file position. So in the first pass we initialize the position to zero and then load the first row as the header. For every pass after that we need to find the point where we left off. For this we use generic PHP file functions to open the file and seek to the current location:

// Old-school file handling.
$path = array_pop($data);
$file = fopen($path, "r");
...
fseek($file, $filePos);

// Each pass we process 100 lines; if you have to do something complex
// you might want to reduce the run.
for ($i = 0; $i < 100; $i++) {
  $row = fgetcsv($file);
  if (!empty($row)) {
    $data = array_combine($header, $row);
    $rowData = [
      'col_one' => $data['field_name'],
      'data' => serialize($data),
      'timestamp' => time(),
    ];
    $row_id = $this->database->insert('example_pantheon_loader_tracker')
      ->fields($rowData)
      ->execute();

    // If you're setting up for a queue you include something like this.
    // $queue = $this->queueFactory->get('example_pantheon_loader_remap');
    // $queue->createItem($row_id);
  }
  else {
    break;
  }
}
$filePos = (float) ftell($file);
$context['finished'] = $filePos / filesize($path);
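
The ellipsis in the sample above hides the first-pass setup. A minimal sketch of that initialization, assuming the header and position are carried between passes in the batch context’s sandbox (the names here are illustrative, not the exact code from the gist):

  // First pass: read the header row and start from position zero.
  if (empty($context['sandbox']['header'])) {
    $header = fgetcsv($file);
    $context['sandbox']['header'] = $header;
    $filePos = ftell($file);
  }
  // Later passes: restore the header and pick up where we left off.
  else {
    $header = $context['sandbox']['header'];
    $filePos = $context['sandbox']['file_pos'];
  }
  // ...and after the processing loop, persist the position for the next pass:
  // $context['sandbox']['file_pos'] = $filePos;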

The example code just dumps this all into a database table. This can be useful as a raw data loader if you need to add a large data set to an existing site that’s used for reference data or something similar. It can also be used as the base for creating more complex objects. The example code includes comments about generating a queue worker that could then run over time on cron or as another batch job; the Queue UI module provides a simple interface to run those in a batch job.
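
If you go the queue route, the worker that later chews through those staged rows might look something like this (a sketch only; the plugin ID matches the commented-out queue name above, but the module namespace, table column and processing logic are placeholders):

  namespace Drupal\example_pantheon_loader\Plugin\QueueWorker;

  use Drupal\Core\Queue\QueueWorkerBase;

  /**
   * Processes rows staged by the loader.
   *
   * @QueueWorker(
   *   id = "example_pantheon_loader_remap",
   *   title = @Translation("Example Pantheon loader remap"),
   *   cron = {"time" = 60}
   * )
   */
  class LoaderRemapWorker extends QueueWorkerBase {

    /**
     * {@inheritdoc}
     */
    public function processItem($data) {
      // $data is the row ID created during the batch pass.
      $record = \Drupal::database()
        ->select('example_pantheon_loader_tracker', 't')
        ->fields('t')
        ->condition('id', $data)
        ->execute()
        ->fetchAssoc();
      if ($record) {
        $values = unserialize($record['data']);
        // Do the heavier lifting here: create entities, remap references, etc.
      }
    }

  }

With something like that in place, cron or the Queue UI module can drain the queue in whatever size bites Pantheon is comfortable with.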

I’ve run this process for several hours at a stretch. Pantheon does run into system errors if a batch job is left running for extremely long stretches (I ran into problems on some runs after 6-8 hours of run time), so a prep pass into the database followed by a queue, or something else that’s easy to restart, has been more reliable.


Apr 18 2019
We have some pretty heavy sites on Pantheon, with big tables to manage our circulation and so on, with sizes over 1 GB and close to 2 million records. When we clone the environment for CI or dev purposes, it can take a while (15-30 minutes). So we have created a reference environment called master-lite, which mimics the content but truncates the huge tables. This dramatically reduces the backup size and the time to clone.

We've set up some CI to test new branches we push up (hat tip to Steve Persch, who helped us better understand CI and DevOps), and it will create a CI environment based on master-lite, which doesn't take as long. Then one of the things it will do is perform a visual regression test between the CI environment (based on my dev branch) and the master-lite environment (which is based on the master branch).

All that sounds great, but unless I keep master-lite up to date it can get stale pretty quickly, so I have created a Bash script that runs several Terminus commands to build the master-lite environment from our production environment:
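
The script itself lives in a gist; stripped down to its essentials, the idea is something like the following sketch (the site names, the multidev name, and the .sql file paths are placeholders, not the actual script):

  #!/bin/bash
  # Rebuild the master-lite reference environment from production.
  SITES="site1 site2 site3"

  for SITE in $SITES; do
    # Copy the production database and files down to master-lite.
    terminus env:clone-content "$SITE.live" master-lite --yes

    # Trim the huge tables, except on site3 which doesn't need it yet.
    if [ "$SITE" != "site3" ]; then
      MYSQL_CMD=$(terminus connection:info "$SITE.master-lite" --field=mysql_command)
      $MYSQL_CMD < "trim-$SITE.sql"
    fi
  done

The real script is more involved, but the shape is the same: clone content down, then run the per-site trimming SQL.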

Along with this script, I have separate .sql files named after the sites, which contain site-specific SQL commands to trim away the fat, so to speak. The script also has an if branch to exclude "site3", since that site hasn't grown to the point that I need to do any trimming.

Then I set this up as a cron job to run every day at 3am from a separate server, and now my master-lite environment is kept up to date automatically and is much faster/easier to clone for CI/dev purposes.

Dec 14 2017

When working with Pantheon, you’re presented with the three typical environments: Dev, Test and Live. This scheme is very common among major hosting providers, and not without reason: it allows you to plan and execute an effective and efficient development process that takes every client need into consideration. We can use CircleCI to manage that process.

CircleCI works off the circle.yml file located at the root of the project. It is a script of everything the virtual machine will do for you in the cloud, including testing and delivery. The script is triggered by a commit into the repository, unless you have it configured to react only to commits on branches with open pull requests. It is divided into sections, each representing a phase of the Build-Test-Deploy process.

In the deployment section, you can put instructions in order to deploy your code to your web servers. A common deployment would look like the following:

deployment:
  dev:
    branch: master
    commands:
      - ./merge_to_master.sh

This literally means: perform the operations listed under commands every time a commit is merged into the master branch. You may think a deployment block like this doesn’t have to be used for an actual deployment, and it’s true: you can do whatever you want there. It’s ideal for performing deployments, but in essence the deployment section lets you implement conditional post-build subscripts that react differently depending on the nature of the action that triggered the whole build.

The Drops 8 Pantheon upstream for Drupal 8 comes with a very handy circle.yml that can help you set up a basic CircleCI workflow in a matter of minutes. It relies heavily on Pantheon’s Terminus CLI and a couple of plugins, like the excellent Terminus Build Tools Plugin, which provides probably the most important call of the whole script:

terminus build:env:create -n "$TERMINUS_SITE.dev" "$TERMINUS_ENV" --yes --clone-content --db-only --notify="$NOTIFY"

The line above creates a multidev environment in Pantheon, merging the code and generated assets in Circle’s VM into the code coming from the dev environment, and it also clones the database from there. You can then use Drush to update that database with the configuration changes you just merged in.
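
In practice that update boils down to a few remote Drush calls along these lines (a sketch, assuming a Drupal 8 configuration-import workflow and the Terminus Drush integration; your circle.yml may differ):

  # Run database updates and import the configuration that was just merged.
  terminus drush "$TERMINUS_SITE.$TERMINUS_ENV" -- updatedb -y
  terminus drush "$TERMINUS_SITE.$TERMINUS_ENV" -- config-import -y
  terminus drush "$TERMINUS_SITE.$TERMINUS_ENV" -- cache-rebuild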

Once you get to the deployment section, you already have a functional multidev environment. The deployment happens by merging the artifact back into dev:

deployment:
  build-assets:
    branch: master
    commands:
      - terminus build:env:merge -n "$TERMINUS_SITE.$TERMINUS_ENV" --yes

This workflow assumes a simple Git workflow where you just create feature branches and merge them into master when they’re ready. It also takes the deployment process only to the point where code reaches dev, which is sometimes not enough.

Integrating release branches

When working in a gitflow that involves a central release branch, the perfect environment to host a site that completely reflects the state of the release branch is the dev environment; after all, active development only happens on that branch. Assuming your release branch is statically called develop:

deployment:
  dev:
    branch: develop
    commands:
      # Deploy to DEV environment.
      - terminus build-env:merge -n "$TERMINUS_SITE.$TERMINUS_ENV" --yes --delete
      - ./rebuild_dev.sh
  test:
    branch: master
    commands:
      # Deploy to DEV environment.
      - terminus build-env:merge -n "$TERMINUS_SITE.$TERMINUS_ENV" --yes --delete
      - ./rebuild_dev.sh
      # Deploy to TEST environment.
      - terminus env:deploy $TERMINUS_SITE.test --sync-content
      - ./rebuild_test.sh
      - terminus env:clone-content "$TERMINUS_SITE.test" "dev" --yes

This way, when you merge into the release branch, the multidev environment associated with it will get merged into dev and deleted, and dev will be rebuilt.

The feature is available in Dev right after merging into the release branch.

The same happens on release day when the release branch is merged into master, but after dev is rebuilt, it is also deployed to the test environment:

The release branch goes all the way to the test environment.

A few things to notice about this process:

  • The --sync-content option brings database and files from the live environment to test at the same time code is coming there from dev. By rebuilding test, we’re now able to test the latest changes in code against the latest changes in content, assuming live is your primary content entry point.
  • The last Terminus command takes the database from test and sends it back to dev. So, to recap: the database originally came from live, was rebuilt in test using dev’s fresh code, and now goes back to dev. At this moment, test and dev are identical, at least until the next commit lands on the release branch.
  • This process facilitates testing. While the next release is already in progress and transforming dev, the client can take all the time they need to give final approval for what’s in test. Once that happens, the deployment to live should occur in a semi-automatic way at most. But nothing really prevents you from using this same approach to also automate the deployment to live (see the one-liner after this list). Well, nothing but good judgment.
  • By using circle.yml to handle the deployment process, you help keep workflow configuration centralized and accessible. With the appropriate system in place, you can trigger a complete and fully automated deployment just by making a commit to GitHub, and all you’ll ever need to know about the process is in that single file.
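
For completeness, the fully automated push to live mentioned above would be a single extra Terminus call, guarded by whatever approval process you trust (a sketch, not part of the workflow described here):

  # Only if you really want merges to master to go all the way to production.
  terminus env:deploy "$TERMINUS_SITE.live" --note="Automated deploy from CircleCI"
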
Feb 08 2014

In Kalatheme in Kalabox on Pantheon we pulled down a Kalatheme-based sub-theme into a recently installed Kalabox on our laptop, so we could run it locally and work on the project using Eclipse or any other IDE.

In this article we explore a simple but realistic Git-based workflow for Multidev and non-Multidev topic branches of a Pantheon dev project.

Branches and environments

When you are going to make a change, make a branch.


$ git status
# On branch adevbranch
nothing to commit (working directory clean)
$ 
$ git checkout -b mytinycontribution
…
mess around
…
works well!
...
$ git commit -am "My tiny contribution makes a big difference. Oh, and downloaded views"
… 
$ git checkout adevbranch
$ git merge mytinycontribution
$ git push origin adevbranch

Cool!

Now, an environment in Drupal is the actual running code, meaning versioned code + database + files.

If you are working with everything in code, and you should be, the database and the files basically constitute content plus superficial state (cached images, CSS, JavaScript). But you need them to actually see what your commit has done. Hence, “environment”.

The beauty of multidev on Pantheon, for example, is that you are given a full environment for each topic branch on your git workflow.

For more on branches, see References 1.2

For more on multidev, see References 2.5 and resources on Pantheon site.

Making changes and pushing back to a Multidev environment

We covered cloning a site into the Kalabox environment here.

Without Kalabox, it's a question of generating ssh keys on your dev environment (laptop or VPS), loading them into your Pantheon account, and getting started with Git.
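
In practice that’s just a couple of commands (a sketch; the exact clone URL comes from your site’s dashboard, and the UUID below is a placeholder):

  # Generate a key pair (skip if you already have one) and copy the public
  # key into your Pantheon account under Your Keys.
  ssh-keygen -t rsa
  cat ~/.ssh/id_rsa.pub

  # Clone the repository using the SSH URL shown on the site dashboard.
  git clone ssh://codeserver.dev.<uuid>@codeserver.dev.<uuid>.drush.in:2222/~/repository.git multidevbranch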

So however you got it, you change directory into your newly cloned repo, do your changes, commit them, and push back.

$ cd multidevbranch
$ git branch -a
* master
  remotes/origin/HEAD -> origin/master
  remotes/origin/multidevbranch02
  remotes/origin/master
  remotes/origin/master_before_restore_to_nnnnnnnn
  remotes/origin/multidevbranch
$ git checkout multidevbranch
Branch multidevbranch set up to track remote branch multidevbranch from origin.
Switched to a new branch 'multidevbranch'

So now, conscious of the fact that you are not on “master” and not going to screw anything up, you make your changes, test them in your local environment, then if happy:

$ git commit -am "done it"
$ git push origin multidevbranch

Now if you're really getting confident, and someone has approved the fruit of your efforts, perhaps you'd like to actually merge into master (the Pantheon 'dev' environment):

$ git checkout master   # switch to the dev branch
$ git pull origin master # did anyone else commit anything while I was working? If so fetch it and merge it into local master
$ git merge origin/multidevbranch # merge in your stuff you just pushed to pantheon
$ git push origin master  # merge it into dev environment

Coming into work and keeping your Multidev branch up-to-date

At the risk of redundancy, here is what you do on any morning, actually; also works for after lunch, or getting home and wanting to do something after dinner:

$ cd multidevbranch
$ git branch -a
* master
  remotes/origin/HEAD -> origin/master
  remotes/origin/multidevbranch02
  remotes/origin/master
  remotes/origin/master_before_restore_to_nnnnnnnn
  remotes/origin/multidevbranch
$ git pull origin master
$ git checkout multidevbranch
$ git merge origin/master
$ git push origin multidevbranch ## bring your multidev environment up-to-date!

Keeping your local Kalabox environment up-to-date

Now that the local branch is up to date and useful into the bargain, what happens if others have added files, etc.? You’ve got to keep your local environment up to date too. For VPS or non-Kalabox situations, just download and untar and/or use drush.
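
For that non-Kalabox case, something along these lines will do it (a sketch; it assumes you have downloaded Pantheon’s drush aliases, and the alias name, UUID and paths shown are placeholders):

  # Pull a fresh database dump from dev and load it into the local site.
  drush @pantheon.mysite.dev sql-dump > /tmp/mysite-dev.sql
  drush @self sql-cli < /tmp/mysite-dev.sql

  # Pull the files directory over SSH/rsync, then clear cache.
  rsync -rlvz --ipv4 -e 'ssh -p 2222' \
    dev.<uuid>@appserver.dev.<uuid>.drush.in:files/ sites/default/files/
  drush cc all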

For Kalabox, we have terminatur (see References 4.1), a Kalamuna creation, included in the Kalabox setup.

To grab database and files from dev, then download to kalabox:

1. Go to multidevbranch Workflow
2. Clone from dev Environment (database and files and run update.php all checked)
3. Hit button "Clone the Database & Files from Dev..."

4. Use terminatur within Kalabox to refresh local environment with the database and files

$ drush help pulldata
Pulls down the database for a Pantheon site.

Examples:
 drush pulldata sitename.dev               Pulls down the database for a site 
                                           at @terminatur.mysite.dev.         
Arguments:
 sitename                                  The sitename.dev part of  
                                           @terminatur.sitename.dev. 

$ drush help pullfiles
Pulls down the files for a Pantheon site.

Examples:
 drush pullfiles sitename.dev              Pulls down the files for a site at 
                                           @terminatur.mysite.dev.            
Arguments:
 sitename                                  The sitename.dev part of  
                                           @terminatur.sitename.dev. 
Options:
 --destination=/www/>                 The destination of your webroot. 

$ drush ta          # refresh aliases usable by terminatur
$ drush sa
...
...
@terminatur.multidevbranch.dev
@terminatur.multidevbranch02.dev
@terminatur.multidevbranch02.dev

So the sitename parameter will be: multidevbranch.dev
And the commands will be:

$ drush pullfiles multidevbranch.dev
$ drush pulldata multidevbranch.dev
$ drush cc all  ## Did you remember to clear cache?

Results:


[email protected]:/var/www/multidevbranch$ drush pullfiles multidevbranch.dev
Downloading files... [warning]
Files downloaded. [success]
[email protected]:/var/www/multidevbranch$ drush pulldata multidevbranch.dev
How do you want to download your database?
[0] : Cancel
[1] : Get it from the latest Pantheon backup.
[2] : Create a new Pantheon backup and download it.
[3] : Pick one from a list of Pantheon backups.

2
Creating database backup... [warning]
Database backup complete! [success]
Downloading data... [warning]
Data downloaded. [success]
[email protected]:/var/www/multidevbranch$

References

  1. Git

    1. Git Pro book

    2. Git Pro book on branches

  2. Kalabox

    1. Kalabox website

    2. Kalabox wiki page

    3. Video: http://www.youtube.com/watch?v=ed1EufMfAJQ (Jan 15 2014)

    4. This Kalamuna article, Power to the People, explains that Kalabox is built upon a powerful Vagrant driven stack, Kalastack

    5. This Kalamuna article, Ride the Hydra: Reduce Complexity, introduces the three goals for reducing Drupal workflow complexity:

      1. Developers must use a standardized local development platform. (Kalabox)

      2. Deployment (moving code between local, staging, and production environments) must be automated. (Pantheon)

      3. Development must be transparent to site owners and team members alike. (Pantheon workflow, including branches coupled with complete environments (code+db+files), i.e. Multidev  (My own take: Of course you can do “branching on the cheap” and just use Kalabox for that, or own VPS server; mix with additional GitHub remote!).

    6. Kalastack on Github

  3. Kalatheme

    1. Kalatheme project page

    2. Kalatheme docs

      1. Panels layouts

      2. SASS and COMPASS

    3. Bootwatch bootstrap themes

    4. Wrapbootstrap example of paid bootstrap themes

    5. Mike Pirog's classic Kalatheme video Keep theming simple with panels and bootstrap

  4. terminatur

    1. GitHub page


Feb 04 2014

What flavor is Kalatheme?

Create a Kalatheme sub-theme project right on Pantheon

Pull it down to your laptop on Kalabox

Work on it in Eclipse IDE, for example

What flavor is Kalatheme?

Kalatheme is a very convenient theme to use, and should be the default theme for Panopoly, with all due respect. Peruse its Drupal project page. Panopoly + Bootstrap 3 + Bootstrap themes + browser-based sub-theme generator (Bootswatch, etc., etc.!) + views grids + reusable custom CSS classes that can be registered as optional in any panels pane + advanced stuff for the folks who, in line with Kalatheme philosophy, don't like to admit they use it: Sass and Compass tools.

I watched an interesting video given by Mike Pirog of Kalamuna, which gives you a really good feel for Kalatheme's philosophy, objectives and look and feel, despite being a few months old. Then take a gander at the Kalatheme docs on d.o. 

Some cool concepts are:

  • Twitter bootstrap
    • Drupal Libraries API for themes!
    • Straightforward upgrade path for any library
    • Responsive classes
  • One region: content (that's it). Then, panels layouts and panes. Page manager, Panelizer, Panopoly goodness.
    • No more blocks! No more regions!
    • Way, way fewer files!
  • Panopoly layouts + Kalatheme layouts + custom layouts

Create a Kalatheme sub-theme project right on Pantheon

  • Sign up and/or login to your pantheon dashboard.
  • Add a new site
  • Select the Panopoly distribution
  • SFTP mode is required, and it will be (should be) by default
  • Visit the site to complete the installation of Panopoly. Initially, just use any old theme. I installed the Panopoly News demo too, just to see some stuff.
  • Once the install process is complete, visit your new site as admin.
  • From the Appearances page click on "Install a new theme" and paste in a link to the latest stable archive of Kalatheme. I entered http://ftp.drupal.org/files/projects/kalatheme-7.x-3.0-rc1.tar.gz in the Install from a URL field and clicked Install (it works since we are in SFTP mode and the necessary permissions are automatically set up).
  • Initially enable Kalatheme and set it to the default and admin theme. You can safely disregard the error message "You do not have a Bootstrap library installed but that is ok! To get equipped either check out our Start Up Guide or run our Setup Wizard."
  • Now to create your sub-theme based on your favorite Bootswatch theme.
    • Did you remember to clear cache after setting a new theme :)  ?
    • Go back to your Admin > Appearances page.
    • At the top is the Setup Kalatheme link, click on it.
    • Complete the setup webform with a name, a Bootswatch theme (with preview! I chose Simplex; you can also choose third-party Bootstrap themes, for example there are paid themes at https://wrapbootstrap.com/), whether or not you want Font Awesome included (you do!), then click on Dress me up.
    • Lo and behold it becometh the default theme everywhere! REJOICE, as the instructions say.
  • Important Pantheonic note: Commit your changes on your site dashboard! Then you can switch to Git mode and do a backup or clone the project with Git. This will be important if you want to download a backup to your local laptop or workstation, say using Kalabox.

Pull it down to your laptop on Kalabox

"Kalabox is more than just a local development environment. It's an easy to use, site building and deployment toolkit for the masses. Advanced webtools now belong to the people." Built on kalastack using Vagrant and VirtualBox, integrated with Pantheon, I'm interested!

I shot them an email at [email protected] to apply for a keycode since kalabox is in private beta. Mike Pirog shot me a nifty code, and I entered it together with my name and address in order to get "boxed". I downloaded the kalabox-1.0-beta4.dmg file for my Mac.

From the Readme.md (please read in its entirety) included in the install package:

Requirements

  • Kalabox has been tested mostly on Mac OS X 10.8 and partially tested on 10.7 and 10.6. It may work on 10.6 or lower. If you try it out on an older OS X version, please share your experience.
  • For now, Kalabox supports 64-bit operating systems only. So if you're on a 32-bit machine, just hang tight, as support is coming soon!
  • Vagrant 1.3.5 and VirtualBox 4.2.18
  • 1GB+ of RAM Kalabox dynamically allocates memory to itself based on your available RAM. It needs at least 1GB available to run. 

Installation

  1. Double click on the installer.
  2. Agree to the terms of service
  3. Rejoice. NOTE: Apple may warn you...

Connecting with Pantheon

All you need is an account and a site on Pantheon. Go to the configure tab, enter your username and password, click back to My Sites and start developing! If you're interested in interacting with your Pantheon sites directly from the command line, you can use some of the handy Drush commands that come packaged with the Terminatur. https://github.com/kalamuna/terminatur

More? https://kalamuna.atlassian.net/wiki/display/kalabox/Kalabox+Home

After installing in the usual Mac way, I ran it. It asked me for permissions and downloaded some extra stuff... After a while (quite a while, actually, something like 15 minutes with a pretty decent internet connection), I had my Kalabox up and running. Edit: Actually, this is a very short time if you take into consideration that a full Linux server is being downloaded and set up!

I clicked on the Configure tab and entered my Pantheon credentials and logged in.

Then I clicked on My Sites and all my sites were to be found. I clicked on one, I thought I checked Download my files also, and chose the nifty option Create a new Pantheon backup and download it, and hit Submit.

The site was downloaded and I was greeted with the Boomshakalaka! message that my site was good to go right here on my laptop. I clicked Great in answer to the offer Give it a try. There was my site right in my local browser!

I had forgotten to click on the Download my files also, so the images weren't present. So from the My Sites tab, I clicked on the gear just below the front page thumbnail of my site, and selected Refresh, then selected the Files checkbox only, and clicked Refresh. My images appeared on my site :)

I then clicked on the Home tab, and then selected SSH. Local Terminal opened at /home/vagrant. I cd'd to /var/www and then to my site and did a drush status

Cool.

Work on it in Eclipse IDE, for example

From https://github.com/kalamuna/kalastack:

"Kalastack uses NFS file sharing. You can access your server webroot at ~/kalabox/www on your host machine. This way you can use your local IDE to edit files on your server."

Well, that was easy!

In a later article, we'll deal with Pantheon integrated workflow using Kalabox. Can't wait!


Feb 03 2014

More and more of my clients are using Pantheon to host their Drupal based web applications. This is not an ad, it's just a fact. I'm finding that more and more of my development work involves cloning Pantheon-based workflow instances and coding and site building within that workflow, and I've seen how it has improved greatly over the years. Recently I had to import a quite large Drupal 6 site for a client hoping for a trouble-free Drupal-oriented hosting experience while we got on with the site renovation project. While the process was straightforward, and the necessary documentation is there (see References), I thought I'd share my experience as warm fuzzies for others having to do the same:

Regular import

From your regular Pantheon dashboard (the initial login page after registering for an account) you simply click on the Add a site link, provide a name, and click on the Create Site link. In a little while you are offered the choice of Start from scratch and Import manually radio buttons. Starting from scratch offers Drupal 6, Drupal 7 or a host of Distribution choices that let you start up an off-the-shelf solution via installation profile.

Selecting the latter offers a variety of alternatives for manual import. In the old days of Pantheon, one would just upload a tarball with database.sql in the Drupal document root. But things are much more organized now. The manual upload is divided into Code, Database and Files archives, each of which should be tarred/gzipped or zipped into its own separate file. Also, for each there are URL (default) and File upload options. It says “Archives must be in tar/gz or zip format. Uploads are limited to 100MB in size. Imports via url are limited to 500MB.”

Now, the URL method, rather than uploading from your laptop, is much better because it's a server-to-server file transfer, with no dependency on the browser window connection, which may time out, etc. So how do I provide that? Very simple: just create your three code, database and files tarred or zipped archives and stick them into the default document root of your VPS or even shared hosting (a secure HTTP over SSL (“https”) URL would provide the best security). Once your site is created on Pantheon, you can quickly delete or move these files from your VPS or shared hosting.

I created my three archives to import one site that did not exceed these limits in the following manner (following References 1):

Creating the code archive (after changing directory on the command line into the Drupal document root, and taking care to exclude .git and the files directory – note the ending dot signifying the current directory):

[email protected]:~/mysite7-legacy$ tar czvf /var/www/4pantheon/mysite_code.tgz --exclude=sites/default/files* --exclude=.git* .

Creating the database archive (from the Drupal document root and using drush although you can use mysqldump of course):

[email protected]:~/mysite7-legacy$ drush sql-dump | gzip > /var/www/4pantheon/mysite.sql.gz

Creating the files archive (from the files directory itself – note the ending dot):

[email protected]:~/mysite7-legacy$ tar czvf /var/www/4pantheon/mysite_files.tgz .

So I ended up with the three files exposed in a web document root as URL's:

  • http://example.com/4pantheon/mysite_code.tgz
    
  • http://example.com/4pantheon/mysite.sql.gz
    
  • http://example.com/4pantheon/mysite_files.tgz
    

I then entered these URL's into the import site manually form fields with URL option selected (default), and hit the red Import Site button.

If the database is close to the 500 MB limit, that means it is actually several GB in size untarred or unzipped. So it could be quite a few minutes of one server talking to the other and then Pantheon unzipping and stuffing the sql into the database.

Now, you can pack quite a few GB of database into a zip or gzip file, and clearing cache (or even truncating cache tables) prior to creating the file will significantly reduce its size also. Not so much for GB of files folder assets, however. Anyway, the good news is that you can create the site with just the codebase and then once you obtain its ssh credentials, you can use alternative methods for database and files tarball uploads of unlimited size.

I'm going to repeat that:

The good news is that you can create the site with just the codebase and then once you obtain its ssh credentials, you can use alternative methods for database and files tarballs of unlimited size.

Here's how it's done.

Highly irregular import

Now for the fun part. What if my database file is bigger than 500 MB, even zipped or g'zipped? What if my files folder is GB's in size and of course zipping doesn't really help anyway? Let's see about a fun way to take care of the files folder first.

Files

Turns out you can just omit the files folder by leaving the Import manually Files archive field blank altogether. Then, once the site has been created, we can SFTP or rsync the files in directly.

rsync is really cool. It's one of those really flexible command-line Linux utilities that just works, and saves an enormous amount of time and bandwidth too.

Based on Reference 2, the well documented support doc rsync and SFTP, here's what I did to upload almost 3GB of user files to my new Pantheon site with rsync:

  • Added my public key from my Pantheon dashboard

    • Click Add key button

    • Paste in public key

    • Click Add Key button

  • I then went to my site dashboard by clicking on my site home page image and clicked on Connection info and obtained the following info:

Git

SSH clone URL:

ssh://[email protected]nn1111-1n1n-n11n-1n11n1n11111.drush.in:2222/~/repository.git xfrmlegacy

Database

Command Line

mysql -u pantheon -pverylongpantheonpassword -h dbserver.dev.n1nn1111-1n1n-n11n-1n11n1n11111.drush.in -P 12801 pantheon
Host: 
dbserver.dev.n1nn1111-1n1n-n11n-1n11n1n11111.drush.in
Username:
pantheon
Password:
verylongpantheonpassword
Port:
12801
DB Name:
pantheon

SFTP

Command Line
sftp -o Port=2222 dev.n1nn1111-1n1n-n11n-1n11n1n11111@appserver.dev.n1nn1111-1n1n-n11n-1n11n1n11111.drush.in
Host:
appserver.dev.n1nn1111-1n1n-n11n-1n11n1n11111.drush.in
Username:
dev.n1nn1111-1n1n-n11n-1n11n1n11111
Password:
Use your dashboard password
Port:
2222

So, I grabbed this info and “kept it in my records”.

Then I simply changed directories into the parent directory containing my files directory on the copy of the drupal site running on my server and shunted my files over to my new Pantheon site via rsync with the following commands:

[email protected]:~/mysite7-legacy$ export ENV=dev
[email protected]:~/mysite7-legacy$ export SITE=n1nn1111-1n1n-n11n-1n11n1n11111
[email protected]:~/mysite7-legacy$ rsync -rlvz --size-only --ipv4 --progress -e 'ssh -p 2222' files/* $ENV.$SITE@appserver.$ENV.$SITE.drush.in:files/
The authenticity of host '[appserver.dev.n1nn1111-1n1n-n11n-1n11n1n11111.drush.in]:2222 ([166.78.242.215]:2222)' can't be established.
RSA key fingerprint is b5:ea:23:eb:7b:7b:0d:17:c7:13:47:92:ea:70:c1:b5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[appserver.dev.n1nn1111-1n1n-n11n-1n11n1n11111.drush.in]:2222,[166.78.242.215]:2222' (RSA) to the list of known hosts.
sending incremental file list
... 

A while later, all my files (various GB!) placed in ./sites/default/files on Pantheon! Cool. Yes.

Database

Turns out you can just leave the Import manually Database archive field blank as well. Then, once the site is created, you can use best-practices remote database tools to deploy a database of any size. In my case I just shaved the database down by truncating cache tables, etc., so it fit under the 500 MB limit as a gzip'd file.
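
For a database that just won't squeeze under the limit, the same connection info shown above lets you stream a dump straight into the environment from your own server, for example (a sketch using the sample credentials from the dashboard; your host, port and password will differ):

  # Import a large gzipped dump directly, bypassing the 500 MB import limit.
  gunzip -c mysite.sql.gz | mysql -u pantheon -pverylongpantheonpassword \
    -h dbserver.dev.n1nn1111-1n1n-n11n-1n11n1n11111.drush.in -P 12801 pantheon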

See Reference 3.

A little help from my friends

Whenever you hit Support and raise a ticket on Pantheon, you get a response really quickly, like in a few minutes. Just sayin'. So I did all this with more than a little help from my friends.

One example was that the legacy Drupal 6 site had its files directory not in ./sites/default/files, but in a ./files directory just off the Drupal document root. Support clued me in, in just a few minutes (see Reference 4):

“If you are importing a site which has files in another location (e.g. "/files") you will need to move the files into the standard location, and add, commit and push a symlink from that location to the new location via git:

$ ln -s ./sites/default/files ./files
$ git add files
$ git commit files -m "adding legacy files location symlink"
$ git push origin master

Your legacy file paths should now work, and your files will be stored in our cloud files location!”

I was told to be sure to make it a relative symlink like the example, and not an absolute system path.

References

  1. Importing an existing Drupal site to Pantheon

  2. rsync and SFTP

  3. Accessing MySQL databases

  4. Non-standard files locations

  5. Get Pantheon

  6. Hire us to do this and other stuff for you

  7. Even better, hire us to mentor you on how to do it and other stuff yourself


Nov 17 2013

I am working on a Drupal project for the Columbia University Office of Alumni Affairs and Development, and I was unable to connect to the Pantheon servers from the Ubuntu command line from home.

I have Ubuntu Linux installed in an Oracle VirtualBox, because the laptop provided by Columbia is pretty locked down and I don't have admin rights, but they installed VirtualBox so I could add an Ubuntu machine and configure it as needed.

It worked fine while I was on site, but once I started working remotely I could not connect.

Pantheon support suggested that my ISP is unable to connect to their IPs in DNS, and I might be able to connect using Google Public DNS:

There are two steps in debugging the problem you are experiencing.

1. Check to see if you are getting an I.P. address returned when you run the following command, replacing “<xxx>” with your site’s UUID:

dig appserver.dev.<xxx>.drush.in
(ex.:dig appserver.dev.38fde024-2874-4cce-b02a-072686c4ded9.drush.in)

If there is no I.P. in the output then the ISP on the network you are currently on is failing to recognize the hostname of the database.

2. For some users that may fail so the next step is to test this command with name server, in this case Google’s 8.8.8.8 I.P address:

dig @8.8.8.8 appserver.dev.<xxx>.drush.in
(ex:dig @8.8.8.8 appserver.dev.38fde024-2874-4cce-b02a-072686c4ded9.drush.in)

If that returns an I.P. address, this means that using Google’s DNS you were able to resolve the hostname. To resolve the issue you can set your DNS to use Google’s service and you should be able to connect:

https://developers.google.com/speed/public-dns/

It works fine, but the directions for a VirtualBox instance are a little different than what Google has posted in the instructions for Ubuntu:

Ubuntu in the VirtualBox is using its eth0 wired connection to the host OS to piggyback on the Windows wireless network adapter.

I had to configure the Ubuntu wired connection (there is no wireless connection defined). The rest of the guide is applicable, and I changed Method: "Automatic (DHCP)" to "Automatic (DHCP) addresses only"

configuring Google public dns for Ubuntu in VirtualBox
Oct 30 2013

This week we continue to re-publish lessons from the Pantheon Academy, with Jon Peck. In these free lessons we'll be taking a look at working with Git and MultiDev, the Pantheon cloud-based development environment for teams working on projects together. We'll also take a look at dealing with error codes, and setting up SSL for your server. We're going to wrap up our current list with a checklist for taking your site live.

Keep your eyes open in the future for more lessons in the Pantheon Academy series. Starting next week, we are diving back into the world of Using Drupal, continuing on with Chapter 4: Media Management. In addition to the Using Drupal series, which uses a development version of Media Module 2, we will also be starting a parallel series using Media Module 1.

Nov 05 2012

Frustration has motivated me to write a post about this. These are largely issues I have with Pantheon, but I will address the good things first.

The good bits (Some of which you may already know)

Innovative: I feel Pantheon is doing a great thing. They're helping Drupal developers simplify their workflow. They've created a service that stands above most other hosting services that currently exist. Additionally, Pantheon exudes the culture of listening to the Drupal community and continually trying to improve their product.

User Interface: For the most part, the user interface is pretty good. You're able to find things that you need without too much searching. It's not perfect and it could be improved.

Multi-tenant deployment is cheap: Being able to deploy a multi-tenant structure is very easy and takes little time and effort. It's just a couple of clicks away. You upload your database, then upload your files folder, and by the time you've fetched your favorite warm morning beverage you have a new Drupal installation.

Upstream updates: Being able to run updates for Drupal core is awesome. A single click allows you to update code quickly without having to click through the interface. Although, drush is equally sufficient for this task.

Quick backup: With one click you're able to execute a complete backup. The backup is also conveniently compressed for quick download.

The bad bits

Pulling in changes: Moving from dev/staging/live requires clicking. I may be in the minority but I really enjoy using git purely through the command line interface. Having to pull in changes in your testing environment by clicking can get old quickly.

Drush / SSH-less access: While some innovative folks in the contributed module space have created a Drupal project for drush access, it still limits what you can do. I understand the limitations exist due to large security concerns. Without full drush or SSH access, though, things can often be a burden for Drupal developers. I would much prefer to have the ability to sync databases and files through a command line interface. I know Pantheon does a great job of creating user interfaces to replace this, but being able to `drush cc all` is superior in my opinion.

Working the Pantheon way: Using Pantheon, you're tied to a specific workflow. This includes exporting the database, downloading it from the browser and then importing it into your machine. This is OK the first 10 times. After a while it gets quite old. I would much rather use `drush sql-sync` for this.

New Apollo Interface: The new Apollo interface has too many tabs and fancy dropdowns. Changing the environment now requires clicking twice. Click the dropdown, then pick your environment, then pick the tab on the left side. Someone went a little crazy with Twitter bootstrap. I would rather see a longer page. The tabs/dropdowns often abstract where and what you need. Also, you have to re-learn another new workflow for it to work. This is a slight curveball.

503 Errors: This issue was the most problematic. On one of our website setups, it produced an unhelpful 503 error every time you visited the Features page or tried to clear the cache. This instantly became an impediment to our team's productivity. We've posted a ticket; however, the ticket process has been rather slow. Different techs have come in, passed the issue along and escalated it each time, but we have yet to have a resolution. (We're on day 7 of this problem.) Being able to call, wait for an hour or two, and get it resolved then would be a more efficient use of my time, especially when something like this is becoming an impediment to our project.

In the end it's up to you

Overall, it all depends on the workflow and tastes of the Drupal developer. Pick and choose what works for you. For some people, Pantheon is the right service / tool for the job. For me, I would prefer more granularity and control. I really want Pantheon to succeed. Pantheon is fulfilling a need that exists in the Drupal community. Hopefully they'll continue to improve their product and I'll give them another shot later on. At the moment, it's not what I'm looking for.

Do you agree, disagree or have comments? Please let us know below.

May 23 2012

In my book Leveraging Drupal I set out to wed the best software development process practices my career has been based on with Drupal site building and development. Chapter 1 (possibly the only part of the book not immediately obsolete the moment it was published in February 2009), entitled “Keeping it Simple”, describes the process you can practice in order to squarely face the varied responsibilities of getting a web app up and running. It names the steps you can follow towards fulfilling that goal. It is still freely downloadable as a sample chapter. We will use it to gear ourselves towards implementing a properly prioritized backlog of stories in order to revamp AWebFactory.com.

Now, we could just say, as is increasingly the fashion, “we use scrum”, or “we use agile” and even provide the obligatory life-cycle diagrams. But how do we actually get to that? In what context are we even operating? The only fair starting point for any target app is: Why build it at all?

The Meme Map

AWebFactory MemeMap 20120521

This first step is often called “business modeling” and “positioning”. At AWebFactory we believe it actually has more to do with answering people's real needs in a sustainable fashion in the midst of the current world crisis, in this real world where we all live and work. And this affects how we, yes, do business, and especially how we code. So the first question we need to answer in order to build our target app, AWebFactory.com (and the first questions you need to answer in building yours) is “What right do we even have to build it?” In other words, What makes it unique? What is it about? What's it for? Who needs it?

If we can answer these questions as a first step in our building process, then we will be free to go on to develop a vision of the scope and feasibility, determine the roles and stories, the backlog, get it done, test it and get it out there (what Sam Boyer calls “done done” DevOps style). If we can answer these questions then we can have a chance of doing that without failing.

So let's say it again: What makes it unique? What is it about? What's it for? Who needs it? So how can we do this? Because if you come to AWebFactory and you say to us “We want a clone of site such-and-such” we can only answer “site such-and-such is already out there; what is it you want and need, exactly?” Otherwise we #fail.

What we use to answer these questions is the Meme Map.

At the core in the orange box are the guiding principles. Bubbling up from the list of guiding principles are the public manifestations, the available functionality putting them into practice. And the big orange box of principles sits upon the foundation of the production processes and the context of social relations forming the material groundwork upon which the principles may support the proposed functionality.

The meme map shows how we answer those hard questions. I was about to execute mine using my regular drawing program, when I realized almost by accident that you can do a great job with much greater ease if you use a mind mapping tool for the purpose. In the above MemeMap diagram for AWebFactory now, I used FreeMind.

Roles and Stories

Roles and User Stories for AWebFactory 20120521

From the Meme map as input (together with anything else we can lay our hands on), we can extrapolate the roles and stories. I was so hepped up on FreeMind that I just used that again.

It works out very well: you can visualize the roles alone, and click to open up the user stories associated with each one, and you can have sub-user stories which may be reusable includes, extensions or sub-routines, or just plain detailing.

In my experience this step is essential and can easily spell the difference between success and failure.

Backlog Creation and Prioritization

Although we try not to get locked into tools, we are currently using Pivotal Tracker for backlog creation, prioritization, tracking and for all things related to project communications (outside of assets and major documents stored in Dropbox or Google Drive (we are attempting to overcome the limitation of the permissions and access matrix being tied to a particular Google account)). Pivotal Tracker is one of those tools you bump into on so many projects and which just becomes the natural thing to be using, at first unnecessary to replace, then finally ubiquitous. It's only then you realize how well it performs its function, even though the main thing is to use any tool you feel comfortable with that gets the job done.

Initial backlog for AWebFactory 20120521

With Pivotal Tracker, you don't have to artificially create the sprints. You just specify the length of the sprint, and the backlog becomes organized on the basis of which stories are actually started (current sprint), and the backlog automatically gets assigned to future sprints on the basis of the number of points assigned to them and real, monitored team velocity. And you can bring any story into the current sprint just by starting it. Here is a screenshot of the initial sprint (we will be publishing an article with each sprint).

Scope and Candidate architecture

The scope of the project is what actually has to be built, as opposed to what is merely interfaced. If you are integrating with an existing chat program running on an existing server, the interface to the chat program has to be built, so it's in scope. The chat program doesn't, it's out of scope. The scope is best expressed at a glance using UML use case modeling:

AWebFactory scope 20120521

Notice the succinctness of vocabulary (there's something about a use case diagram that makes for that: going to go to Pivotal Tracker and put these semantics in) and the relationship between functional specifications and business objects (some of them related to multiple use cases, others more specific), plus the definition of the application boundary (with some components being inside while others are interfaces to external components).

This scope is what they call in CMMI the baseline. Not a bad idea even if you are using agile. CMMI and agile actually go great together.

The candidate architecture (a work in progress during the first sprint but probably already a pretty firmed up list by the end of the second and in the light of its deliverables; impossible to discern without doing at least discovery sprints, so Dear Client, please don't ask us for recipes for disaster) boils down to what kind of platform the target app will run on, and the target app's main software components, including to which “tiers” they belong.

Even though this really isn't going to be known for sure until the analysis and design is completed during the course of each story's actual implementation, the Architect does need to extrapolate, and in the second sprint confirm, a candidate architecture. She must come up with this scientific hypothesis on the basis of the stories chosen for the first sprint, and also on the basis of the stories presenting the most risk (new methodology, paucity of talent in that area and/or of giant's shoulders). The candidate architecture itself is represented by a domain model made up of the principle objects (analysis, usually built on use case realization diagrams composed of entities (main business objects), boundaries (interface elements the user interacts with during the story) and controllers (business logic)) capable of implementing the project.

In the case of AWebFactory.com this means:

  • Will Drupal continue to be the CMS framework?

  • What classes and Drupal entities (content types, fields and field groups associated with the user and other entity bundles) will likely be involved?

  • Which core and third party modules will be leveraged as they map to analysis and design objects and classes? Which custom modules will have to be built?

  • What integration to external components has to be built (payments, project tracking...?)

  • Which database will be used for persistence? Any others?

  • Which search architecture will be used (Drupal's standard, Apache Solr, Google?)

  • What theming architecture will be used (Responsive panel layouts that come with Panopoly...)

  • What platform will it run on (Pantheon at last!)

  • And so on and so forth

So the domain diagram will emerge as the user stories are implemented in turn, and will allow us to refine things on a higher level as we progress to the candidate architecture we will be working from after about the second sprint.

User experience and wireframes

So let's eat our own dogfood and make a mobile first wireframe, starting out with keeping it simple, and mapping out to appropriate usability for the larger breakpoints as the sprints go on. So on the basis of the scope and candidate architecture, we have wireframes for the home page, for the services section and individual services, for the work section and portfolio, for the blog section and individual articles, and for the contact page and various forms of contact.

These wireframes will live and breathe throughout the process.

AWebFactory early home 320 breakpoint mockup

They may be a kind of starting point, and then you realize you can't go on until the scope and candidate architecture is fleshed out a bit more, then you come back to the wireframes, while specifying the different roles that will be interacting with the site, and the stories as ways they will interact, and then come back with more ammunition to embody things in the wireframes a bit more, and so on.

The starting point stands on a couple of initial Balsamiq wireframes I did based on fabulous input from Bay Area web designer Wini Hung (a.k.a. Skinni Wini, evangelist of the power of cute), who is working on AWebFactory branding and graphic design in general: see the nice logo?

As usual, Auntie Celie and the kids from the neighborhood elementary school will be doing the usability testing.

Process

Now we can say “we use scrum”: in successive articles, and upon this foundation, we will implement the backlog.


May 13 2012

This is the first of a series of articles which will log the revamping of the AWebFactory company website and its migration to Pantheon, the "Cloud Platform for Drupal", which will not only be host to live deployment, but which will also serve as a development platform.

Signing up

So I signed up for an account on Pantheon, a free developer account to begin with. I went to https://www.getpantheon.com/ and clicked on Create Free Account. I filled out the details and received a confirmation email (curiously, even with GMail, which is pretty discerning about those things, it arrived in the Spam folder, so do check that when you create your account), and after validating, when I was logged in (https://dashboard.getpantheon.com/login) there was a sign on my dashboard offering a link to Create a site now.

Uploading my ssh public key

Before doing anything, I uploaded my public key (clicking on Add key under Your Keys), which is required whether you use git push and pull or the On-Server Development sftp editing method for managing your code.

I then created a Panopoly site, since I wanted to leverage the power of Panels, Panelizer and a host of other convenient apps (see Report back on a set of key DrupalCon Denver 2012 presentations for background, especially the sections on the What's new in the Panels Universe and Open Academy presentations).

Creating and committing the barebones starting point based on a Drupal distribution

To create this site, I repeated the steps documented in Making your life simpler: Just do it on Pantheon! Once I had finished and had a basic Panopoly-based site with the Stark theme, I refreshed the site dashboard, and since On-Server Development was enabled (automatically, upon choosing the Panopoly distribution as the site-building starting state), all I had to do was type in a commit message for the 236 file changes which had resulted. I typed in “Initial commit after installing Panopoly distribution with the Stark theme created” and clicked Commit. In order to keep a continuing local backup of my work, I clicked on Make backup. After being notified that the backup was complete, I clicked on the Configuration tab, downloaded the database, the files and the code, unpacked it all, and created a git repo on one of my own servers, which gave me a nice warm secure feeling. I decided that after each important commit I would do the same and mirror the commits on my own local server.
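
As a rough sketch, that mirroring step boils down to something like the following (the archive name, unpacked directory and remote URL are hypothetical; substitute whatever the backup download and your own server give you):

   tar xzf awebfactory_dev_code.tar.gz      # unpack the downloaded code backup
   cd code
   git init                                 # start the local mirror repository
   git add -A
   git commit -m "Mirror: initial Panopoly commit from Pantheon"
   git remote add mirror git@my-own-server:mirrors/awebfactory.git
   git push mirror master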

Setting up a suitable development environment using the Eclipse IDE

In order to set up a development environment that worked well with On-Server Development, I set up Eclipse with the Remote System Explorer as my IDE for the project. After readying my Eclipse IDE with remote explorer support, from the Site Dashboard for the Development environment (Testing and Live not yet being used, and in any case to be managed through deployment via the Pantheon dashboard) I triple-clicked on the textarea to the right of the Codebase title and selected the sftp connection string, which looked something like the following:

sftp -oPort=2222 dev.{lots of numbers}@appserver.dev.{lots of numbers}.drush.in

Actually, clicking on the context-sensitive help “?” to the right of the text area breaks the string down into its component parts, suitable for sticking into an sftp client (like Transport on the Mac, for example, or Eclipse Remote System Explorer):

username: dev.{lots of numbers}

host: appserver.dev.{lots of numbers}.drush.in

port: 2222

In Eclipse I clicked on the Define a connection to a remote system icon, left SSH selected and clicked Next, filled in the Host name field (which was replicated in the Connection name field, where I typed “AWebFactory dev instance on Pantheon” instead), left Verify host name checked and clicked Finish. I right-clicked on the Sftp Files node and opened Properties, clicked on the Subsystem node, and filled in the Port number and User ID. I then opened up the Sftp Files node in the tree, and then the My Home node. The file system tree then opened up as shown in the Pantheon Support Center documentation (if you are asked for a password, it is because you neglected to upload a public ssh key as mentioned above).

Now, if I add, delete or modify files via the Eclipse IDE, I will be prompted to commit those changes as described above. I have a great and simple development environment!

Next I set up the user stories on Pivotal Tracker, which we will review as they are implemented in future editions of this log.


Oct 07 2011
Oct 07

I started doing development at C3 in 2009, back in the old warehouse office on 3rd Street, and have been riding the wave of Drupal ever since. About a year ago Zack, Matt, and Josh started working on a new project called Pantheon. Since they have officially launched, I wanted to put together a blog post about my experience using their product and how it has rounded out the development process over here.

The first Chapter Three site built on Pantheon was our overhaul of the USC Marshall School of Business website, switching them from an aging hand-rolled .NET/ASP site to Drupal. It was an interesting case because development had to kick off a few weeks before Pantheon was ready for even the most beta of beta testers. What this meant was that we were going to have to move everything from our internal SVN-based environment to the new Pantheon environment about one month into the development process.

Smart guys that they are, one of the first things Pantheon built was an importer for existing Drupal sites. I ran a mysqldump of the database, made a tarball of the code, and simply uploaded them into Pantheon. After a few hiccups — one of the perks of beta testing something being worked on in the next room is that your bugs get fixed fast! — it was working like a charm.
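
Roughly speaking, that export boils down to something like this (the database name, credentials and paths here are made up for illustration):

   mysqldump -u dbuser -p marshall_db > database.sql    # dump the existing site's database
   tar czf code.tar.gz -C /var/www marshall             # tarball the site's docroot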

Getting started on Pantheon took some getting used to — for instance, I’d never used git before. Luckily they made it easy to get up and running: all the basic commands were right there. That helped with the initial learning curve and now that I’ve got the hang of it I honestly don’t think I’d ever switch back to SVN. 
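
For anyone in the same boat, the day-to-day loop looks roughly like this (the clone URL comes from the site dashboard, and the commit message is just an example):

   git clone <connection string from the Pantheon dashboard> marshall
   cd marshall
   # ...hack on modules and the theme...
   git add -A
   git commit -m "Describe the change"
   git push origin master    # publish the work to the dev environment
   git pull origin master    # pick up commits made elsewhere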

Learning git as a side effect was a definite win. It’s just so much faster to use, plus it’s what all of Drupal is built on now. Definitely the way of the future. 

Having the full repository on your desktop is great for peace of mind. Not to mention how easily Pantheon allows you to create backups. This was a real confidence booster before I started on any gnarly piece of development. Going back to a known working state was always just a click away. It might sound cheesy but this probably saved me a few grey hairs.

Having separate workflow environments with the ability to easily push changes and pull content really made a big difference too. Not only did this make sure we were deploying the “right” way, it also made the practicalities of working with an external owner easier.

For instance, once we got to the point of doing client reviews and QA, I would push the completed work to the “live” area of the site for the client to take a look. Meanwhile I could be shredding code (and, more importantly, breaking code) in the dev area without having to worry about confusing the client.

The workflow environments also came in handy when we got to the end of the project. The USC Marshall folks were able to start adding content and doing final launch prep without interrupting my final push of bugfixes. Being able to pull down their live content was also essential for debugging subtle issues with layout and navigation.

Overall I spent about two months working on the site through Pantheon, and it was one of the first real live sites launched on the platform. We are now moving all new sites to Pantheon for development and it is awesome. While I respect our legacy tools, I also can’t wait to ditch them!

Aug 29 2010
Aug 29

DrupalCon Copenhagen comes to an end, as does my blogging hiatus.

Two of my primary learning objectives here in Copenhagen were configuration management and deployment process. Historically, working with Drupal in these areas has been unpleasant, and I think that's why there is a ton of innovation going on in that space right now. It needs to be fixed, and new companies are springing up to say "hey, we fixed it." Often, the people running the companies are the same people running the projects that encapsulate the underlying technologies. I'm referring to:

  • The hyper-performant core distro, Pressflow
  • Distros with sophisticated install profiles, like OpenAtrium, ManagingNews and OpenPublish
  • Configuration externalization with Features
  • Development Seed's "for every site, a makefile" workflow using drush make (see the sketch after this list)
  • The different-yet-overlapping hosting platforms Pantheon and Aegir
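
To make the drush make item above concrete: the idea is that a single makefile declares core plus every contrib project a site needs, and one command rebuilds the whole codebase from it. A rough sketch, with a hypothetical makefile name and contents:

   # mysite.make would declare core = 6.x plus projects[] lines for views, cck, etc.
   drush make mysite.make mysite   # builds the complete codebase into ./mysite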

Dries commented in his keynote that as Drupal continues to grow, it also needs to grow up. I think advances like these are part of the community's answer to that. I want to wrap my head around some of these tools, while continuing to watch how they progress. Others, I want to implement right now. What's perfectly clear though is that I have a lot of work to do to keep up with the innovation going on in this hugely powerful community. Which is actually nothing new, but reading a blog post about these technologies doesn't make my jaw drop the way that it does when I'm in the room watching Drupal advance.
