Feb 09 2023

We work with Pantheon a lot. We love their platform, and how it allows us to focus on developing Drupal projects while leaving the system administration to them. We have a reliable CI pipeline that allows us to develop in GitHub and push a production-ready artifact to Pantheon’s downstream repository - it’s a great developer experience. We’re happy with this portion of our workflow, but once our work is merged to the main branch and deployed to the dev environment on Pantheon, things begin to get a little more dicey. Deploying to test and live seems like it should be the easiest part, since Pantheon has their drag & drop UI that everyone reading this is probably already familiar with. The issues that we bump into tend to come when configuration changes are made directly to a production environment.

How we used to deploy

First, let’s take a look at how we have historically deployed to these environments:

  1. Deploy production-ready code to the target environment using Pantheon’s drag & drop UI.
  2. Use a Quicksilver script to run drush cim.
  3. Use a Quicksilver script to run database updates using drush updb.

This workflow is great, but it makes the big assumption that there are no config overrides on the target environment. Sure, we like to imagine that our code is the only source of truth for production configuration, but that is not always the case. Sometimes, there’s a legitimate reason for a client to make a quick change to production config. When we deploy to an environment with overridden configuration using the above workflow, the client’s configuration changes will get reverted unless the developer catches the overridden config prior to initiating deployment. While there are many approaches that we as developers can and should take to help prevent configuration overrides on production - like setting up appropriate roles, using config_ignore for certain special cases, and core’s config_exclude_modules setting - they can still happen from time to time.
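
As a quick illustration, core’s exclusion setting lives in settings.php (the module names below are just examples):

```php
<?php

// settings.php: keep environment-only modules out of exported config,
// so config imports and exports never touch them.
$settings['config_exclude_modules'] = ['devel', 'stage_file_proxy'];
```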

We’ve had a lot of success using Pantheon’s Quicksilver Hooks to automate our deployment steps (seen above), but what are we to do when we deploy to an environment that has overridden configuration? Should we not import our new configuration? Or should we blindly import our config changes and revert the existing overrides? Clearly, neither option is ideal. Along with this dilemma, relying solely on Quicksilver hooks presented a few other challenges that we wanted to improve on:

  • Reporting: Unless you are running terminus workflow:watch or looking at terminus workflow:info:logs for every deployment, it’s not clear what’s actually taking place during a deployment.
  • Lack of clarity: Without reading about them in a project’s docs or checking the project’s pantheon.yml, a developer initiating a deployment may not even be aware that Quicksilver hooks exist and are going to execute on the target environment!
  • Inflexibility: Quicksilver hooks do the same thing every deployment, and don’t ask questions. Without resorting to something like keywords in commit messages, there’s no way that a step can be skipped or altered per-deployment.
  • Lack of an escape hatch: Once a deployment is initiated, there’s no pre-flight check that can give the option to abort.

Our new approach

These are the reasons we started investigating a new method of handling deployments to test and live on Pantheon. To address them, we created a few hard requirements:

  • As a developer, I should be able to abort a deployment if there are configuration overrides on the target environment.
  • As a developer, I should be able to easily know what steps are executed during a deployment. There should be no surprises.
  • As a developer, I should be able to easily see logs from all deployments initiated by our team.
  • As a developer, I should be able to update our deployment workflow in one place across all of our projects. (This one was a nice-to-have.)

As a developer, I should be able to abort a deployment if there are configuration overrides on the target environment.

To start, we looked at how we could create a deployment process that could self-abort if there were configuration overrides. This was our highest-priority requirement: we needed to avoid blindly reverting existing configuration changes that had been made on production. Since telling our development team to “just check config on prod prior to deployment” was not an acceptable solution for us, we created a new terminus plugin to help with this: lastcallmedia/terminus-safe-deploy. This plugin adds a new command, terminus safe-deploy:deploy SITE.ENV, that runs through all of the steps of our traditional deployment (along with a few more optional ones). Before initiating the deployment on Pantheon, the plugin checks for overridden configuration on the target environment and aborts if it finds any. If the --force-deploy flag is set, it will still check for overridden configuration and report what it finds, but then continue the deployment.
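
A minimal sketch of both paths (the site and environment names are placeholders):

```bash
# Checks the target environment for overridden configuration before
# deploying, and aborts the deployment if it finds any.
terminus safe-deploy:deploy my-site.live

# Still reports any overrides it finds, but deploys anyway.
terminus safe-deploy:deploy my-site.live --force-deploy
```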

As a developer, I should be able to easily know what steps are executed during a deployment. There should be no surprises.

We added several other flags to the safe-deploy:deploy command that would allow us to explicitly state which operations we wanted to perform during a deployment:

  • --with-cim: triggers a config import post-deploy
  • --with-updates: triggers DB updates post-deploy
  • --clear-env-caches: clears the target environment’s CDN and Redis caches. This is something that we didn’t typically include in our Quicksilver scripts, but we saw value in making it easily accessible for the times that we needed it.

Each flag must be passed explicitly, so including a step is always a conscious decision we make as part of our deployment; a full invocation is sketched below.
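
A fully loaded deployment, using the same placeholder site name:

```bash
terminus safe-deploy:deploy my-site.test \
  --with-cim \
  --with-updates \
  --clear-env-caches
```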

As a developer, I should be able to easily see logs from all deployments initiated by our team.

We preferred not to rely on terminus workflow:info:logs to see what happened for each deployment. Our developers were already visiting the GitHub repository to review and merge pull requests, so GitHub seemed like the perfect place to initiate our deployments and store the logs as well. We decided to use GitHub Actions to trigger the deployments. We use their workflow_dispatch event to initiate our deployments manually, and as a bonus they provide an interface for workflow inputs, which we can correlate to the flags on the terminus command. We also included the ability to post success/failure messages to Slack, with links to each job so the team can easily see when deployments are run, if they pass/fail, and have a link to view the logs without having to search.
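
As a sketch, the dispatch trigger’s inputs can mirror the terminus flags (the input names here are illustrative, not necessarily the ones our workflow uses):

```yaml
# .github/workflows/pantheon_deploy.yml (excerpt)
on:
  workflow_dispatch:
    inputs:
      environment:
        description: Pantheon environment to deploy to
        type: choice
        options: [test, live]
      with_cim:
        description: Run a config import post-deploy
        type: boolean
        default: true
      with_updates:
        description: Run database updates post-deploy
        type: boolean
        default: true
```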

To use Slack alerts, the command accepts a --slack-alert flag and a --slack-url argument (or SLACK_URL can be set as an environment variable).
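
For example (the webhook URL is a placeholder):

```bash
export SLACK_URL="https://hooks.slack.com/services/T000/B000/XXXXXXXX"

# Posts a success/failure message, with a link to the job, to Slack.
terminus safe-deploy:deploy my-site.live --with-cim --with-updates --slack-alert
```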

As a developer, I should be able to update our deployment workflow in one place across all of our projects.

This was a bonus requirement that we’re really excited about. GitHub Actions allows the reuse of workflows from public repositories in other workflows, so we built a single terminus-safe-deploy workflow, and we are able to reference it from individual project workflows as seen in this gist. This lets us merge changes into the shared workflow (like updating the Docker image used, or adding another step if needed) without having to update each individual project’s workflow files. In the example above, we are calling the workflow from the main branch, but you can reference a specific tag or commit if you prefer to prevent changes to the workflow for a particular project.
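
The call site in a project’s workflow is essentially a one-liner; the repository path below is illustrative, so check the gist for the real reference:

```yaml
jobs:
  deploy:
    # Pin @main to a tag or commit SHA to freeze the workflow for this project.
    uses: lastcallmedia/terminus-safe-deploy-workflow/.github/workflows/deploy.yml@main
    with:
      environment: ${{ inputs.environment }}
    secrets: inherit
```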

The End Result

Initiating a deployment from GitHub Actions

The time spent on this investigation was well worth the effort for our team. As we remove the Quicksilver hooks from our projects and replace them with GitHub Actions workflows, we feel a new sense of confidence, knowing that if our deployments to test and live are going to impact overridden configuration, the deployment will abort itself unless we explicitly tell it to continue. Having a user interface that allows us to explicitly choose which steps we run (with the most common options set by default) gives us the control we had desired for these deployments, while still being as simple as using the drag and drop UI. An added benefit of this approach is that it doesn’t require any institutional knowledge: if another team gets involved, or the client pushes code but is not familiar with GitHub Actions, there’s no harm in them using the drag & drop UI within Pantheon, and they don’t have to worry about any unexpected operations taking place in the background once their code is deployed to the target environment.

Setting it up yourself

We chose to implement this as a single terminus command that gets invoked by a reusable GitHub Actions workflow to keep setup easy. Adding this deployment workflow takes just a few steps:

  1. Copy the contents of the workflow file in this gist to .github/workflows/pantheon_deploy.yml.
  2. Add required secrets to your repository:
    • Visit https://github.com/{ORGANIZATION}/{REPO_NAME}/settings/secrets/actions
    • Add:
      1. PANTHEON_PRIVATE_SSH_KEY: A private key associated with an account that has access to run Drush commands on Pantheon.
      2. TERMINUS_MACHINE_TOKEN: A Pantheon machine token associated with a user who has access to deploy to the current project on Pantheon.
    • Note: At LCM, since we use this workflow across multiple projects, we store these as organization secrets. This makes the secrets available to any repositories we specify, and we only have to create one set.
  3. Add required actions variables:
    • Visit https://github.com/{ORGANIZATION}/{REPO_NAME}/settings/variables/actions
    • Add:
      1. PANTHEON_SITE_NAME: The machine name of the Pantheon site. This is the name used as part of the Pantheon-specific domains, such as https://dev-{SITE_NAME}.pantheonsite.io
      2. SLACK_URL (optional): A URL provided by Slack that you can post to, which will send messages to your channel of choice. Talk to your Slack admin to set one of these up if needed.

One last thing

We still love Quicksilver hooks. We continue to use them for other types of background tasks such as creating deployment markers in New Relic and notifying Slack when certain operations are performed. They offer great functionality, but we prefer to keep our mission-critical deployment steps elsewhere.

Dec 19 2017
Rob

Last year, we released an ambitious project to open source our own internal workflow. We called it the Last Call Drupal Scaffold (definitely not the most original name), and we presented it to the world at several different camps and conferences, including Drupalcon Baltimore.

Over the last year, the scaffold has evolved and grown as we’ve used it, but it’s been pretty minor stuff, and not really worth noting in a blog post. BUT TODAY, we’re excited to announce the 2.0 version of our project, and I’d like to outline some of the more exciting aspects.

First, we’ve migrated to CircleCI 2.0. This let us cut our build times by as much as half by taking advantage of parallel processing, workflows, and faster containers. Additionally, we’re now using the same containers in our local environments as in CI, keeping things predictable and consistent. What’s more, we’re taking advantage of Circle’s workflows to deploy things before tests have even passed. As ridiculous as that sounds, when you’re using a PR workflow to deploy to an isolated environment and have an additional dev/staging/prod buffer on a merge to master, having the tests and deployment run in parallel is a low-risk way to speed things up.

The second big new feature is that we’ve fully moved into per-project Docker environments for our local stack. And after a year and a half of development, these environments are dialed in. Want Xdebug? It’s ready to go. Blackfire? Check. And of course, since it’s Docker, you are welcome to swap in whatever containers you want in the docker-compose.yml.

The third thing I want to talk about is a move to Selenium. We’ve been using BackstopJS for the last year or so via SlimerJS, and have been pretty disappointed with the stability of the system, as well as the difficulty of supporting multiple browsers. Selenium (also via Docker) lets us run full, arbitrary browsers remotely, and we’ve baked in Visual Regression Testing using Webdriver.io and the WDIO Visual Regression Service. Admittedly, it’s not as pretty or easy to work with as Backstop is, but it’s rock solid and fast.

Another thing we’ve introduced in the 2.0 is full-fledged Pantheon integration. Out of the box, the project comes with feature branch deployment to Multidev environments, local environment refresh using composer site:import, and much, much more. While we still use this project for other hosting providers, Pantheon’s Multidev infrastructure just makes this kind of workflow incredibly nice.

Finally, I’d like to point out a few other enhancements we’ve made recently that didn’t make the cut for “biggest features”:

  • Mannequin integration - Mannequin is a tool we released this year that attempts to make front-end Drupal development less painful. Our Drupal Scaffold serves as the reference implementation, although our example templates are a little sparse right now.
  • Composer Upstream Files - A Composer plugin (command, actually) we built to allow updating quasi-core files like index.php, as well as files you might want to keep in sync with the scaffold, like docker-compose.yml. It’s an easy, configurable way to stay up to date.
  • Better documentation! - Having handed off a few scaffold projects, we’ve realized that the documentation was… nonexistent. While it’s awesome that we’ve had a ton of really amazing tools baked into this project from the beginning, having a Ferrari is really only useful if you can figure out where the gas pedal is. So we’ve been steadily working on the documentation, and will continue to do so.

Oct 13 2017
Rob

A week ago, we released Mannequin, which we’re calling a “Component theming tool for the web.” I wanted to explain what we mean by “Component theming,” and to explain why we’re (belatedly) switching to this approach for our Drupal development.

Our Story

We used to be terrible at theming. Five or six years ago, the sites we built were consistent in one way - they all had inconsistent page margins, big cross-browser issues, and enough CSS to theme a dozen sites. As a developer, I used to hate jumping between our projects, because each one had its own rat’s nest of CSS, and every time I made a change, it broke parts of the site that nobody even knew existed.

That changed when we discovered Foundation. Foundation solved the vast majority of our consistency and cross-browser problems right away. It was a night-and-day shift for us. We’d just apply a few simple classes to our markup, and our theming was “roughed in”. Some custom styles would be written to cover the rest, but in general we were writing much less code, which made the bugs easier to find and fix. There was still a pretty major problem though - small changes still had a tendency to break things in unexpected ways.

These days, we’re starting a new chapter in our journey toward front-end excellence… the shift to Component theming. Component theming is theming based on chunks of markup (“Components”) rather than pages. If you haven’t read Brad Frost’s excellent Atomic Design, you should. It’s a great intro to the topic, although the terminology is a little different from what we’ll use here… Atomic Design is as much a specification for design as it is for development, and what we’re primarily interested in here is the development portion (theming).

What we’re changing

Long story short, for many of our newer projects, we’ve been shifting away from using globally available utility classes (such as Foundation’s .row and .column), and toward theming specific classes that are only used by the templates under our control. To use a very simple example, let’s consider how we might theme something like a grid of images:

Foundation Markup:

See the Pen Foundation - Card by Rob Bayliss (@rbayliss) on CodePen.

Component Markup:

See the Pen Foundation - Card by Rob Bayliss (@rbayliss) on CodePen.
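
Since the pens may not render here, a rough sketch of the two approaches (class names other than ImageGrid are invented for illustration):

```html
<!-- Foundation approach: layout comes from global utility classes. -->
<div class="row">
  <div class="small-4 columns"><img src="1.jpg" alt=""></div>
  <div class="small-4 columns"><img src="2.jpg" alt=""></div>
  <div class="small-4 columns"><img src="3.jpg" alt=""></div>
</div>

<!-- Component approach: the component owns its own layout. -->
<ul class="ImageGrid">
  <li class="ImageGrid__item"><img src="1.jpg" alt=""></li>
  <li class="ImageGrid__item"><img src="2.jpg" alt=""></li>
  <li class="ImageGrid__item"><img src="3.jpg" alt=""></li>
</ul>

<style>
  /* Everything is scoped under .ImageGrid, so nothing leaks out. */
  .ImageGrid { display: flex; flex-wrap: wrap; margin: 0; padding: 0; list-style: none; }
  .ImageGrid__item { width: 33.333%; }
  .ImageGrid__item img { display: block; width: 100%; }
</style>
```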

The thing that’s immediately obvious is that we’ve gotten rid of the Foundation layout classes. This forces us to handle the layout ourselves, and to do that, we target the markup we know this component uses directly. What’s more, all of the CSS for this component is scoped to the ImageGrid class, so there’s no chance it will leak out into the global styling. We might say that this component is very isolated from the rest of the system, compared to the Foundation version, which depends on the unscoped .row and .column selectors. As a result, when the client adds feedback at the end of the sprint that the first image in the grid is 1px off, we can make that fix without touching anything but the ImageGrid CSS. That is to say: refactoring just got a whole lot easier. For example, imagine that we’re shifting toward a CSS Grid layout, and we want to move this component over without breaking the rest of the site.

Component Markup with CSS Grid

See the Pen Foundation - Card by Rob Bayliss (@rbayliss) on CodePen.

This is a pretty significant change on the component scale (swapping layout systems), but as long as we’ve properly scoped our CSS, there’s no danger of this change leaking out to the rest of the system.
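
Under the same assumptions, the CSS Grid version changes only the scoped rules; the markup stays put:

```html
<style>
  /* Same .ImageGrid markup as before; only the layout system changes. */
  .ImageGrid {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
    grid-gap: 1rem;
    margin: 0;
    padding: 0;
    list-style: none;
  }
  .ImageGrid__item img { display: block; width: 100%; }
</style>
```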

But what about shared styles?

There will always be shared styles. Even if we added no unscoped CSS or utility classes, browsers have their own stylesheet that would change the look of your components. The key is to keep these as minimal as possible. We’ve still been applying the following two Foundation mixins globally on the projects where we’ve been working with components:
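
The mixins themselves didn’t survive in this version of the post; assuming Foundation for Sites defaults, the pair was most likely something like:

```scss
@import 'foundation';

// Unscoped global baseline: resets/normalization plus base typography.
// These are Foundation's standard global exports; the exact pair used
// on our projects may have differed.
@include foundation-global-styles;
@include foundation-typography;
```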

This gives us a good global baseline that gets shared by all components, but you need to be extremely judicious about what gets included in unscoped CSS, since changing it later on will affect every single component (exactly what we want to avoid).

How we’re changing

Last week, we released Mannequin, a tool for isolating your templates as components and rendering them with sample data. Going forward, we plan to use Mannequin on all of our new projects from the get-go. Rather than writing extremely Drupal-specific templates, our front-end developers will be writing a much cleaner version in Mannequin, then we’ll be wiring it into Drupal using what we’re calling “glue templates.”

Mannequin does not dictate that we use a glue template here - we could write a single node.html.twig and use it with Mannequin just fine. But glue templates give us two important benefits. First, we’re free to reuse the component template just by writing another glue template that includes it, keeping our project DRY while making things nice and discoverable by leaving the Drupal-defined templates in place. Second, writing our templates without considering Drupal’s funky data structures means we can work with developers who don’t know their nodes from their assets (non-Drupal developers). As much as I poke fun, we’re excited to be leaving a lot of the Drupalisms behind.
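
A hypothetical illustration of the pattern (template names and variables are invented): a clean component template, plus a glue template that maps Drupal’s data structures onto it.

```twig
{# components/card.html.twig: the clean component, no Drupalisms. #}
<article class="Card">
  <h2 class="Card__title">{{ title }}</h2>
  <div class="Card__body">{{ body }}</div>
</article>
```

```twig
{# node--article--teaser.html.twig: the glue template Drupal discovers. #}
{% include '@components/card.html.twig' with {
  title: label,
  body: content.body,
} only %}
```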

That’s all for now!

Next week looks like a busy one! If you’re going to BADCamp and are interested in component-based theming, please find us to talk, or come to our session on Mannequin! Also, look for a blog post next week with a step-by-step guide to setting up Mannequin for Drupal.

Oct 04 2017
Rob

Today, we’re excited to announce the first stable version of our new Component Theming tool, Mannequin. We’ve been working on this for months now, and couldn’t be happier with the progress we’ve made.

Why we built it

When we set out, we had a simple goal: we wanted to be able to do theming work on Drupal Twig templates before the content types backing them were fully built. Like many others in the Drupal community, we turned to Pattern Lab as a possible solution. Pattern Lab was effective at pumping data into our templates, but it turned out to be a pain to incorporate with Drupal - requiring workarounds for accessing the pattern templates from Drupal, and adding complexity to the repository, since it can’t just be added as a Composer dependency. This makes sense - Pattern Lab is built to be a standalone application that takes ownership of your templates. Pattern Lab patterns are meant to represent repeatable HTML snippets, and the primary purpose of allowing a templating engine (rather than just using static HTML files) seems to be promoting reusability of HTML within other patterns, rather than reusability of the actual template within an application.

At some point, it became apparent that the problem we were hitting is one that a lot of people are wrestling with right now… as an industry we’re moving toward bigger, more sophisticated user interfaces built out of a lot of different components. On top of that, the divide between the front-end and back-end is growing larger. Full-stack developers are increasingly hard to find, as complexity on both sides increases. What we needed was a tool that drew a hard line between the front and the back end, but would allow the two sides to integrate without additional work.

Screenshot of the Mannequin website. The mascot waves hello next to the site title and navigation.

What it is

So we created Mannequin! At its core, Mannequin is a template viewing system. You tell it where your templates live on the filesystem and which assets they need, and you start development. As you work, you enrich your templates with some metadata that describes them to Mannequin. Using that metadata, you can pump sample data into the template, which allows you to see how it will look under various conditions. By using the Mannequin UI, you are able to focus on one template at a time, ensuring your styling remains targeted. If all goes well, there should be no further work to do to integrate your styled “component” template into the framework or CMS you are using, since it already lives wherever it needs to within your application.

How it’s changed things

Over the course of developing this tool, we’ve found it changing the way we work. Once you have the ability to visualize your application’s “components” in one place, it becomes much easier to refactor and adjust individual pieces. We’ve found ourselves thinking in terms of “components” rather than pages on the site, and it’s pushed us to isolate and decouple the composite parts of a page, making each component better. This isn’t a new idea, of course… programmers have been thinking this way since at least the 1960s, and it’s shown up in the theming world via the OOCSS, SMACSS, and BEM methodologies, as well as Atomic Design. What’s different (for us) is that now we have the tooling we need to apply these concepts across a broad array of PHP applications in a flexible way. We should mention that the only organizational scheme Mannequin imposes is a 1-1 relationship between a template and what shows up in Mannequin as a “component”. Other than that, you are free to use whatever filesystem, CSS, or Javascript structure works for your team.

So that’s Mannequin, in a nutshell. We’re excited to hit a stable release so we can finally start tapping into the backlog of bigger features we have planned for this tool (live reload, WordPress and Laravel support, visual regression testing, the list goes on!). We hope you enjoy it, and please let us know if anything doesn’t work out perfectly for you! If you’re interested in receiving updates as this project develops, join our mailing list below.

Visit Mannequin.io

Mar 25 2017
Rob

Continuous Integration & Delivery: Resources

High-performing software development teams use some form of an agile methodology to produce their work, releasing it in iterations, or even continuously. Awareness can be seen as the root of agility in these development environments. An example of increasing agility by increasing awareness can be seen in implementations of continuous integration, which maximizes and automates awareness during software development.

We recently did a webinar with Pantheon to go past the tooling hype and look at the benefits of the Continuous Integration & Delivery possible on their platform for developers, project managers, and clients.

[embedded content]

We also put Acquia Pipelines Beta through similar paces and delivered the following webinar, giving a glimpse of what’s coming soon for Continuous Integration & Delivery on their platform.

[embedded content]

Below are links to resources and more information on our work in this area:

  • Our Drupal scaffold - A starting point for any new Drupal 8 project that comes equipped with many best-practice features and tools, including pre-baked CircleCI and TravisCI integration.
  • Rob’s 2016 BADCamp session talks about our work on the scaffold and explains the nuts and bolts.
  • See the full article from Kelly on how Awareness Enables Agility.
  • We’re proud to provide Continuous Delivery services. See some of that work here.

Get to know more about who we are. Or, contact us here to work with us.

Oct 18 2016

In the fall of 2016, the Rainforest Alliance and Last Call Media launched an exciting redesign of www.rainforest-alliance.org, built on Drupal 8, employing seasoned agile software development methodologies.  Our productive partnership with the Rainforest Alliance resulted in a technically groundbreaking site that allowed users unprecedented access to the riches of their content after just four months of development.  The tool is now primed to drive the Rainforest Alliance’s critical end-of-year development activities. 

Our relationship with the Rainforest Alliance began in August of 2013 when LCM undertook a massive Drupal 6 to Drupal 7 upgrade.  We enjoy a strong relationship with the Rainforest Alliance team, working together to continuously deliver strategic value in their digital properties, and were proud to be chosen for a full site redesign and upgrade.

The lobby at RA Headquarters in NYC.

Over the years, RA has cultivated a repository of structured content to support their mission. While the content is primarily displayed as long-form text, there is a wide variety of metadata and assets associated with each piece of content. One of the primary goals of the new site was to enable discovery of new content on the site through automatic selection of related content, driven by the metadata of the content the user was viewing. Additionally, RA had a future requirement for advanced permissioning and publishing workflows, to enable stakeholders outside of the web team to play a role in the content lifecycle.

After some initial consideration, the Rainforest Alliance and Last Call Media decided to build a responsive Drupal 8 website, which included both building out new content types and migrating existing content from their then-current Drupal 7 site. It needed to launch on a 4-month timeline, by the end of September 2016.

Why Drupal 8

Drupal 8 was selected for this project based on several factors.  First, its focus on structured data fit well with Rainforest Alliance’s need for portable and searchable content.  Second, the deep integrations with Apache Solr allowed for a nuanced content relation engine.  Solr was also used to power the various search interfaces.  Third, Drupal has historically had powerful workflow tools for managing content.  While these tools weren’t quite ready for Drupal 8 when we built it, we knew they would be simple to integrate when they were ready.  In short, Drupal was a perfect fit for the immediate needs, and Drupal 8 met the organization’s longer term goals.

Why Agile/Scrum

To meet the 4-month timeline, the project followed a highly collaborative, agile project management style (Scrum): RA provided wireframes, design and UX direction, technical specifications, and user testing and QA for all content types, while LCM carried out all Drupal development (including theming) and project management, providing guidance based on our expertise and best practices.

Furthermore, not all requirements were known at the outset, and many things were expected to potentially go a different direction depending on some of the outcomes along the way. Agile methodologies avoid the “big reveal” better than other styles of project management: issues requiring a change in direction are raised in near real time, saving time in the long run by better avoiding going in the wrong directions.

Backlog Development, Grooming, and Refinement

The above photo is from our Project Sizing and Sprint Forecasting 2-day workshop, on-site at RA Headquarters in NYC.

Leading up to Sprint 1, for a period of 4 weeks, I worked as a Product Owner (PO) and Agile Coach, with the Rainforest Alliance (RA) web team in a Business Owner (BO) role, and Rob Bayliss as a Subject Matter Expert (SME) from Last Call Media (LCM), to build and groom the initial backlog. During this time, we coached RA on Agile/Scrum and being effective in the Business Owner role.

Together, meeting twice per week, we groomed and refined the project backlog to the point where SME-guesstimated sizing seemed reasonable for forecasting purposes. We did this sizing together too, with each epic guesstimated for size by Rob on note cards. We went through each of these note cards together with RA in planning-poker style, where everyone guessed at how big they thought the epic was and then, revealing Rob’s guesstimate, we compared the group’s sizing differences. It didn’t take everyone too long to pick up on Rob’s style of sizing, but many misconceptions and misunderstandings were revealed when someone’s sizing guess was wildly off from someone else’s. This exercise fine-tuned our alignment on the backlog items and allowed for more accurate forecast sizing.

Sprint Forecasting

Based on the sizing exercise, we set about strategically sorting our epic note cards into sprint piles. The strategy was to group cards into earlier sprint piles based not only on the importance of the epic, but also on how helpful it would be to have that epic in place for later sprints. A grouping was considered full when it reached resource capacity, which was determined by adding up the sizing from each note card to an agreed-upon threshold. It turned out we were able to forecast 7 sprints’ worth of backlog items, with approximately 2 more sprints’ worth of “nice-to-haves” as well as some epic cards determined to be no longer needed.

The result was each sprint’s goal being forecasted to better enable the following sprints’ goals and concurrent releases for user testing, feedback, and iteration. Additionally, as the project budget and timeline only allowed for 6 sprints, our forecast assumed that a backlog of certain items would be left undone at the final public release after the 6th sprint. This concept of moving items to a backlog for after Sprint 6 would become a critical one later in the project, as complexities were uncovered, new directions emerged, and priorities changed.

6 Sprint Forecast

The above spreadsheet shows a forecast of 6 sprints’ worth of guesstimated epics. The concept of a forecast was an important one: just like the weather forecast, the further out it goes, the less accurate it can be expected to be. The sprint forecast further became a living document for maintaining an evolving project vision across all of its iterations.

Agile Planning

With each sprint, we made three planning ceremonies available to the project. Pre-sprint grooming was a group exercise for the development team to go over the upcoming sprint’s wish list for the purposes of optimizing the official Sprint Planning meeting. Sprint Planning was held on the first day of a sprint and followed traditional Scrum guidelines. In Re-Forecasting, the team gave feedback from the trenches on the original forecast of SME guesstimates. This enabled the opportunity to adjust the forecast to be more realistic, evolving the project vision as it was adapted to these reports. Additionally, in later sprints, we began doing mid-sprint releases of completed work to be previewed. We did this to enable better planning and re-prioritization from the RA team.

The image above shows a typical daily standup meeting from the project.

For openness, I kept a Product Owner journal shared with the RA team. In it I tracked the daily standups, their sticking points and resolutions, as well as the User Stories completed each day and the work-unit points that I awarded. This last piece of daily info was used to keep a real-time project build-up chart.

Agile doesn’t tell us not to have a plan, but to always be planning.

A plan requires an awareness of things to consider for planning. Agility is the concept that we need to be ready to adapt our plans over time as we gain additional awareness. To that end, all of Scrum, from its Values to its Ceremonies, is designed to increase awareness, enabling better adaptation to change. The following build up chart is a Scrum artifact from this project that was used for the purpose of increasing awareness and better planning.

RA 6 Sprint Project Build Up Chart

The Build Up Chart additionally serves well for telling this project’s story across its six sprints. One can tell things about the project just from this chart. The sharp upticks toward the end of each sprint are indicative of a new build: substantially complex functionality was attempted each sprint, so shippable increments on most stories weren’t realized until the very end of their sprint. One can also tell that this project was re-forecasted at least three times, resulting in the adjustments to the estimated project size over time.

Sprints 1 and 2

Sprints 1 and 2 stayed on track nicely, getting most of the critical core functionality in place. Some key Drupal 8 modules implemented during these sprints included Page Manager, Layout Plugin, Panels, as well as Search API/Search API Solr, Media Entity (and related modules), Entity Embed, Entity Browser, Inline Entity form, and Features. Core project functionalities are described by Jeff Landfried, lead project developer, below in the context of each of these Drupal modules.

Page Manager/Layout Plugin/Panels:
Page Manager is a great tool for making it easy to create specialized landing pages, and when combined with Layout Plugin and Panels it provides the ability to use different layouts when viewing different node types (or other entity/bundle combinations). Specialized landing pages were built as Page Manager pages, many with their own layouts. All of the different full node displays were handled by Page Manager, using different variants for each node type.

Search API/Search API Solr:
Most content types have a “related content” section at the bottom of the page. Tagging content is one great way to handle something like this, but for our requirements we needed to have logic that was more robust than only showing other content with the same taxonomy terms. We went with Solr for this, specifically for the “more like this” (MLT) functionality that it provides. Search API Solr provided the interface for managing our servers and indexes, then with a custom block we were able to leverage MLT with our own boost criteria to help control how the related content lists were generated.

Media Entity (and related modules):
Drupal core provides file fields, which allow us to upload files to different entities, but this project had a requirement that uploaded files be reusable, with the ability to add additional related information for each file or image that is uploaded - things like caption, image source, etc. On top of that, we needed to be able to display these files in different ways: in some places an image may display the caption as a tooltip, while in others it should display below the image. The Media suite of modules is perfect for this type of thing. We were able to use different modules from within the Media ecosystem to handle images, embedded videos, and PDF documents, add appropriate fields to each media entity bundle, and, using Drupal core’s view mode system, set up multiple displays for each media type.

Entity Embed / Entity Browser/Inline Entity form:
It hasn’t always been easy to empower content teams to add images and other entities to WYSIWYG fields, especially when those items need to be themed in a special way. Entity Embed allowed us to add new CKEditor buttons to the WYSIWYG that provide a dialog where the user can choose an entity that they want to appear in content and the view mode they want it to display with, and then position it on either the left, center, or right side. One great thing about this is that the module uses a text format filter, so some text formats can display the embedded entities, while others don’t. Entity Embed is primarily used for embedding images, but we also used it to give content editors the ability to embed blocks in their WYSIWYG content as well.

Inline Entity Form allowed us to create entity reference fields, but gave content editors the ability to create the referenced entities right from the node edit form, something that can be a big time saver for content editors.

Entity Browser ties in with both Entity Embed and IEF by adding a button that opens a dialog displaying a view that allows users to select the entity that they want to use from a list, rather than having to remember media names, taxonomy names, or node titles and enter them into an autocomplete field.

These modules combined help make for a great editorial experience.

Features:
Drupal 8’s CMI initiative solved a lot of issues around managing configuration. That being said, we’ve found that bulk exporting/importing an entire site’s configuration isn’t a great workflow for our team, which involves multiple environments and developers, each potentially having a few of their own special configuration options that need to be set. Manually seeking out and overriding those configuration options in settings.php isn’t something we decided was sustainable, and it has its own drawbacks. The Features module in D8 allows us to package and ship the configuration that we need to be consistent, while allowing us to leave out what may differ across environments (such as development-only modules, CSS/JS aggregation, and page caching).
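
As a minimal illustration of the kind of per-environment override that would otherwise pile up in settings.php (the keys below are common examples, not our actual list):

```php
<?php

// settings.local.php on a development environment: turn off asset
// aggregation and page caching without touching exported config.
$config['system.performance']['css']['preprocess'] = FALSE;
$config['system.performance']['js']['preprocess'] = FALSE;
$config['system.performance']['cache']['page']['max_age'] = 0;
```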

Sprint 3

Even though things were on track nicely across the first two sprints, we could see work in progress and technical debt catching up with us during Sprint 2. The first two sprints laid a tremendous amount of groundwork, but also left as much technical debt. This resulted in a pivotal moment in the build chart, shown up close below, with progress dipping below target.

Sprint 3 shortfall

Notes from Sprint 3’s Planning Meeting:

This was forecasted as a really full sprint, but the team decided to only accept the stories above to make sure that there would be time to completely wrap up. The team is trying to be mindful of the amount of WIP tasks that already exist, and avoid creating more of those loose ends.

The Homepage was completed in Sprint 3. The above image is of the final homepage.

Fewer Stories were committed to, meaning fewer work-unit points earned, resulting in a below-target build-up. The silver lining was that the results were excellent, with far less technical debt left over to be carried into future sprints.


Sprint 4

Sprint 4 Build up

For Sprint 4, we reconfigured project priorities with RA and reassessed forecasted story sizing. The reconfigured sprint forecast produced batches of stories different from those originally forecasted, and increased the estimated project size. By the end of Sprint 4, we worked with RA to move tasks out of the project, decreasing the estimated project size and putting the project on target with the build team’s velocity.

The team’s continued focus on careful commitment paid off big in this delivery, in terms of comprehensive task completion as well as delivering on a stretch goal, which nudged them just above target and left them in a great place for the next sprint. In addition to further iterating on the groundwork from the previous sprints, Sprint 4, by way of Drupal’s Panelizer, brought with it Landing Pages and Content Hubs.

Landing Pages and Content Hubs

Panelizer:
When viewing taxonomy terms for several vocabularies, we needed the ability to have a consistent layout but to place different Custom Block entities on each term, which is essentially what Panelizer is made for. The module doesn’t have full taxonomy term support yet, but the community is working to get it added, and the patches provided in this issue were far enough along that we were able to make it work without issue.

Editor experience

We were technically crushing it, but Sprint 3 was the beginning of a turning point that comes in every project worth discussing. We didn’t know it at the time, but Sprint 3 was the transition point from Certainty to Doubt in this project’s Emotional Cycle.

emotional lifecycle

The idea is that first there is a honeymoon period of Optimism and Certainty, but inevitably things don’t always work out as expected and Doubt creeps in, pulling the team into a Pessimistic state. Good teams identify and adapt to this shift, moving from Doubt to Hope quickly with minimal emotional damage. With every project, I’ve tended to look to preempt the slip from Certainty to Doubt, hoping to skip straight to Hope or even to Satisfaction. Reality, however, can often be too elusive until it smacks you right in the face. I’ve come to accept that traveling the path through Doubt, into the pessimism, is simply the sign that a project is attempting to do at least as much as it should. Doubt comes from the realization that not everything imaginable is realistic to expect.

How do you know if you are doing as much as you could if you never run out of time to do more?

Pessimism is a part of the process of grieving the loss of things hoped for that now seem unrealistic. The uptick toward Hope comes from acceptance and adapting expectations to reality. 

Sprints 5 and 6

The following overlay of the two graphs, Build Up and Emotional Cycle, shows their relation visually.

Overlay of the two graphs, Build Up and Emotional Cycle

Reality will always win, but are you really on its team?

The ascension to Hope, on this project, can be attributed to an understanding of and a dedication to the five Scrum Values (Openness, Focus, Commitment, Respect, and Courage), followed by a more rapid iteration strategy, with frequent mini-releases during Sprint 6. Since most core functionality was solidly in place by this time, it became possible to squash larger numbers of site-wide bugs by relating them to the current sprint’s stories. This resulted in those stories shipping as more highly polished than was possible in previous sprints, while at the same time further iterating on stories from past efforts. Also, to increase awareness, and thereby project agility, during the final sprint, all Accepted Backlog Items were released and reviewed, as they were completed. The result was a highly collaborative finishing of the final shippable increment. At the close of Sprint 6, there were zero critical and only 3 moderate issues. The final Sprint/Project review had only 3 support questions.

screenshots

The project used its remaining time until launch day running extensive QA, with the LCM Continuous Delivery team making adjustments, finally launching as arguably the most impressive Drupal 8 site to ship within a year of Drupal 8’s initial release, and, most importantly, in time for Rainforest Alliance’s major end-of-year donation campaign. The site delivers on its promise to showcase the Rainforest Alliance’s exciting and informative messages and beautiful imagery, and stands as testimony to the efficacy of the agile approach.

Last Call Media is a full-service creative agency developing solutions for partners online and off through innovative strategy, branding, print, and digital design. Last Call Media enjoys work with purpose: building engaging solutions that assist and support organizations working to improve their communities.

*This post was written with the assistance of Rob Bayliss, Jeff Landfried, and Alan Wolf.

Mar 29 2016

(From left: Last Call Media’s Alan Wolf, Kelly Albrecht, and Brianna Doxzen with conference organizer Sven Aas and Keynote speaker Dave Cameron)

Recently, I had the honor of attending the 2016 HighEdWeb New England regional conference. It was a fantastic group of higher education web professionals from across New England, the surrounding regions and a long plane ride (or two) beyond. 

All Drupal All the Time

At Last Call we’re pretty invested in the Drupal community. And when I say pretty invested, I mean we’re completely invested. We live and breathe Drupal and open source every day. We go to conferences all over the world, sharing our own expertise and learning from others in the open source community. This event was our first in the HighEdWeb Association community, and what a fabulous introduction it was!

We Like to Build Stuff

We are made up of insatiably curious and driven problem solvers. We love to tinker. When we have spare time, or are between tasks and need to clear our minds with a fun activity, we design cool GIFs, add neat features to our own site like “making it snow,” or — in HeWebNE’s case — build a Twitter Train.

We were honored to be able to sponsor HeWebNE as their Gold Sponsor to support Dave Cameron’s moving keynote as well as accessibility (sign language interpreters and live captioning) to support Svetlana Kouznetsova’s session “Communication Access for Deaf Students & Employees“. I mention this because Gold Sponsorship also came with the added benefit of a slightly larger exhibitor space. When we were informed of the increased real estate our first thought went to: room for activities

Ahem, I mean, the Twitter Train.

Now, this was not the debut of our Twitter Train. In fact, HeWebNE marked its coming out of retirement. The Twitter Train was previously seen at the United Nations in NYC as part of NYCCamp, a regional Drupal event.

What’s a Twitter Train, Anyway?

Previously, we’d written a program in Python for a Raspberry Pi to hook to a toy train and have it run around the track when triggered by a follow on our Twitter, @LastCallMedia. This was cool, but it meant that participants could only interact with the train once, and where’s the fun in that? For this event we added mentions, so you could follow us on Twitter and mention us as much as you wanted to make the train “choo-choo” as the program triggered the sound effect through an attached speaker. This was immensely more satisfying, or at least it was when we first set it up in the office, as our developers (and I) tweeted to get it to move non-stop for about an hour… because trains.
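
The original program isn’t shown in the post; here’s a rough reconstruction of the idea, assuming the Twitter API of the era via tweepy and a relay on a GPIO pin (credentials, pin number, and timings are all invented):

```python
import time

import tweepy
import RPi.GPIO as GPIO  # available on the Raspberry Pi

TRAIN_PIN = 18  # GPIO pin wired to the train's power relay (illustrative)

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRAIN_PIN, GPIO.OUT)

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

last_seen = None
while True:
    # Poll for new mentions of @LastCallMedia; each one earns a lap.
    for mention in reversed(api.mentions_timeline(since_id=last_seen)):
        last_seen = mention.id
        GPIO.output(TRAIN_PIN, GPIO.HIGH)  # power the track
        time.sleep(10)                     # roughly one lap
        GPIO.output(TRAIN_PIN, GPIO.LOW)
    time.sleep(30)  # stay well inside Twitter's polling rate limits
```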

(The train in action)

What If the Internet Didn’t Exist: The Horror

The internet is crucial, that goes without saying really, but in this case we needed the Raspberry Pi to be connected to the internet to receive the Twitter trigger and run the program. When we got to the conference we were unable to connect, as there were issues within the facilities. Despite the herculean efforts of the conference organizers, first with the ethernet ports and then with our Raspberry Pi’s Wi-Fi dongle, the train wasn’t choo-ing.

A Reroute to Show The World

So it seemed as if the Twitter train was something of a fail. 

Why am I blogging to the world about what some might see as a failure? Because people came together to help. Not just the wonderful organizers, but random attendees, too! We each had a different piece of the puzzle, from knowledge of laying toy train tracks to finding a computer with a working ethernet port. Eventually, Kelly saved the day by setting up a DHCP server on his laptop to give the Raspberry Pi an IP address and to shell into it directly. It was a beautiful coming together of people that spoke so strongly to this particular higher ed community.

We’re Stronger Together

The collaboration and willingness to share is exactly why we do what we do. It’s why we love the open source community, and now it’s why we love the HighEdWeb Association community, too. By lunchtime the Twitter Train was racing around the track, powered by a mini Twitter storm.

(The wonderful HighEdWeb New England Community as seen from above!)

Thank you to everyone that made the Twitter Train and HighEdWeb New England possible. We can’t wait to see you next year!
